Development Cluster Configuration
Revision as of 23:26, 7 August 2019
These packages provide the basic functionality of the development cluster. Install them in the order listed to preserve the IP address assignments; if the order is changed (or a package is left out), you may need to review and modify the IP assignments in the DNS server on the firewall.
Scripts and config files are checked into GitLab under the Kubernetes group project listed.
| activity | gitlab | script/procedures/config | IP | hostname(s) |
|---|---|---|---|---|
| BeeGFS Installation | | install the parallel filesystem components on controller & nodes to support the /workspace filesystem | | |
| NVIDIA device plugin | | https://github.com/NVIDIA/k8s-device-plugin | | |
| Dynamic Provisioning | k8s-admin | (k8s-admin wiki) | | |
| Harbor Registry | k8s-admin | | 10.0.0.201 | harbor-dev.williams.localnet |
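Because the install order drives the IP assignments, a quick sanity check after installation is to confirm that the DNS server on the firewall resolves each hostname to the expected address. For example, for the Harbor registry entry in the table (hostname and IP taken from the row above; requires access to the cluster's DNS):

```shell
# should resolve to 10.0.0.201 per the table above
host harbor-dev.williams.localnet
```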
Storage
The development cluster depends on the /workspace filesystem for its persistent storage. The BeeGFS components are installed as shown here:
| component | system | location | storage | size |
|---|---|---|---|---|
| Management Server | controller | /beegfs/beegfs-mgmtd | local SSD | ~200G (shared with OS) |
| Metadata Server | controller | /beegfs/beegfs-meta | local SSD | ~200G (shared with OS) |
| Storage Server | controller | /ws_data/beegfs/beegfs-storage | mounted from EqualLogic array | 7.9T |
| Storage Server | storage1 | /ws_data_2 | local 5x4TB RAID5 array | 15T |
Systems that require access to both the development filesystem (/workspace) and the production filesystem (/shared) require a special client configuration (see BeeGFS Installation: Mounting multiple filesystems on the same client).
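A minimal sketch of such a dual-mount client setup, using the standard beegfs-mounts.conf mechanism (one mountpoint plus client config file per line); the config file names below are illustrative, not taken from this cluster's repository:

```
# /etc/beegfs/beegfs-mounts.conf
# format: <mountpoint> <client config file>
/workspace /etc/beegfs/beegfs-client-workspace.conf
/shared    /etc/beegfs/beegfs-client-shared.conf
```

Each per-filesystem client config then points at its own management server via its sysMgmtdHost setting.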
Dashboard Token
Obtain the current dashboard token with this command:
kubectl -n kube-system describe secrets \
`kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` \
| awk '/token:/ {print $2}'
The current Development cluster dashboard token is:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVycm9sZS1hZ2dyZWdhdGlvbi1jb250cm9sbGVyLXRva2VuLTd0djljIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXJyb2xlLWFnZ3JlZ2F0aW9uLWNvbnRyb2xsZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwNmI3ZmRhNy0xNmMxLTExZTktOTM4Yi0wMDAxNmM2NmIzMDkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3RlcnJvbGUtYWdncmVnYXRpb24tY29udHJvbGxlciJ9.JxxEqoMKAbcm8WA0laeMIQj5ul0ciG1Am2oMnYQkV_MBKtEFS6FrXlRlVVpajRk-A8CeD7KQLQv0M5-fzGsER68-MPzu7JpSE2qbQXzCEbdz__MxfAhOoF1gujzpQZJKYMbK5xbsKhWrII-rLZ_AXqvYbpdgZdyUQrey8CiHPJA3PO7lTR8hf-c1QOU82v1prdjWzjAss1FK2mazISyzdOmnMYMNqARiEKAMqJx2d7iesnlFUPHA7Wff-Xot4X3WsFM3yxeOcJXsFGa3EVgTroXVdkKuqSx2fMGFckXyX6bF_nVrb2wH863GR99sl2TthdKZAuGqwRr-K2wirtNiIw
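Dashboard tokens like the one above are JWTs, so the payload segment can be decoded (without verifying the signature) to confirm which service account and namespace a token belongs to. A minimal sketch in Python; the example token built below is a throwaway, not the cluster's real token:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying it.

    A JWT is three base64url segments joined by dots:
    header.payload.signature. This inspects only the payload and
    performs no signature verification.
    """
    payload_b64 = token.split(".")[1]
    # base64url decoding requires padding to a multiple of 4
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a throwaway token for illustration (header.payload.signature)
claims = {
    "sub": "system:serviceaccount:kube-system:clusterrole-aggregation-controller",
    "kubernetes.io/serviceaccount/namespace": "kube-system",
}
seg = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = "eyJhbGciOiJSUzI1NiJ9." + seg + ".signature"

print(jwt_payload(token)["sub"])
```

Checking that `sub` names the expected service account is a quick way to verify you copied the right token before pasting it into the dashboard login.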
Kubernetes Node Join Command
kubeadm join 10.0.0.60:6443 --token yz1d07.k1sldcb4xlvs5b0i \
--discovery-token-ca-cert-hash sha256:4fcf88ee9314e63b7697c21957fc2056c8d7303975fd322e2c2c4c54c04e8e20 \
--ignore-preflight-errors Swap --node-name=`hostname -s`
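Bootstrap tokens expire (24 hours by default), so the join command above will eventually stop working. A fresh join command can be generated on the controller; a sketch, assuming a kubeadm version that supports --print-join-command:

```shell
# run on the controller (10.0.0.60); prints a new kubeadm join line
# with a fresh token and the CA cert hash
sudo kubeadm token create --print-join-command

# list existing bootstrap tokens and their remaining TTLs
sudo kubeadm token list
```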