Development Cluster Configuration

The Development cluster is deployed using [[K3s - Kubernetes Simplified]].
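For orientation, the upstream K3s installer brings up a single-node server and ships a bundled kubectl. This is a generic sketch only, not the site-specific procedure documented on the [[K3s - Kubernetes Simplified]] page.
<pre>
# Install the K3s server with the upstream installer (generic example;
# site-specific flags are documented on the K3s - Kubernetes Simplified page).
curl -sfL https://get.k3s.io | sh -

# K3s writes its kubeconfig here; point kubectl at it (or use "k3s kubectl").
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes -o wide
</pre>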
 
These packages form the basic functionality of the development cluster.  The deployments need to preserve the IP address assignments listed below; if an assignment changes, review and update the IP assignments in the DNS server on the firewall.

Scripts & config files are checked into gitlab under the Kubernetes group project listed.
{| class="wikitable"
|-
! activity !! gitlab !! script/procedures/config !! IP !! hostname(s)
|-
| NVIDIA device plugin || || https://github.com/NVIDIA/k8s-device-plugin || ||
|-
| GitLab - Helm deployment || [https://gitlab.dev.williams.localnet/admin/projects/k8s/gitlab Kubernetes/gitlab] || kubernetes/gitlab/helm || 10.0.0.203 || gitlab.dev.williams.localnet
|}
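As a rough sketch of how the two entries above are typically applied (the real chart values live in the kubernetes/gitlab/helm project; the plugin version tag and the gitlab namespace below are illustrative assumptions):
<pre>
# NVIDIA device plugin: apply the static manifest published in the upstream repo
# (choose the release tag that matches the cluster; v0.14.1 is only an example).
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.1/nvidia-device-plugin.yml

# GitLab: deploy the official Helm chart, using the values file kept in the
# Kubernetes/gitlab project (kubernetes/gitlab/helm) for the site settings.
helm repo add gitlab https://charts.gitlab.io
helm repo update
helm upgrade --install gitlab gitlab/gitlab \
    --namespace gitlab --create-namespace \
    -f values.yaml
</pre>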


=== Storage ===
The development cluster uses Rook/Ceph for its persistent storage.  The storage components are arranged as follows:
{| class="wikitable"
|-
! component !! system !! storageclass !! storage !! size
|-
| Storage Server || storage1 || rook-ceph || local 5x4TB drives || 20TB (ceph)
|}
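Workloads request space from this pool by naming the storageclass in a PersistentVolumeClaim. A minimal sketch, assuming the class is exposed under the name ''rook-ceph'' shown above (Rook installs often name it rook-ceph-block or rook-cephfs instead); the claim name, namespace, and size are placeholders:
<pre>
# Create a test PVC against the Rook/Ceph storageclass from the table above.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph
  resources:
    requests:
      storage: 10Gi
EOF

# Confirm the claim binds to a Ceph-backed volume.
kubectl get pvc scratch-pvc -n default
</pre>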
Systems that require access to both the development filesystem ('''/workspace''') and the production filesystem ('''/shared''') require a [[BeeGFS Installation#Mounting multiple filesystems on the same client|special client configuration]].
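The full procedure is on the [[BeeGFS Installation]] page; as a rough sketch, a client mounts more than one BeeGFS filesystem by listing each mount point with its own client config file in /etc/beegfs/beegfs-mounts.conf (the file names below are placeholders):
<pre>
# /etc/beegfs/beegfs-mounts.conf holds one "<mountpoint> <client config>" pair
# per line; each config names its own sysMgmtdHost and a distinct
# connClientPortUDP so the two client instances do not collide.
cat /etc/beegfs/beegfs-mounts.conf
# /workspace /etc/beegfs/beegfs-client-workspace.conf
# /shared    /etc/beegfs/beegfs-client-shared.conf

# Reload the client after editing the mount list.
systemctl restart beegfs-client
</pre>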
=== Dashboard Token ===
Obtain the current dashboard token with this command:
<pre>
kubectl -n kube-system describe secrets \
    `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` \
    | awk '/token:/ {print $2}'
</pre>
The current Development cluster dashboard token is:
    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVycm9sZS1hZ2dyZWdhdGlvbi1jb250cm9sbGVyLXRva2VuLTd0djljIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXJyb2xlLWFnZ3JlZ2F0aW9uLWNvbnRyb2xsZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwNmI3ZmRhNy0xNmMxLTExZTktOTM4Yi0wMDAxNmM2NmIzMDkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3RlcnJvbGUtYWdncmVnYXRpb24tY29udHJvbGxlciJ9.JxxEqoMKAbcm8WA0laeMIQj5ul0ciG1Am2oMnYQkV_MBKtEFS6FrXlRlVVpajRk-A8CeD7KQLQv0M5-fzGsER68-MPzu7JpSE2qbQXzCEbdz__MxfAhOoF1gujzpQZJKYMbK5xbsKhWrII-rLZ_AXqvYbpdgZdyUQrey8CiHPJA3PO7lTR8hf-c1QOU82v1prdjWzjAss1FK2mazISyzdOmnMYMNqARiEKAMqJx2d7iesnlFUPHA7Wff-Xot4X3WsFM3yxeOcJXsFGa3EVgTroXVdkKuqSx2fMGFckXyX6bF_nVrb2wH863GR99sl2TthdKZAuGqwRr-K2wirtNiIw
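With the token in hand, the dashboard can be reached through an API-server proxy. A sketch assuming the dashboard runs as the kubernetes-dashboard service in the kube-system namespace (adjust if it was installed elsewhere); on kubectl 1.24 and newer a short-lived token can also be minted directly:
<pre>
# Newer clusters (kubectl >= 1.24): mint a short-lived token instead of reading a secret.
kubectl -n kube-system create token clusterrole-aggregation-controller

# Proxy the API server to localhost, then open the dashboard URL in a browser
# and paste the token at the login prompt.
kubectl proxy &
xdg-open "http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
</pre>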
=== Kubernetes Node Join Command ===
<pre>
kubeadm join 10.1.0.10:6443 --token qmbipv.zu9a88gbg81on5rv \
    --discovery-token-ca-cert-hash sha256:244c33a3b6ede007ad585ff9d0b608a9598bb17013195e162e819e9553755b590 \
    --ignore-preflight-errors Swap --node-name=`hostname -s`
</pre>
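The command above applies to a kubeadm-managed control plane. If the node is instead joining the K3s-based deployment described at the top of this page, agents join with the server URL and node token; a generic sketch (the server hostname is a placeholder):
<pre>
# On the K3s server, read the cluster join token.
sudo cat /var/lib/rancher/k3s/server/node-token

# On the new node, install the K3s agent pointed at the server
# (substitute the real server URL and the token read above).
curl -sfL https://get.k3s.io | \
    K3S_URL=https://k3s-server.dev.williams.localnet:6443 \
    K3S_TOKEN=<node-token> sh -
</pre>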
