...
(OpenStack will tell you which device path your volume is attached at when you attach it to your instance).
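If you prefer the CLI to the Horizon dashboard, the volume can be created and attached along these lines; this is only a sketch, and the size, volume name, server name and device path below are placeholder values for your own environment:

# Create a raw volume for GlusterFS (size and name are examples only)
openstack volume create --size 20 gluster-vol-node1

# Attach it to one of your Kubernetes node instances;
# OpenStack reports the device path it was attached at (e.g. /dev/vdb)
openstack server add volume k8s-node-1 gluster-vol-node1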
<<TODO: screenshots or video on adding storage>>
Here's a quick video on how to add volumes to OpenStack:
Adding a volume to Openstack - Demo
GlusterFS setup
Once you have your infrastructure running and your raw volumes attached to your Kubernetes nodes, deploy your Heketi / GlusterFS infrastructure. You can use the scripts included in OOM to automate this in your lab (not recommended for a production install).
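Before running the scripts it is worth confirming, on each node, that the raw block device is visible and still unformatted; a quick check, assuming /dev/vdb is the device path OpenStack reported for your volume:

# List block devices; the Gluster volume should show no filesystem or mountpoint
lsblk -f

# Inspect the raw device directly
fdisk -l /dev/vdb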
Demo Video
You can watch the execution of the script described in the instructions below here:
GlusterFS Script Demo Video
Downloading the scripts
There are two scripts contained within the OOM resources: deploy_glusterfs.bash, which sets up your initial GlusterFS infrastructure and also deploys a Heketi pod (the REST API interface used to manage your GlusterFS cluster), and cleanup_gluster.bash, which tears the GlusterFS deployment back down.
...
Grab the OOM artifacts from Gerrit (we did this on the Rancher master node in our deployment).
Currently the scripts are available by cherry-picking the following changeset: https://gerrit.onap.org/r/#/c/67049/
e.g. Run this after downloading the Casablanca OOM:
git fetch https://gerrit.onap.org/r/oom refs/changes/49/67049/1 && git cherry-pick FETCH_HEAD
git clone http://gerrit.onap.org/r/oom
#git fetch https://gerrit.onap.org/r/oom refs/changes/49/67049/1 && git cherry-pick FETCH_HEAD
cd oom/kubernetes
cd onap
cd contrib/resources/scripts
ls -l
total 16
-rw-r--r-- 1 root root 1612 Sep 11 16:18 cleanup_gluster.bash
-rw-r--r-- 1 root root 9956 Sep 11 18:51 deploy_glusterfs.bash
...
iptables -N HEKETI
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m multiport --dports 49152:49251 -j ACCEPT
service iptables save
modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool
lsmod | egrep 'dm_snapshot|dm_mirror|dm_thin_pool'
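Note that the block above only creates the HEKETI chain and loads the kernel modules for the current boot. Depending on your base image you may also want to reference the chain from INPUT and make the module loading persistent; a sketch, assuming a systemd-based host with /etc/modules-load.d:

# Jump to the HEKETI chain from INPUT so the new rules actually take effect
iptables -A INPUT -j HEKETI
service iptables save

# Load the device-mapper modules on every boot
cat <<EOF > /etc/modules-load.d/glusterfs.conf
dm_snapshot
dm_mirror
dm_thin_pool
EOF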
<<TODO: video on running script>>
Validation
Once the script has finished, check that you have a valid StorageClass defined and GlusterFS/Heketi pods running on each Kubernetes node:
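For example, something like the following (a sketch; the pod names and the glusterfs-sc StorageClass name will match whatever deploy_glusterfs.bash created in your environment):

# Confirm the StorageClass created by the script exists
kubectl get storageclass

# Confirm the GlusterFS and Heketi pods are running, one GlusterFS pod per node
kubectl get pods -o wide --all-namespaces | egrep -i 'glusterfs|heketi'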
...
vim ~/oom/kubernetes/onap/resources/environments/values_global_gluster.yaml
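This environment file is what points the ONAP deployment at the Gluster StorageClass, so it needs to be passed in when you deploy. A sketch of how it might be referenced, assuming the OOM helm deploy plugin and a release named dev (the exact command depends on your OOM version):

helm deploy dev local/onap --namespace onap \
  -f ~/oom/kubernetes/onap/resources/environments/values_global_gluster.yaml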
...
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
dev-appc-data-dev-appc-0          Bound    pvc-472a9868-b5e7-11e8-ab3b-028b3e95b074   2Gi        RWO            glusterfs-sc   2h
dev-appc-db-mysql-dev-appc-db-0   Bound    pvc-4724df2b-b5e7-11e8-ab3b-028b3e95b074   2Gi        RWO            glusterfs-sc   2h
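For reference, a listing like the one above can be produced at any time with (assuming the onap namespace used in this deployment):

kubectl get pvc -n onap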
Ensure your pods are running and bound to your persistent volume:
e.g.
kubectl describe pod dev-appc-db-0 --namespace onap
Make sure the pod shows "Status: Running" and that it is using a persistent volume claim associated with your StorageClass:
Name: dev-appc-db-0
Namespace: onap
Node: k8s-steve-1/10.0.0.29
Start Time: Tue, 11 Sep 2018 17:23:09 +0000
...snip...
Status: Running
...snip...
Volumes:
dev-appc-db-mysql:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: dev-appc-db-mysql-dev-appc-db-0
ReadOnly: false
...snip...
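To double-check that this claim really is served by Gluster, you can also look at the PVC and its bound PV directly; a quick sketch, assuming the onap namespace and the glusterfs-sc StorageClass from earlier:

# The claim should report STORAGECLASS glusterfs-sc and STATUS Bound
kubectl get pvc dev-appc-db-mysql-dev-appc-db-0 -n onap

# The backing persistent volumes should also reference glusterfs-sc
kubectl get pv | grep glusterfs-sc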