...
Once you have your Rancher and Kubernetes environment running on OpenStack, you will need to add raw volumes to each Kubernetes node from the OpenStack console.
...
In this document, we assume that the volumes were attached at /dev/vdb.
(OpenStack will tell you at which device path your volume is attached when you attach it to your instance.)
Here's a quick video on how to add volumes in OpenStack:
Adding a volume to Openstack - Demo
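If you prefer the CLI over the console, the volumes can also be created and attached with the standard OpenStack client. This is a sketch, assuming your `openstack` CLI is configured for your tenant; the volume name, size, and instance name below are illustrative:

```shell
# Create a raw volume (size in GiB; name is illustrative)
openstack volume create --size 20 gluster-vol-node1

# Attach it to a Kubernetes node VM; OpenStack reports the device path
# (note: the --device hint may be ignored by some hypervisors)
openstack server add volume k8s-node-1 gluster-vol-node1 --device /dev/vdb

# Confirm the attachment and the actual device path
openstack volume show gluster-vol-node1 -c attachments
```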
GlusterFS setup
Once you have your infrastructure running and your raw volumes attached to your Kubernetes nodes, deploy your Heketi/GlusterFS infrastructure. You can use the scripts included in OOM to automate this in your lab (not recommended for a production install).
Demo Video
You can watch the script being executed, following the instructions below, here:
GlusterFS Script Demo Video
Downloading the scripts
There are two scripts within the OOM resources. The first, deploy_glusterfs.bash, sets up your initial GlusterFS infrastructure; it also deploys a Heketi pod, which is the REST API interface used to manage your GlusterFS cluster.
The second, cleanup_gluster.bash, is a cleanup script you can use to tear down your GlusterFS infrastructure when you are done, or when you would like to re-deploy a clean GlusterFS infrastructure.
Grab the OOM artifacts from Gerrit (we did this on the Rancher master node in our deployment).
Currently the scripts are available by cherry-picking the following changeset: https://gerrit.onap.org/r/#/c/67049/
e.g., run this after cloning the OOM repository:

```
git fetch https://gerrit.onap.org/r/oom refs/changes/49/67049/1 && git cherry-pick FETCH_HEAD
```
```
git clone http://gerrit.onap.org/r/oom
#git fetch https://gerrit.onap.org/r/oom refs/changes/49/67049/1 && git cherry-pick FETCH_HEAD
cd oom/kubernetes
cd contrib/resources/scripts
ls -l
```
```
total 16
-rw-r--r-- 1 root root 1612 Sep 11 16:18 cleanup_gluster.bash
-rw-r--r-- 1 root root 9956 Sep 11 18:51 deploy_glusterfs.bash
```
Running the script without arguments prints its usage:

```
bash deploy_glusterfs.bash
```
```
deploy_glusterfs.bash: usage: deploy_glusterfs.bash <dev path> <namespace>
e.g. deploy_glusterfs.bash /dev/vdb onap
This script deploys GlusterFS on Kubernetes on OpenStack, to be used for Persistent Volumes
```
Then run the script with your device path and target namespace:

```
bash deploy_glusterfs.bash /dev/vdb onap
```
The script will prompt you to hit "Enter" periodically so you can confirm there are no errors. There is minimal error checking in this script, so pay attention, especially when re-running it after a previous GlusterFS deployment.
The script will start off by directing you to run some commands manually on the other Kubernetes worker nodes (OpenStack VMs):
```
iptables -N HEKETI
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m multiport --dports 49152:49251 -j ACCEPT
service iptables save
modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool
lsmod | egrep 'dm_snapshot|dm_mirror|dm_thin_pool'
```
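On each worker node, you can then confirm the firewall rules were added (the `lsmod` at the end of the block above should already have shown the three dm_* modules). A quick check, assuming the HEKETI chain was created as shown:

```shell
# List the HEKETI chain to confirm the GlusterFS/Heketi port rules are in place
iptables -L HEKETI -n --line-numbers
```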
Validation
Once the script is finished, check to make sure you have a valid StorageClass defined, and GlusterFS/Heketi Pods running on each Kubernetes node:
(Pod names and IP addresses will vary)
```
kubectl get pods --namespace onap
kubectl describe sc --namespace onap
kubectl get service --namespace onap
```
e.g.:
```
NAME                      READY     STATUS    RESTARTS   AGE
glusterfs-cxqc2           1/1       Running   0          4d
glusterfs-djq4x           1/1       Running   0          4d
glusterfs-t7cj5           1/1       Running   0          4d
glusterfs-z4vk6           1/1       Running   0          4d
heketi-5876bd4875-hzw2d   1/1       Running   0          4d

Name:            glusterfs-sc
IsDefaultClass:  No
Annotations:     <none>
Provisioner:     kubernetes.io/glusterfs
Parameters:      resturl=http://10.43.185.167:8080,restuser=,restuserkey=
Events:          <none>

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)
heketi                     ClusterIP   10.43.185.167   <none>        8080/TCP   4d
heketi-storage-endpoints   ClusterIP   10.43.227.203   <none>        1/TCP      4d
```
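Optionally, you can verify dynamic provisioning end-to-end before deploying ONAP by creating a small test claim against the new storage class and checking that it binds. A minimal sketch; the claim name is illustrative:

```shell
# Create a 1Gi test claim against the glusterfs-sc storage class
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-test-pvc
  namespace: onap
spec:
  storageClassName: glusterfs-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should reach STATUS "Bound" within a minute or so
kubectl get pvc gluster-test-pvc --namespace onap

# Clean up the test claim
kubectl delete pvc gluster-test-pvc --namespace onap
```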
Deploy ONAP with OOM
You can choose any of the documented methods on this site or on onap.readthedocs.io, but here is a brief example of how you can deploy ONAP with GlusterFS.
Note: any persistent storage technology can be used in the examples going forward; just make sure you have a storageClass already defined.
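Whichever storage backend you use, you can confirm the storage class exists before deploying (glusterfs-sc is the class created by the script above; substitute your own name):

```shell
kubectl get sc
# The class you plan to reference in your values file should be listed, e.g.:
# NAME           PROVISIONER
# glusterfs-sc   kubernetes.io/glusterfs
```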
Edit / validate your values.yaml file
There is a custom values file, values_global_gluster.yaml, in ~/oom/kubernetes/onap/resources/environments that can be used, or you can edit the master values.yaml file at ~/oom/kubernetes/onap/values.yaml.
We will assume that you are doing the former.
Ensure that you have your storageClass defined in the global section of your values file:
```
vim ~/oom/kubernetes/onap/resources/environments/values_global_gluster.yaml
```
Ensure you have your storage class defined within global:persistence section:
```
global:
  # Change to an unused port prefix range to prevent port conflicts
  ...snip...
  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    storageClass: glusterfs-sc
    mountPath: /dockerdata-nfs
```
Enable any components you want to deploy. In the values_global_gluster.yaml file, they are disabled by default. For example, to enable APPC:
```
aaf:
  enabled: false
aaf-sms:
aai:
  enabled: false
appc:
  enabled: true
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html
    openStackServiceTenantName: default
    openStackDomain: default
    openStackUserName: admin
    openStackEncryptedPassword: admin
clamp:
  enabled: false
cli:
  enabled: false
```
Once you have properly edited your values file, deploy ONAP:
```
cd ~/oom/kubernetes
helm upgrade -i dev local/onap --namespace onap -f onap/resources/environments/values_global_gluster.yaml
```
Wait until components are up, and validate that your persistent volumes are using your persistent storageClass:
```
kubectl get pvc --namespace onap
```
You should see "glusterfs-sc" (or whatever storageClass name you chose) under the STORAGECLASS column:
```
NAME                              STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
dev-appc-data-dev-appc-0          Bound     pvc-472a9868-b5e7-11e8-ab3b-028b3e95b074   2Gi        RWO            glusterfs-sc   2h
dev-appc-db-mysql-dev-appc-db-0   Bound     pvc-4724df2b-b5e7-11e8-ab3b-028b3e95b074   2Gi        RWO            glusterfs-sc   2h
```
Ensure your pods are running and bound to your persistent volumes:
e.g.
```
kubectl describe pod dev-appc-db-0 --namespace onap
```
Make sure the pod's status is "Running" and that it is using the persistent volume associated with your storageClass:
```
Name:         dev-appc-db-0
Namespace:    onap
Node:         k8s-steve-1/10.0.0.29
Start Time:   Tue, 11 Sep 2018 17:23:09 +0000
...snip...
Status:       Running
...snip...
Volumes:
  dev-appc-db-mysql:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  dev-appc-db-mysql-dev-appc-db-0
    ReadOnly:   false
...snip...
```