This guide is a draft: it may fall out of sync with the installation procedure at any time and might not be updated here.
Infrastructure Setup
Rancher
This guide assumes a Rancher deployment, but any deployment with multiple (3+) Kubernetes nodes that has Docker and Helm available will work.
This guide is based on a deployment in the Windriver ONAP lab.
You can follow that guide to set up your infrastructure, but if you do, skip the NFS setup section ("Setting up an NFS share for Multinode Kubernetes Clusters").
Storage
Once you have your Rancher and Kubernetes environment running in your OpenStack environment, you will need to add raw volumes to each Kubernetes node from the OpenStack console. In this document, we assume the volumes are attached at /dev/vdb.
(OpenStack reports the device path when you attach a volume to an instance.)
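Before running the GlusterFS deploy script, it is worth confirming on each node that the raw volume actually shows up at the expected device path. A minimal sketch (the /dev/vdb path and the helper name are assumptions for illustration, not part of OOM):

```shell
#!/bin/bash
# Check that a raw block device is attached on this node.
# /dev/vdb is an assumption -- use the path OpenStack reported
# when you attached the volume to your instance.

device_attached() {
  # Reads `lsblk -o NAME -n` style output on stdin and checks
  # for the bare device name (e.g. "vdb").
  local dev="${1#/dev/}"
  grep -qw "$dev"
}

# Usage on a node:
#   lsblk -o NAME -n | device_attached /dev/vdb && echo "vdb attached"
```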
<<TODO: screenshots or video on adding storage>>
GlusterFS setup
Once your infrastructure is running and your raw volumes are attached to your Kubernetes nodes, deploy your Heketi/GlusterFS infrastructure. You can use the scripts included in OOM to automate this in your lab (not recommended for a production install).
Downloading the scripts
There are two scripts in the OOM resources. The first, deploy_glusterfs.bash, sets up your initial GlusterFS infrastructure; it also deploys a Heketi pod, which provides the REST API used to manage your GlusterFS cluster.
The second, cleanup_gluster.bash, cleans up your GlusterFS infrastructure when you are done or would like to redeploy a clean GlusterFS infrastructure.
Grab the OOM artifacts from Gerrit (we did this on the Rancher master node in our deployment):
git clone http://gerrit.onap.org/r/oom
cd oom/kubernetes
cd onap/resources/scripts
ls -l
total 16
-rw-r--r-- 1 root root 1612 Sep 11 16:18 cleanup_gluster.bash
-rw-r--r-- 1 root root 9956 Sep 11 18:51 deploy_glusterfs.bash
bash deploy_glusterfs.bash
deploy_glusterfs.bash: usage: deploy_gluster.bash <dev path> namespace
e.g. deploy_glusterfs.bash /dev/vdb onap
This script deploys a GlusterFS kubernetes on OpenStack to be used as Persistent Volumes
bash deploy_glusterfs.bash /dev/vdb onap
The script will prompt you to hit "Enter" every once in a while to let you confirm there are no errors. There is minimal error checking in this script, so pay attention, especially when you are re-running the script after previously deploying GlusterFS.
The script starts by directing you to run some commands manually on the other Kubernetes worker nodes (OpenStack VMs):
iptables -N HEKETI
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m multiport --dports 49152:49251 -j ACCEPT
service iptables save
modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool
lsmod | egrep 'dm_snapshot|dm_mirror|dm_thin_pool'
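If you want to double-check that the three device-mapper modules loaded correctly on each worker node, a small helper like the following (the function name is ours, not part of OOM) can report any that are missing from lsmod output:

```shell
#!/bin/bash
# Report which of the kernel modules required by GlusterFS are not yet
# loaded. Reads `lsmod` output on stdin so it can be piped on any worker:
#   lsmod | missing_dm_modules

missing_dm_modules() {
  local loaded missing=""
  loaded="$(cat)"
  for mod in dm_snapshot dm_mirror dm_thin_pool; do
    echo "$loaded" | grep -qw "$mod" || missing="$missing $mod"
  done
  # Prints nothing when all three modules are loaded.
  echo "$missing" | xargs
}
```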
<<TODO: video on running script>>
Validation
Once the script is finished, check to make sure you have a valid StorageClass defined, and GlusterFS/Heketi Pods running on each Kubernetes node:
(Pod names and IP addresses will vary)
kubectl get pods --namespace onap
kubectl describe sc --namespace onap
kubectl get service --namespace onap
e.g.:

NAME                      READY  STATUS   RESTARTS  AGE
glusterfs-cxqc2           1/1    Running  0         4d
glusterfs-djq4x           1/1    Running  0         4d
glusterfs-t7cj5           1/1    Running  0         4d
glusterfs-z4vk6           1/1    Running  0         4d
heketi-5876bd4875-hzw2d   1/1    Running  0         4d

Name:            glusterfs-sc
IsDefaultClass:  No
Annotations:     <none>
Provisioner:     kubernetes.io/glusterfs
Parameters:      resturl=http://10.43.185.167:8080,restuser=,restuserkey=
Events:          <none>

NAME                      TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
heketi                    ClusterIP  10.43.185.167  <none>       8080/TCP  4d
heketi-storage-endpoints  ClusterIP  10.43.227.203  <none>       1/TCP     4d
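Beyond checking pod status, you can also probe Heketi directly: Heketi serves a simple /hello health endpoint on its REST port. The sketch below pulls the resturl out of the StorageClass description; the helper is illustrative only and not part of the OOM scripts:

```shell
#!/bin/bash
# Extract the Heketi REST URL from `kubectl describe sc` output so it
# can be probed with curl. The StorageClass name glusterfs-sc matches
# the one created by the deploy script.

heketi_url() {
  # Reads `kubectl describe sc glusterfs-sc` output on stdin and prints
  # the resturl value from the Parameters line.
  sed -n 's/.*resturl=\([^,]*\).*/\1/p'
}

# On the kubectl host:
#   url=$(kubectl describe sc glusterfs-sc --namespace onap | heketi_url)
#   curl -s "$url/hello"
```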
Deploy ONAP with OOM
You can choose any of the documented methods on this site or on onap.readthedocs.io, but here is a brief example of how you can deploy ONAP with GlusterFS.
Note: any persistent storage technology can be used in the example going forward; just make sure you have a storageClass already defined.
Edit / validate your values.yaml file
There is a custom values file, values_global_gluster.yaml, in ~/oom/kubernetes/onap/resources/environments that can be used, or you can edit the master values.yaml file at ~/oom/kubernetes/onap/values.yaml.
We will assume that you are doing the former.
Ensure that you have your storageClass defined in the global section of your values file:
vim ~/oom/kubernetes/onap/resources/environments/values_global_gluster.yaml
The storage class is defined within the persistence section under global:
global:
  # Change to an unused port prefix range to prevent port conflicts
  ...snip...
  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    storageClass: glusterfs-sc
    mountPath: /dockerdata-nfs
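As a quick sanity check that the values file really carries the storage class you expect, a small awk helper can read it back. This assumes the simple two-level indentation shown above; it is a sketch, not a general YAML parser:

```shell
#!/bin/bash
# Print the storageClass configured under global -> persistence in a
# values file laid out like the snippet above (2-space, then 4-space
# indentation). Illustrative helper only.

values_storage_class() {
  awk '/^  persistence:/ {inp=1; next}
       inp && /storageClass:/ {print $2; exit}' "$1"
}

# Usage:
#   values_storage_class ~/oom/kubernetes/onap/resources/environments/values_global_gluster.yaml
```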
Enable any components you want to deploy. In the values_global_gluster.yaml file, they are disabled by default. E.g. to enable APPC:
aaf:
  enabled: false
aaf-sms:
aai:
  enabled: false
appc:
  enabled: true
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html
    openStackServiceTenantName: default
    openStackDomain: default
    openStackUserName: admin
    openStackEncryptedPassword: admin
clamp:
  enabled: false
cli:
  enabled: false
Once you have properly edited your values file, deploy ONAP:
cd ~/oom/kubernetes
helm upgrade -i dev local/onap --namespace onap -f onap/resources/environments/values_global_gluster.yaml
Wait until components are up, and validate that your persistent volumes are using your persistent storageClass:
kubectl get pvc --namespace onap
You should see "glusterfs-sc" (or whatever storageClass name you chose) under the STORAGECLASS column:
NAME                             STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
dev-appc-data-dev-appc-0         Bound   pvc-472a9868-b5e7-11e8-ab3b-028b3e95b074  2Gi       RWO           glusterfs-sc  2h
dev-appc-db-mysql-dev-appc-db-0  Bound   pvc-4724df2b-b5e7-11e8-ab3b-028b3e95b074  2Gi       RWO           glusterfs-sc  2h
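If you would rather wait for all claims to bind than eyeball the output, a small helper can count the PVCs that are not yet Bound (the function name is illustrative; it just parses the STATUS column of kubectl output):

```shell
#!/bin/bash
# Count PersistentVolumeClaims that are not yet Bound, from
# `kubectl get pvc --no-headers` output on stdin.

unbound_pvcs() {
  # The second column of `kubectl get pvc` is STATUS; count every
  # row whose status is anything other than Bound.
  awk '$2 != "Bound" {n++} END {print n+0}'
}

# e.g. poll until everything is bound:
#   while [ "$(kubectl get pvc --namespace onap --no-headers | unbound_pvcs)" -gt 0 ]; do
#     sleep 30
#   done
```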