Tip |
---|
You can skip this step if your Kubernetes cluster deployment is on a single VM. |
...
When setting up a Kubernetes cluster, the folder /dockerdata-nfs must be shared between all of the Kubernetes worker nodes. This folder is used as a volume by the ONAP pods to share data, so there can only be one copy.
...
On this page we do this by setting up an NFS server on the Kubernetes master node VM, then mounting the exported shared directory on each of the Kubernetes worker node VMs.
These instructions were written using VMs created from an ubuntu-16.04-server-cloudimg-amd64-disk1 image.
...
Any user can run the steps on this page, as all the commands use "sudo".
On the NFS Server VM (Kubernetes Master Node)
The actual /dockerdata-nfs folder will live on the Kubernetes master node VM, which will also run the NFS server that exports this folder. Create the directory as root on the Kubernetes master node VM.
Set up the /dockerdata-nfs Folder
Choose one of the following to create the /dockerdata-nfs folder on this VM:
Use local directory | Run the following command as root: |
---|
...
Use separate volume | Follow the instructions from Create an OpenStack Volume to: |
---|
- create an OpenStack volume
- attach the volume to the Kubernetes master node VM
- mount the attached volume to the directory /dockerdata-nfs on the Kubernetes master node VM
(where the VM instance is the one that you have chosen) |
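The attach-and-mount step for the separate-volume option can be sketched as follows. The device name /dev/vdb is an assumption (attached OpenStack volumes usually appear as the next virtio disk; verify with lsblk), and the privileged commands are left as comments:

```shell
# Assumption: the attached OpenStack volume shows up as /dev/vdb (check with lsblk)
DEV=/dev/vdb
MOUNTPOINT=/dockerdata-nfs
# fstab entry so the volume is remounted automatically after a reboot
FSTAB_LINE="$DEV $MOUNTPOINT ext4 defaults 0 0"
echo "$FSTAB_LINE"
# To apply (requires root; mkfs destroys any existing data on the volume):
#   sudo mkfs.ext4 "$DEV"
#   sudo mkdir -p "$MOUNTPOINT"
#   sudo mount "$DEV" "$MOUNTPOINT"
#   echo "$FSTAB_LINE" | sudo tee -a /etc/fstab
```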
Setup the NFS Server and Export /dockerdata-nfs Folder
Execute the following commands as the ubuntu user.
Code Block |
---|
sudo apt update
sudo apt install nfs-kernel-server
sudo mkdir /export
sudo mkdir /export/dockerdata-nfs
sudo chmod 777 /export
sudo chmod 777 /export/dockerdata-nfs
sudo vi /etc/exports
# append the following line: /dockerdata-nfs *(rw,no_root_squash,no_subtree_check)
sudo vi /etc/fstab
# append the following line: /home/ubuntu/dockerdata-nfs /dockerdata-nfs none bind 0 0
sudo service nfs-kernel-server restart |
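The two vi edits above can also be done non-interactively. A minimal sketch, with the privileged apply steps left as comments; note the `*` export is wide open, matching the steps above — consider restricting it to your cluster's subnet (e.g. 10.0.0.0/24) in a real deployment:

```shell
# Lines to append, exactly as in the vi-based steps above
EXPORT_LINE='/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)'
BIND_LINE='/home/ubuntu/dockerdata-nfs /dockerdata-nfs none bind 0 0'
echo "$EXPORT_LINE"
echo "$BIND_LINE"
# To apply (requires root):
#   echo "$EXPORT_LINE" | sudo tee -a /etc/exports
#   echo "$BIND_LINE"   | sudo tee -a /etc/fstab
#   sudo service nfs-kernel-server restart
```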
...
To verify that the NFS server is running:
Expand |
---|
$ ps -ef | grep nfs |
On the other VMs (Kubernetes Worker Nodes)
Mount the /dockerdata-nfs Folder
On each of the Kubernetes worker nodes, mount the /dockerdata-nfs folder that is served from the Kubernetes master node VM. Run the following as the ubuntu user.
Code Block |
---|
sudo apt update
sudo apt install nfs-common -y
sudo mkdir /dockerdata-nfs
sudo chmod 777 /dockerdata-nfs

# Option 1:
sudo mount -t nfs -o proto=tcp,port=2049 <hostname or IP address of NFS server>:/dockerdata-nfs /dockerdata-nfs
sudo vi /etc/fstab
# append the following line: <hostname or IP address of NFS server>:/dockerdata-nfs /dockerdata-nfs nfs auto 0 0

# Option 2: (verified on Ubuntu 16.04 AWS EC2 EBS volume)
sudo vi /etc/fstab
# append the following line: <hostname or IP address of NFS server>:/dockerdata-nfs /dockerdata-nfs nfs auto 0 0
# then run the following command
sudo mount -a |
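The client-side fstab entry used by both options can be sketched like this; 10.0.0.1 is a placeholder for your master node's hostname or IP address, and the privileged apply steps are left as comments:

```shell
# Placeholder: replace with the master node's hostname or IP address
NFS_SERVER=10.0.0.1
# fstab entry so the share is remounted automatically after a reboot
FSTAB_LINE="$NFS_SERVER:/dockerdata-nfs /dockerdata-nfs nfs auto 0 0"
echo "$FSTAB_LINE"
# To apply (requires root):
#   echo "$FSTAB_LINE" | sudo tee -a /etc/fstab
#   sudo mount -a
```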
Tips
...
Verify it:
Touch a file inside the /dockerdata-nfs directory on the Kubernetes master node and check that the same file appears under /dockerdata-nfs on all the Kubernetes worker nodes.
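This check can be sketched as follows; worker1 and worker2 are placeholder hostnames for your actual worker nodes, and the commands that touch the cluster are left as comments:

```shell
# Marker file used for the cross-node check (name is arbitrary)
MARKER=/dockerdata-nfs/nfs-test-file
echo "$MARKER"
# On the master node:
#   touch "$MARKER"
# On each worker node (or via ssh from the master):
#   for w in worker1 worker2; do ssh "$w" "ls -l $MARKER"; done
# Clean up afterwards:
#   rm "$MARKER"
```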
Unmount the shared directory
Use the lazy (-l) option on Kubernetes worker nodes to force unmount the mount point.
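A minimal sketch of the lazy unmount; the lazy option detaches the mount point immediately and finishes the cleanup once the mount is no longer busy, which is useful when the NFS server is unreachable:

```shell
# Command to run (as root) on each worker node that has the share mounted;
# -l (lazy) detaches now and cleans up when the mount is no longer in use.
UMOUNT_CMD="umount -l /dockerdata-nfs"
echo "sudo $UMOUNT_CMD"
```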
...