
Tip

You can skip this step if your Kubernetes cluster deployment is on a single VM.


When setting up a Kubernetes cluster, the folder /dockerdata-nfs must be shared among all of the Kubernetes worker nodes. This folder is used as a volume by the ONAP pods to share data, so there must be only one copy of it.


On this page we will do this by setting up an NFS server on the Kubernetes Master and then mounting the shared directory on all Kubernetes worker nodes.

These instructions were written using VMs created from an ubuntu-16.04-server-cloudimg-amd64-disk1 image.

Any user can run the steps on this page, as all the commands use sudo.


Table of Contents


On the NFS Server VM (Kubernetes Master Node)

The actual /dockerdata-nfs folder will live on the Kubernetes Master node, which will also run the NFS server that exports this folder.

Set up the /dockerdata-nfs Folder

Choose one of the following to create the /dockerdata-nfs folder on this VM:

Use local directory

Run the following commands as the ubuntu user:

Code Block
languagebash
#id is ubuntu
sudo mkdir -p /dockerdata-nfs
sudo chmod 777 /dockerdata-nfs
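
Optionally, you can confirm the folder was created with world-writable permissions:

Code Block
languagebash
titleverify folder permissions (optional)
ls -ld /dockerdata-nfs
# expected output looks similar to:
# drwxrwxrwx 2 root root 4096 <date> /dockerdata-nfs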


Use separate volume

Follow the instructions from Create an OpenStack Volume to:

(where the VM Instance is the one that you have chosen)
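
The exact volume steps depend on your OpenStack environment; the sketch below is only an illustration. It assumes the new volume has already been attached to the VM and shows up as /dev/vdb (adjust the device name to match your instance), and that you format it with ext4 and mount it directly at /dockerdata-nfs:

Code Block
languagebash
titleformat and mount a separate volume (example, assumes /dev/vdb)
# format the attached volume (this destroys any existing data on it)
sudo mkfs.ext4 /dev/vdb
# mount it at /dockerdata-nfs
sudo mkdir -p /dockerdata-nfs
sudo mount /dev/vdb /dockerdata-nfs
sudo chmod 777 /dockerdata-nfs
# make the mount persistent across reboots
sudo vi /etc/fstab
# append the following
/dev/vdb  /dockerdata-nfs  ext4  defaults  0  0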

Set up the NFS Server and Export the /dockerdata-nfs Folder

Execute the following commands as the ubuntu user:

Code Block
languagebash
titlenfs server
sudo apt update
sudo apt install nfs-kernel-server

sudo vi /etc/exports
# append the following
/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)

sudo vi /etc/fstab 
# append the following
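# NOTE: the bind mount below assumes the shared data actually lives in
# /home/ubuntu/dockerdata-nfs (e.g. on a separate volume); skip this entry
# if you created /dockerdata-nfs as a plain local directory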
/home/ubuntu/dockerdata-nfs /dockerdata-nfs    none    bind  0  0

sudo service nfs-kernel-server restart
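
Optionally, you can re-read /etc/exports and confirm the share is exported; exportfs and showmount are installed with the NFS packages above:

Code Block
languagebash
titleverify the export (optional)
# re-export everything listed in /etc/exports
sudo exportfs -ra
# list the currently exported directories
sudo exportfs -v
# query the export list as an NFS client would
showmount -e localhost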


Expand
titleAn example of validating that the NFS server is running

$ ps -ef|grep nfs
root 2205 2 0 15:59 ? 00:00:00 [nfsiod]
root 2215 2 0 15:59 ? 00:00:00 [nfsv4.0-svc]
root 13756 2 0 18:19 ? 00:00:00 [nfsd4_callbacks]
root 13758 2 0 18:19 ? 00:00:00 [nfsd]
root 13759 2 0 18:19 ? 00:00:00 [nfsd]
root 13760 2 0 18:19 ? 00:00:00 [nfsd]
root 13761 2 0 18:19 ? 00:00:00 [nfsd]
root 13762 2 0 18:19 ? 00:00:00 [nfsd]
root 13763 2 0 18:19 ? 00:00:00 [nfsd]
root 13764 2 0 18:19 ? 00:00:00 [nfsd]
root 13765 2 0 18:19 ? 00:00:00 [nfsd]
ubuntu 13820 23326 0 18:19 pts/0 00:00:00 grep --color=auto nfs
$


On the other VMs (Kubernetes Worker Nodes)

Mount the /dockerdata-nfs Folder

On each of the Kubernetes worker nodes, mount the /dockerdata-nfs folder. Run the following as the ubuntu user.

Code Block
languagebash
titlemount the nfs share
sudo apt update
sudo apt install nfs-common -y
sudo mkdir /dockerdata-nfs
sudo chmod 777 /dockerdata-nfs


# Option 1:
sudo mount -t nfs -o proto=tcp,port=2049 <hostname or IP address of NFS server>:/dockerdata-nfs /dockerdata-nfs
sudo vi /etc/fstab
# append the following
<hostname or IP address of NFS server>:/dockerdata-nfs /dockerdata-nfs   nfs    auto  0  0


# Option 2:
#  (verified on Ubuntu 16.04 AWS EC2 EBS volume)
sudo vi /etc/fstab
# append the following line
<hostname or IP address of NFS server>:/dockerdata-nfs /dockerdata-nfs   nfs    auto  0  0
# run the following line
sudo mount -a

Verify it:

Touch a file inside the /dockerdata-nfs directory on the Kubernetes Master and check that the same file appears under /dockerdata-nfs on all Kubernetes worker nodes.
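
For example (test.txt is just an arbitrary file name used for illustration):

Code Block
languagebash
titleverify the share (example)
# on the Kubernetes Master (NFS server)
touch /dockerdata-nfs/test.txt

# on each Kubernetes worker node - the file should be visible
ls -l /dockerdata-nfs/test.txt

# clean up afterwards (from any node)
rm /dockerdata-nfs/test.txt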

Unmount the Shared Directory

On the Kubernetes worker nodes, use the lazy (-l) option to force-unmount the mount point.
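
For example:

Code Block
languagebash
titleunmount the share (worker node)
# lazy unmount: detach the filesystem now, clean up references once it is no longer busy
sudo umount -l /dockerdata-nfs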
