4. Creating Persistent Volume on Federation Host Cluster

This step is optional: follow it only if persistent storage for etcd was not disabled with the "--etcd-persistent-storage=false" flag during federation control plane deployment (refer to step 6).
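For reference, the decision point is the kubefed init invocation in step 6. A minimal sketch of disabling etcd persistent storage is shown below; the federation name, host cluster context, and DNS values are illustrative placeholders, not taken from this guide. With the flag set to false, the persistent volumes created in this step are not needed.

#Sketch only: placeholder values; see step 6 for the actual deployment command
ubuntu@kubefed-1:~# kubefed init enterprise \
    --host-cluster-context=kubefed-1 \
    --dns-provider=coredns \
    --dns-zone-name="example.com." \
    --etcd-persistent-storage=false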

We will create two persistent volumes (PVs) with different access modes on the host cluster's master node (i.e. kubefed-1 here).

#Assuming "enterprise" will be the federation cluster's context name, we pick a relevant name for the pv.
#Create a new file
ubuntu@kubefed-1:~# cat <<EOF > pv-volume1.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  annotations:
    volume.alpha.kubernetes.io/storage-class: "yes"
  name: enterprise-apiserver-etcd-volume1
  namespace: federation-system
  labels:
    app: federated-cluster
    type: local
spec:
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
EOF
ubuntu@kubefed-1:~# kubectl create -f pv-volume1.yaml
persistentvolume "enterprise-apiserver-etcd-volume1" created
ubuntu@kubefed-1:~#

#Create a new file
ubuntu@kubefed-1:~# cat <<EOF > pv-volume2.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  annotations:
    volume.alpha.kubernetes.io/storage-class: "yes"
  name: enterprise-apiserver-etcd-volume2
  namespace: federation-system
  labels:
    app: federated-cluster
    type: local
spec:
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
EOF
ubuntu@kubefed-1:~# kubectl create -f pv-volume2.yaml
persistentvolume "enterprise-apiserver-etcd-volume2" created
ubuntu@kubefed-1:~#

#Verify pv status is "Available"
ubuntu@kubefed-1:~# kubectl get pv --all-namespaces | grep enterprise
NAME                                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
enterprise-apiserver-etcd-volume1   11Gi       RWX            Retain           Available                                   35m
enterprise-apiserver-etcd-volume2   11Gi       RWO            Retain           Available                                   45m
ubuntu@kubefed-1:~#
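Once the federation control plane is deployed (step 6), the etcd PersistentVolumeClaim created by kubefed should bind one of these volumes. A quick check, assuming the default federation-system namespace, could look like this:

#After step 6, the etcd claim should bind one of the volumes and its STATUS should change from "Available" to "Bound"
ubuntu@kubefed-1:~# kubectl get pvc -n federation-system
ubuntu@kubefed-1:~# kubectl get pv | grep enterprise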



Mounting a directory between the site master node and the federation master node

This step should be performed on all sites that are part of the Kubernetes federation control plane (site-1 and site-2). Execute it on each site's master node as the root user; it mounts the /dockerdata-nfs directory on each site's master node.

root@k8s-s3-master:~# sudo mount -t nfs -o proto=tcp,port=2049 <ip address of federation kubernetes cluster master node>:/dockerdata-nfs /dockerdata-nfs
root@k8s-s3-master:~# sudo vi /etc/fstab
#Append the line below. The IP address could be that of the federation server.
<hostname or IP address of NFS server>:/dockerdata-nfs /dockerdata-nfs    nfs    auto  0  0
root@k8s-s3-master:~#
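Note that the /dockerdata-nfs mount point must exist as a directory before the mount command above is run. A quick sanity check on each site's master node, shown here as a suggested verification using standard commands, is:

root@k8s-s3-master:~# mkdir -p /dockerdata-nfs        #create the mount point if it does not already exist
root@k8s-s3-master:~# mount | grep dockerdata-nfs     #verify the NFS mount is active
root@k8s-s3-master:~# df -h /dockerdata-nfs           #confirm the capacity is served from the NFS server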