
This page details the changes made under gerrit topic:SDNC-163 for the SDN-C cluster (described in About SDN-C Clustering), and the reasons for them.

Table of Contents


Modify Helm Values Definition

Helm values are defined in the {$OOM}/kubernetes/sdnc/values.yaml file; the following new entries have been added to this file (see the sketch after the table):

Field               | Value     | Purpose
image.mysql         | mysql:5.7 | defines the image version of mysql
enableODLCluster    | true      | enables the clustered deployment (set to "false" to disable it)
numberOfODLReplicas | 3         | configurable replica count for the sdnc pods in the clustered deployment
numberOfDbReplicas  | 2         | configurable replica count for the dbhost pods in the clustered deployment
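
For orientation, here is a minimal sketch of how these entries might appear in values.yaml. The field names and values are taken from the table above; the nesting of image.mysql under an existing image block, and the image.xtrabackup entry referenced by the templates below, are assumptions:

    Code Block (yaml): Sketch of the new values.yaml entries
     image:
       ...
       mysql: mysql:5.7            # image version of mysql
       # xtrabackup: ...           # also referenced by the DB StatefulSet template as {{ .Values.image.xtrabackup }}
     enableODLCluster: true        # set to false to disable the clustered deployment
     numberOfODLReplicas: 3        # replica count for the sdnc pods
     numberOfDbReplicas: 2         # replica count for the dbhost pods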


Modify Kubernetes Templates

We use Kubernetes replicas to achieve the SDN-C cluster deployment. The SDN-C component deployments (pods, deployments and services) are defined in templates under the directory {$OOM}/kubernetes/sdnc/templates.

File name | Changes

all-services.yaml

For ODL cluster:

  • Added a headless service sdnhost-cluster so that the sdnc pods in the SDN-C cluster can find each other directly via fixed FQDNs.

    Code Block (yaml): Added new headless service: sdnhost-cluster
    #{{ if .Values.enableODLCluster }}
    ---
    apiVersion: v1
    kind: Service
    metadata:
     name: sdnhost-cluster
     namespace: "{{ .Values.nsPrefix }}-sdnc"
     labels:
       app: sdnc
     annotations:
       service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
    spec:
     ports:
     - name: "sdnc-cluster-port"
       port: 2550
     clusterIP: None
     selector:
       app: sdnc
     sessionAffinity: None
     type: ClusterIP
    #{{ end }}
  • Exposed the Jolokia port (8080) in the sdnhost service

    Code Block (yaml): Exposed Jolokia port (8080) in service sdnhost
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sdnhost
      ...
    spec:
      ports:
      ...
      - name: "sdnc-jolokia-port-8080"
        port: 8280
        targetPort: 8080
        nodePort: {{ .Values.nodePortPrefix }}46

For DB cluster:

  • Added a new service dbhost-read to offer read-only access to the DB:

    Code Block (yaml): Added new service: dbhost-read
    ---
    # Client service for connecting to any MySQL instance for reads.
    # Only master: sdnc-dbhost-0 accepts the write request.
    apiVersion: v1
    kind: Service
    metadata:
      name: dbhost-read
      namespace: "{{ .Values.nsPrefix }}-sdnc"
      labels:
        app: sdnc-dbhost
    spec:
      ports:
      - name: sdnc-dbhost
        port: 3306
      selector:
        app: sdnc-dbhost
  • Added a new service nfs-provisioner to enable dynamic allocation of PVCs

    Code Block (yaml): Added new service: nfs-provisioner
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: nfs-provisioner
      namespace: "{{ .Values.nsPrefix }}-sdnc"
      labels:
        app: nfs-provisioner
    spec:
      ports:
        - name: nfs
          port: 2049
        - name: mountd
          port: 20048
        - name: rpcbind
          port: 111
        - name: rpcbind-udp
          port: 111
          protocol: UDP
      selector:
        app: nfs-provisioner

db-statefulset.yaml

  • Renamed from the db-deployment.yaml file
  • Changed the workload kind from "Deployment" to "StatefulSet" and set replicas to 2

    Field             | New value        | Old value
    .apiVersion       | apps/v1beta1 [1] | extensions/v1beta1
    .kind             | StatefulSet      | Deployment
    .spec.serviceName | "dbhost" [2]     | N/A
    .spec.replicas    | 2                | N/A
  • Added initContainers (init-mysql and clone-mysql) to generate the per-pod replication configuration and clone data from a peer

    Code Block (yaml): New initContainers: init-mysql, clone-mysql
    #{{ if not .Values.disableSdncSdncDbhost }}
    apiVersion: apps/v1beta1
    kind: StatefulSet
    metadata:
      name: sdnc-dbhost
      ...
    spec:
      serviceName: "dbhost"
      replicas: {{ .Values.numberOfDbReplicas }}
      ...
      template:
        ...
        spec:
          initContainers:
          - name: init-mysql
            image: {{ .Values.image.mysql }}
            imagePullPolicy: {{ .Values.pullPolicy }}
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Generate mysql server-id from pod ordinal index.
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo BASH_REMATCH=${BASH_REMATCH}
              echo [mysqld] > /mnt/conf.d/server-id.cnf
              # Add an offset to avoid reserved server-id=0 value.
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
              # Copy appropriate conf.d files from config-map to emptyDir.
              if [[ $ordinal -eq 0 ]]; then
                cp /mnt/config-map/master.cnf /mnt/conf.d/
              else
                cp /mnt/config-map/slave.cnf /mnt/conf.d/
              fi
            volumeMounts:
            - name: conf
              mountPath: /mnt/conf.d
            - name: config-map
              mountPath: /mnt/config-map
          - name: clone-mysql
            image: {{ .Values.image.xtrabackup }}
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: openECOMP1.0
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Skip the clone if data already exists.
              [[ -d /var/lib/mysql/mysql ]] && exit 0
              # Skip the clone on master (ordinal index 0).
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo ${BASH_REMATCH}
              [[ $ordinal -eq 0 ]] && exit 0
              # Clone data from previous peer.
              ncat --recv-only sdnc-dbhost-$(($ordinal-1)).dbhost.{{ .Values.nsPrefix }}-sdnc 3307 | xbstream -x -C /var/lib/mysql
              # Prepare the backup.
              xtrabackup --user=root --password=$MYSQL_ROOT_PASSWORD --prepare --target-dir=/var/lib/mysql
              ls -l /var/lib/mysql
            volumeMounts:
            - name: sdnc-data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          containers:
          ...
  • Adjusted the container sdnc-db-container definition with additional volumeMounts, resources and livenessProbe definitions:

    Code Block (yaml): New definition of container sdnc-db-container
    #{{ if not .Values.disableSdncSdncDbhost }}
    ...
    metadata:
      name: sdnc-dbhost
      ...
    spec:
      ...
      template:
        ...
        spec:
          containers:
          - env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "0"
            ...
            name: sdnc-db-container
            volumeMounts:
            - mountPath: /var/lib/mysql
              name: sdnc-data
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            ...
            resources:
              requests:
                cpu: 500m
                memory: 1Gi
            livenessProbe:
              exec:
                command: ["mysqladmin", "ping"]
              initialDelaySeconds: 30
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              ...
    Code Block (yaml): Old definition of container sdnc-db-container
    #{{ if not .Values.disableSdncSdncDbhost }}
    ...
    metadata:
      name: sdnc-dbhost
      ...
    spec:
      ...
      template:
        ...
        spec:
          containers:
          - env:
            ...
            name: sdnc-db-container
            volumeMounts:
            - mountPath: /etc/localtime
              name: localtime
              readOnly: true
            - mountPath: /var/lib/mysql
              name: sdnc-data
            ...
            readinessProbe:
              ...
  • Added a new container xtrabackup for DB data cloning and replication

    Code Block (yaml): Added new container: xtrabackup
    #{{ if not .Values.disableSdncSdncDbhost }}
    ...
    metadata:
      name: sdnc-dbhost
      ...
    spec:
      ...
      template:
        ...
        spec:
          containers:
          ...
          - name: xtrabackup
            image: {{ .Values.image.xtrabackup }}
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: openECOMP1.0
            ports:
            - name: xtrabackup
              containerPort: 3307
            command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql
              ls -l
              # Determine binlog position of cloned data, if any.
              if [[ -f xtrabackup_slave_info ]]; then
                echo "Inside xtrabackup_slave_info"
                # XtraBackup already generated a partial "CHANGE MASTER TO" query
                # because we're cloning from an existing slave.
                mv xtrabackup_slave_info change_master_to.sql.in
                # Ignore xtrabackup_binlog_info in this case (it's useless).
                rm -f xtrabackup_binlog_info
              elif [[ -f xtrabackup_binlog_info ]]; then
                echo "Inside xtrabackup_binlog_info"
                # We're cloning directly from master. Parse binlog position.
                [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                rm xtrabackup_binlog_info
                echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                      MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
              fi
    
              # Check if we need to complete a clone by starting replication.
              if [[ -f change_master_to.sql.in ]]; then
                echo "Waiting for mysqld to be ready (accepting connections)"
                [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
                ordinal=${BASH_REMATCH[1]}
                echo $ordinal
                until mysql --user=root --password=$MYSQL_ROOT_PASSWORD -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
    
                echo "Initializing replication from clone position"
                # In case of container restart, attempt this at-most-once.
                mv change_master_to.sql.in change_master_to.sql.orig
                mysql --user=root --password=$MYSQL_ROOT_PASSWORD -h 127.0.0.1 <<EOF
              $(<change_master_to.sql.orig),
                MASTER_HOST="sdnc-dbhost-0.dbhost.{{ .Values.nsPrefix }}-sdnc",
                MASTER_USER="root",
                MASTER_PASSWORD="$MYSQL_ROOT_PASSWORD",
                MASTER_CONNECT_RETRY=10;
              START SLAVE;
              EOF
              fi
    
              # Start a server to send backups when requested by peers.
              exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                "xtrabackup --user=root --password=$MYSQL_ROOT_PASSWORD --backup --slave-info --stream=xbstream --host=127.0.0.1"
            ...
  • Changed the PVC sdnc-data to volumeClaimTemplates for dynamic resource allocation (and removed imagePullSecrets)

    Code Block (yaml): volumeClaimTemplates definition of sdnc-data
    #{{ if not .Values.disableSdncSdncDbhost }}
    ...
    metadata:
      name: sdnc-dbhost
      ...
    spec:
      ...
      template:
        metadata:
          labels:
            app: sdnc-dbhost
          name: sdnc-dbhost
        spec:
          containers:
          ...
          volumes:
          ...
          ...
      volumeClaimTemplates:
      - metadata:
         name: sdnc-data
         annotations:
           volume.beta.kubernetes.io/storage-class: "{{ .Values.nsPrefix }}-sdnc-data"
        spec:
          accessModes: ["ReadWriteMany"]
          resources:
            requests:
              storage: 1Gi
    #{{ end }}
    Code Block (yaml): Old definition of sdnc-data
    #{{ if not .Values.disableSdncSdncDbhost }}
    ...
    metadata:
      name: sdnc-dbhost
      ...
    spec:
      ...
      template:
        metadata:
          labels:
            app: sdnc-dbhost
          name: sdnc-dbhost
        spec:
          containers:
          ...
          volumes:
          ...
          - name: sdnc-data
            persistentVolumeClaim:
              claimName: sdnc-db
          imagePullSecrets:
          - name: "{{ .Values.nsPrefix }}-docker-registry-key"
    
    #{{ end }}

dgbuilder-deployment.yaml

  • Changed the Kubernetes 1.5 init-container beta annotation syntax to the Kubernetes 1.6 initContainers field syntax (for Kubernetes version compatibility)

    Code Block (yaml): New definition of init container
    #{{ if not .Values.disableSdncSdncDgbuilder }}
    ...
    metadata:
      name: sdnc-dgbuilder
      ...
    spec:
      ...
      template:
        ...
        spec:
          initContainers:
          - command:
            - /root/ready.py
            - "--container-name"
            - "sdnc-db-container"
            - "--container-name"
            - "sdnc-controller-container"
            env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            image: "{{ .Values.image.readiness }}"
            imagePullPolicy: {{ .Values.pullPolicy }}
            name: sdnc-dgbuilder-readiness
          containers:
          ...
    Code Block (yaml): Old definition of init container
    #{{ if not .Values.disableSdncSdncDgbuilder }}
    ...
    metadata:
      name: sdnc-dgbuilder
      ...
    spec:
      ...
      template:
        metadata:
          ...
          annotations:
            pod.beta.kubernetes.io/init-containers: '[
              {
                  "args": [
                      "--container-name",
                      "sdnc-db-container",
                      "--container-name",
                      "sdnc-controller-container"
                  ],
                  "command": [
                      "/root/ready.py"
                  ],
                  "env": [
                      {
                          "name": "NAMESPACE",
                          "valueFrom": {
                              "fieldRef": {
                                  "apiVersion": "v1",
                                  "fieldPath": "metadata.namespace"
                              }
                          }
                      }
                  ],
                  "image": "{{ .Values.image.readiness }}",
                  "imagePullPolicy": "{{ .Values.pullPolicy }}",
                  "name": "sdnc-dgbuilder-readiness"
              }
              ]'
        spec:
        ...

mysql-configmap.yaml


  • New file added for the DB cluster (see the volume-wiring sketch after the code block)

    Code Block (yaml): New file mysql-configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mysql
      namespace: "{{ .Values.nsPrefix }}-sdnc"
      labels:
        app: mysql
    data:
      master.cnf: |
        # Apply this config only on the master.
        [mysqld]
        log-bin
        [localpathprefix]
        master
      slave.cnf: |
        # Apply this config only on slaves.
        [mysqld]
        super-read-only
        [localpathprefix]
        slave
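
The init-mysql container shown earlier copies master.cnf or slave.cnf from a volume named config-map and writes its generated server-id.cnf into a volume named conf. The volumes section itself is elided in the excerpts above; the following is a minimal sketch of how those volumes would reference this ConfigMap, assuming the emptyDir/configMap pattern used by the upstream Kubernetes MySQL StatefulSet example:

    Code Block (yaml): Sketch of the conf and config-map volumes in db-statefulset.yaml
     spec:
       template:
         spec:
           ...
           volumes:
           ...
           - name: conf
             emptyDir: {}        # per-pod scratch space for the generated server-id.cnf and copied cnf file
           - name: config-map
             configMap:
               name: mysql       # the ConfigMap defined in mysql-configmap.yaml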


nfs-provisoner-deployment.yaml


  • New file added for the DB cluster

    Code Block (yaml): New file nfs-provisioner-deployment.yaml
    #{{ if not .Values.disableSdncSdncDbhost }}
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: nfs-provisioner
      namespace: "{{ .Values.nsPrefix }}-sdnc"
    spec:
      replicas: 1
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: nfs-provisioner
        spec:
          nodeSelector:
            disktype: ssd
          containers:
            - name: nfs-provisioner
              image: quay.io/kubernetes_incubator/nfs-provisioner:v1.0.8
              ports:
                - name: nfs
                  containerPort: 2049
                - name: mountd
                  containerPort: 20048
                - name: rpcbind
                  containerPort: 111
                - name: rpcbind-udp
                  containerPort: 111
                  protocol: UDP
              securityContext:
                capabilities:
                  add:
                    - DAC_READ_SEARCH
                    - SYS_RESOURCE
              args:
                - "-provisioner=sdnc/nfs"
              env:
                - name: POD_IP
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
                - name: SERVICE_NAME
                  value: nfs-provisioner
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              imagePullPolicy: "IfNotPresent"
              volumeMounts:
                - name: export-volume
                  mountPath: /export
          volumes:
            - name: export-volume
              hostPath:
                path: /dockerdata-nfs/{{ .Values.nsPrefix }}/sdnc/data
    #{{ end }}


sdnc-data-storageclass.yaml 


  • Removed the sdnc-pv-pvc.yaml file and added this new file for the DB cluster

    Code Block (yaml): New file sdnc-data-storageclass.yaml
    #{{ if not .Values.disableSdncSdncDbhost }}
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: "{{ .Values.nsPrefix }}-sdnc-data"
      namespace: "{{ .Values.nsPrefix }}-sdnc"
    provisioner: sdnc/nfs
    #{{ end }}


sdnc-statefulset.yaml


  • Renamed from the sdnc-deployment.yaml file
  • Set replicas to 3 with the "Parallel" pod management policy (see the consolidated sketch at the end of this section)

    Field                     | New value            | Old value
    .apiVersion               | apps/v1beta1 [1]     | extensions/v1beta1
    .kind                     | StatefulSet          | Deployment
    .spec.serviceName         | "sdnhostcluster" [2] | N/A
    .spec.replicas            | 3                    | N/A
    .spec.podManagementPolicy | "Parallel" [3]       | N/A



  • Exposed 2 new container ports for the SDN-C cluster (2550) and Jolokia (8080)

    Field                  | New value
    .spec.containers.ports | - containerPort: 2550
                           | - containerPort: 8080



  • Mounted directories for the modified startODL.sh script and for test bundles

    .spec.containers.volumeMounts (new value) [4.a]:

      - mountPath: /opt/onap/sdnc/bin/startODL.sh
        name: sdnc-startodl
      - mountPath: /opt/opendaylight/current/deploy
        name: sdnc-deploy

    .spec.volumes (new value) [4.b]:

      - name: sdnc-deploy
        hostPath:
          path: /home/ubuntu/cluster/deploy
      - name: sdnc-startodl
        hostPath:
          path: /home/ubuntu/cluster/script/startODL.sh

  • Changed the Kubernetes 1.5 init-container beta annotation syntax to the Kubernetes 1.6 initContainers field syntax (for Kubernetes version compatibility)

    Code Block (yaml): New definition of init container
    #{{ if not .Values.disableSdncSdnc }}
    ...
    metadata:
      name: sdnc
      ...
    spec:
      ...
      template:
        metadata:
          ...
        spec:
          initContainers:
          - command:
            - /root/ready.py
            - "--container-name"
            - "sdnc-db-container"
            env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            image: "{{ .Values.image.readiness }}"
            imagePullPolicy: {{ .Values.pullPolicy }}
            name: sdnc-readiness
          containers:
          ...
    Code Block (yaml): Old definition of init container
    #{{ if not .Values.disableSdncSdnc }}
    ...
    metadata:
      name: sdnc
      ...
    spec:
      ...
      template:
        metadata:
          ...
          annotations:
            pod.beta.kubernetes.io/init-containers: '[
              {
                  "args": [
                      "--container-name",
                      "sdnc-db-container"
                  ],
                  "command": [
                      "/root/ready.py"
                  ],
                  "env": [
                      {
                          "name": "NAMESPACE",
                          "valueFrom": {
                              "fieldRef": {
                                  "apiVersion": "v1",
                                  "fieldPath": "metadata.namespace"
                              }
                          }
                      }
                  ],
                  "image": "{{ .Values.image.readiness }}",
                  "imagePullPolicy": "{{ .Values.pullPolicy }}",
                  "name": "sdnc-readiness"
              }
              ]'
        spec:
          ...
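
Taken together, the table entries above correspond to a skeleton like the following. This is a minimal sketch rather than the full file: only the fields discussed in this section are shown, and wiring .spec.replicas to numberOfODLReplicas as well as naming the main container sdnc-controller-container (the name used in the readiness checks) are assumptions:

    Code Block (yaml): Sketch of the sdnc-statefulset.yaml fields described above
     apiVersion: apps/v1beta1            # see note 1 for Kubernetes 1.8 and later
     kind: StatefulSet
     metadata:
       name: sdnc
       ...
     spec:
       serviceName: "sdnhostcluster"     # must match the headless service name (note 2)
       replicas: {{ .Values.numberOfODLReplicas }}
       podManagementPolicy: "Parallel"   # note 3
       template:
         ...
         spec:
           containers:
           - name: sdnc-controller-container
             ...
             ports:
             ...
             - containerPort: 2550       # SDN-C cluster port
             - containerPort: 8080       # Jolokia port
             volumeMounts:
             ...
             - mountPath: /opt/onap/sdnc/bin/startODL.sh
               name: sdnc-startodl
             - mountPath: /opt/opendaylight/current/deploy
               name: sdnc-deploy
           volumes:
           ...
           - name: sdnc-deploy
             hostPath:
               path: /home/ubuntu/cluster/deploy
           - name: sdnc-startodl
             hostPath:
               path: /home/ubuntu/cluster/script/startODL.sh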


web-deployment.yaml


  • Changed the Kubernetes 1.5 init-container beta annotation syntax to the Kubernetes 1.6 initContainers field syntax (for Kubernetes version compatibility)


    Code Block (yaml): New definition of init container
    #{{ if not .Values.disableSdncSdncPortal }}
    ...
    metadata:
      name: sdnc-portal
      ...
    spec:
      ...
      template:
        ...
        spec:
          initContainers:
          - command:
            - /root/ready.py
            - "--container-name"
            - "sdnc-db-container"
            - "--container-name"
            - "sdnc-controller-container"
            env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            image: "{{ .Values.image.readiness }}"
            imagePullPolicy: {{ .Values.pullPolicy }}
            name: sdnc-dgbuilder-readiness
          containers:
          ...
    Code Block (yaml): Old definition of init container
    #{{ if not .Values.disableSdncSdncPortal }}
    ...
    metadata:
      name: sdnc-portal
      ...
    spec:
      ...
      template:
        metadata:
          ...
          annotations:
            pod.beta.kubernetes.io/init-containers: '[
              {
                  "args": [
                      "--container-name",
                      "sdnc-db-container",
                      "--container-name",
                      "sdnc-controller-container"
                  ],
                  "command": [
                      "/root/ready.py"
                  ],
                  "env": [
                      {
                          "name": "NAMESPACE",
                          "valueFrom": {
                              "fieldRef": {
                                  "apiVersion": "v1",
                                  "fieldPath": "metadata.namespace"
                              }
                          }
                      }
                  ],
                  "image": "{{ .Values.image.readiness }}",
                  "imagePullPolicy": "{{ .Values.pullPolicy }}",
                  "name": "sdnc-portal-readiness"
              }
              ]'
        spec:
        ...


Notes:

  1. Use .apiVersion "apps/v1beta1" for Kubernetes versions before 1.8.0; otherwise, use .apiVersion "apps/v1beta2"
    • Check the Kubernetes version using the command "kubectl version"

      Expand: Example of kubernetes version 1.7.7

      Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
      Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.7-rancher1", GitCommit:"a1ea37c6f6d21f315a07631b17b9537881e1986a", GitTreeState:"clean", BuildDate:"2017-10-02T21:33:08Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

      Expand: Example of kubernetes version 1.8.3

      Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

      Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.3-rancher1", GitCommit:"beb8311a9f114ba92558d8d771a81b7fb38422ae", GitTreeState:"clean", BuildDate:"2017-11-14T00:54:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

  2. The value must align with the name of the associated service in the all-services.yaml file under the same directory.
  3. By default, .spec.podManagementPolicy has the value "OrderedReady".
    • With the value "OrderedReady", the StatefulSet controller respects the ordering guarantees: it waits for a Pod to become Running and Ready, or to terminate completely, before launching or terminating another Pod.
    • With the value "Parallel", the StatefulSet controller launches or terminates all Pods in parallel, and does not wait for Pods to become Running and Ready, or to terminate completely, before launching or terminating another Pod.
  4. Since startODL.sh has to be changed for the cluster to function, two paths must be mounted:
    1. mount /home/ubuntu/cluster/script/startODL.sh (local) over /opt/onap/sdnc/bin/startODL.sh (in the container), so that our locally updated script with the cluster configuration is used.
    2. mount /home/ubuntu/cluster/deploy (local) to /opt/opendaylight/current/deploy (in the container), so that test bundles can be deployed dynamically from outside the pods.

Modify Script startODL.sh

This is a manual step for now; it can be automated when the SDN-C cluster deployment is automated.

vi /home/ubuntu/cluster/script/startODL.sh

...

Because /dockerdata-nfs is an NFS-mounted directory, we need to force the nfs-provisioner pod to run on the Kubernetes node where the NFS server is configured, using the following steps:

# | Purpose | Command and Example
1 | Find the node name
Expand: find the nfs server node

Run command "ps -ef|grep nfs", you should

  • node with nfs server runs nfsd process:

ubuntu@sdnc-k8s:~$ ps -ef|grep nfs
root 3473 2 0 Dec07 ? 00:00:00 [nfsiod]
root 11072 2 0 Dec06 ? 00:00:00 [nfsd4_callbacks]
root 11074 2 0 Dec06 ? 00:00:00 [nfsd]
root 11075 2 0 Dec06 ? 00:00:00 [nfsd]
root 11076 2 0 Dec06 ? 00:00:00 [nfsd]
root 11077 2 0 Dec06 ? 00:00:00 [nfsd]
root 11078 2 0 Dec06 ? 00:00:00 [nfsd]
root 11079 2 0 Dec06 ? 00:00:03 [nfsd]
root 11080 2 0 Dec06 ? 00:00:13 [nfsd]
root 11081 2 0 Dec06 ? 00:00:42 [nfsd]
ubuntu@sdnc-k8s:~$

  • the node running only the NFS client has the nfsv4 svc process:

ubuntu@sdnc-k8s-2:~$ ps -ef|grep nfs
ubuntu 5911 5890 0 20:10 pts/0 00:00:00 grep --color=auto nfs
root 18739 2 0 Dec06 ? 00:00:00 [nfsiod]
root 18749 2 0 Dec06 ? 00:00:00 [nfsv4.0-svc]
ubuntu@sdnc-k8s-2:~$

kubectl get node

Expand: Example of response

ubuntu@sdnc-k8s:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
sdnc-k8s Ready master 6d v1.8.4
sdnc-k8s-2 Ready <none> 6d v1.8.4
ubuntu@sdnc-k8s:~$

2 | Set a label on the node

kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd

Expand: An example

ubuntu@sdnc-k8s:~$ kubectl label nodes sdnc-k8s disktype=ssd

node "sdnc-k8s" labeled

3 | Check that the label has been set on the node

kubectl get node --show-labels

Expand: An example

ubuntu@sdnc-k8s:~$ kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
sdnc-k8s Ready master 6d v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=sdnc-k8s,node-role.kubernetes.io/master=
sdnc-k8s-2 Ready <none> 6d v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=sdnc-k8s-2
ubuntu@sdnc-k8s:~$

4 | Update the nfs-provisioner pod template to force it to run on the NFS server node

In the nfs-provisioner deployment template, add "spec.template.spec.nodeSelector" for the pod "nfs-provisioner" (see the sketch below):

Expand: An example of the nfs-provisioner pod with nodeSelector (image attachment)
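
A minimal sketch of the nodeSelector addition from step 4, matching the nfs-provisioner deployment excerpt shown earlier and assuming the disktype=ssd label applied in step 2:

    Code Block (yaml): nfs-provisioner pod template with nodeSelector
     spec:
       ...
       template:
         metadata:
           labels:
             app: nfs-provisioner
         spec:
           nodeSelector:
             disktype: ssd           # label applied to the NFS server node in step 2
           containers:
             - name: nfs-provisioner
               ...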


Option 2: Make nfs-provisioner Use a Non-Mounted Directory

...

Change the nfs-provisioner-deployment.yaml file as follows (see the sketch after the table):

Field: .spec.template.spec.volumes.hostPath.path

  • From value: /dockerdata-nfs/{{ .Values.nsPrefix }}/sdnc/data
  • Change to value: /nfs-provisioner/{{ .Values.nsPrefix }}/sdnc/data (replace "dockerdata-nfs" with "nfs-provisioner")
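
A minimal sketch of the resulting volumes section in nfs-provisioner-deployment.yaml after this change (the volume name export-volume is taken from the earlier excerpt):

    Code Block (yaml): Changed hostPath of the export volume
     volumes:
       - name: export-volume
         hostPath:
           # previously: /dockerdata-nfs/{{ .Values.nsPrefix }}/sdnc/data
           path: /nfs-provisioner/{{ .Values.nsPrefix }}/sdnc/data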