This page details the changes made under gerrit topic SDNC-163 for the SDN-C cluster (described in About SDN-C Clustering), and the reasons behind them.
Modify Helm Values Definition
Helm values are defined in the {$OOM}/kubernetes/sdnc/values.yaml file. The following new entries have been added to this file:
| Field | Value | Purpose |
|---|---|---|
| image.mysql | mysql:5.7 | Defines the MySQL image version. |
| enableODLCluster | true | Enables ODL clustering in the deployment; set to "false" to disable it. |
| numberOfODLReplicas | 3 | Makes the number of replicas of the sdnc pod configurable for the clustered deployment. |
| numberOfDbReplicas | 2 | Makes the number of replicas of the dbhost pod configurable for the clustered deployment. |
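A minimal sketch of how these additions might look in values.yaml (only the new entries from the table above are shown; the surrounding existing values are omitted):

```yaml
# {$OOM}/kubernetes/sdnc/values.yaml (excerpt - new entries only)
image:
  mysql: mysql:5.7          # MySQL image version used by the dbhost StatefulSet

enableODLCluster: true      # set to false to disable ODL clustering
numberOfODLReplicas: 3      # number of replicas for the sdnc (ODL) pod
numberOfDbReplicas: 2       # number of replicas for the dbhost (MySQL) pod
```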
Modify Kubernetes Templates
We use Kubernetes replicas to achieve the SDN-C cluster deployment. The SDN-C components (pods, deployments and services) are defined in the templates under the {$OOM}/kubernetes/sdnc/templates directory. The changed files are listed below.
File name: all-services.yaml

Changes:
- For ODL cluster (see the sketch of the headless sdnhostcluster service below)
- For DB cluster (see the sketch of the headless dbhost service below)
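The exact service definitions are in the gerrit change. Based on details elsewhere on this page (the DB StatefulSet uses serviceName "dbhost", the clone-mysql init container reaches peers at sdnc-dbhost-N.dbhost.&lt;namespace&gt;, and startODL.sh expects a headless service named sdnhostcluster), a minimal sketch of the two headless services could look as follows; the ports and selectors shown are assumptions:

```yaml
# Sketch only - assumed headless services for the DB and ODL clusters
apiVersion: v1
kind: Service
metadata:
  name: dbhost                    # must match .spec.serviceName of the DB StatefulSet (see note 2)
  namespace: "{{ .Values.nsPrefix }}-sdnc"
spec:
  clusterIP: None                 # headless: gives each pod a stable DNS name (sdnc-dbhost-N.dbhost...)
  selector:
    app: sdnc-dbhost
  ports:
  - name: mysql
    port: 3306                    # assumed port
---
apiVersion: v1
kind: Service
metadata:
  name: sdnhostcluster            # headless service referenced by startODL.sh for ODL cluster members
  namespace: "{{ .Values.nsPrefix }}-sdnc"
spec:
  clusterIP: None
  selector:
    app: sdnc                     # assumed selector
  ports:
  - name: cluster
    port: 2550                    # assumed ODL akka clustering port
```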
File name: db-statefulset.yaml

Changes:
- Renamed from the db-deployment.yaml file.
- Changed the pod kind from "Deployment" to "StatefulSet" and set the replica count to 2:

| Field | New value | Old value |
|---|---|---|
| .apiVersion | apps/v1beta1 (note 1) | extensions/v1beta1 |
| .kind | StatefulSet | Deployment |
| .spec.serviceName | "dbhost" (note 2) | N/A |
| .spec.replicas | 2 | N/A |

- Added initContainers (init-mysql and clone-mysql) to control the cluster setup:
#{{ if not .Values.disableSdncSdncDbhost }}
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: sdnc-dbhost
  ...
spec:
  serviceName: "dbhost"
  replicas: {{ .Values.numberOfDbReplicas }}
  ...
  template:
    ...
    spec:
      initContainers:
      - name: init-mysql
        image: {{ .Values.image.mysql }}
        imagePullPolicy: {{ .Values.pullPolicy }}
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo BASH_REMATCH=${BASH_REMATCH}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: {{ .Values.image.xtrabackup }}
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: openECOMP1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo ${BASH_REMATCH}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only sdnc-dbhost-$(($ordinal-1)).dbhost.{{ .Values.nsPrefix }}-sdnc 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --user=root --password=$MYSQL_ROOT_PASSWORD --prepare --target-dir=/var/lib/mysql
          ls -l /var/lib/mysql
        volumeMounts:
        - name: sdnc-data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      ...
- Adjusted the sdnc-db-container container definition with additional volumeMounts, resources and livenessProbe definitions. The new definition is shown first, followed by the old one for comparison:
#{{ if not .Values.disableSdncSdncDbhost }}
...
metadata:
  name: sdnc-dbhost
  ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "0"
        ...
        name: sdnc-db-container
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: sdnc-data
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        ...
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
        ...
The old definition, for comparison:
#{{ if not .Values.disableSdncSdncDbhost }}
...
metadata:
  name: sdnc-dbhost
  ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - env:
        ...
        name: sdnc-db-container
        volumeMounts:
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
        - mountPath: /var/lib/mysql
          name: sdnc-data
        ...
        readinessProbe:
        ...
- Added a new xtrabackup container for DB data cloning and replication setup:
#{{ if not .Values.disableSdncSdncDbhost }}
...
metadata:
  name: sdnc-dbhost
  ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      ...
      - name: xtrabackup
        image: {{ .Values.image.xtrabackup }}
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: openECOMP1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          ls -l
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            echo "Inside xtrabackup_slave_info"
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            echo "Inside xtrabackup_binlog_info"
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
            ordinal=${BASH_REMATCH[1]}
            echo $ordinal
            until mysql --user=root --password=$MYSQL_ROOT_PASSWORD -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql --user=root --password=$MYSQL_ROOT_PASSWORD -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
          MASTER_HOST="sdnc-dbhost-0.dbhost.{{ .Values.nsPrefix }}-sdnc",
          MASTER_USER="root",
          MASTER_PASSWORD="$MYSQL_ROOT_PASSWORD",
          MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --user=root --password=$MYSQL_ROOT_PASSWORD --backup --slave-info --stream=xbstream --host=127.0.0.1"
      ...
- Changed the sdnc-data PVC to volumeClaimTemplates for dynamic resource allocation (and removed imagePullSecrets). The new definition is shown first, followed by the old one:
#{{ if not .Values.disableSdncSdncDbhost }}
...
metadata:
  name: sdnc-dbhost
  ...
spec:
  ...
  template:
    metadata:
      labels:
        app: sdnc-dbhost
      name: sdnc-dbhost
    spec:
      containers:
      ...
      volumes:
      ...
  ...
  volumeClaimTemplates:
  - metadata:
      name: sdnc-data
      annotations:
        volume.beta.kubernetes.io/storage-class: "{{ .Values.nsPrefix }}-sdnc-data"
    spec:
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 1Gi
#{{ end }}
The old definition, with the static sdnc-data PVC and imagePullSecrets:
#{{ if not .Values.disableSdncSdncDbhost }}
...
metadata:
  name: sdnc-dbhost
  ...
spec:
  ...
  template:
    metadata:
      labels:
        app: sdnc-dbhost
      name: sdnc-dbhost
    spec:
      containers:
      ...
      volumes:
      ...
      - name: sdnc-data
        persistentVolumeClaim:
          claimName: sdnc-db
      imagePullSecrets:
      - name: "{{ .Values.nsPrefix }}-docker-registry-key"
#{{ end }}
File name: dgbuilder-deployment.yaml

File name: mysql-configmap.yaml (a sketch of its likely content is shown below)

File name: nfs-provisoner-deployment.yaml

File name: sdnc-data-storageclass.yaml
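Since the init-mysql init container above copies master.cnf or slave.cnf from /mnt/config-map, mysql-configmap.yaml presumably defines those two files. A minimal sketch, modeled on the standard Kubernetes replicated-MySQL example (the ConfigMap name and the exact option set are assumptions; see the gerrit change for the real content):

```yaml
# Sketch only - assumed content of mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config                # assumed name
  namespace: "{{ .Values.nsPrefix }}-sdnc"
data:
  master.cnf: |
    # Applied on the master (ordinal 0): enable binary logging for replication.
    [mysqld]
    log-bin
  slave.cnf: |
    # Applied on the slaves (ordinal > 0): reject writes from regular clients.
    [mysqld]
    super-read-only
```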
File name: sdnc-statefulset.yaml

Changes:
- Mounted the modified startODL.sh script and a test-bundle directory:
  - Added to .spec.containers.volumeMounts (note 4.a):
    - mountPath: /opt/onap/sdnc/bin/startODL.sh
      name: sdnc-startodl
    - mountPath: /opt/opendaylight/current/deploy
      name: sdnc-deploy
  - Added to .spec.volumes (note 4.b):
    - name: sdnc-deploy
      hostPath:
        path: /home/ubuntu/cluster/deploy
    - name: sdnc-startodl
      hostPath:
        path: /home/ubuntu/cluster/script/startODL.sh
- Changed the Kubernetes 1.5 init-container beta annotation syntax to the Kubernetes 1.6 initContainers field syntax (for Kubernetes version compatibility). The new definition is shown first, followed by the old one:
#{{ if not .Values.disableSdncSdnc }}
...
metadata:
  name: sdnc
  ...
spec:
  ...
  template:
    metadata:
      ...
    spec:
      initContainers:
      - command:
        - /root/ready.py
        - "--container-name"
        - "sdnc-db-container"
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: "{{ .Values.image.readiness }}"
        imagePullPolicy: {{ .Values.pullPolicy }}
        name: sdnc-readiness
      containers:
      ...
The old definition, using the Kubernetes 1.5 beta annotation syntax:
#{{ if not .Values.disableSdncSdnc }}
...
metadata:
  name: sdnc
  ...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        pod.beta.kubernetes.io/init-containers: '[
          {
              "args": [
                  "--container-name",
                  "sdnc-db-container"
              ],
              "command": [
                  "/root/ready.py"
              ],
              "env": [
                  {
                      "name": "NAMESPACE",
                      "valueFrom": {
                          "fieldRef": {
                              "apiVersion": "v1",
                              "fieldPath": "metadata.namespace"
                          }
                      }
                  }
              ],
              "image": "{{ .Values.image.readiness }}",
              "imagePullPolicy": "{{ .Values.pullPolicy }}",
              "name": "sdnc-readiness"
          }
        ]'
    spec:
      ...
File name: web-deployment.yaml

Changes:
- Changed the Kubernetes 1.5 init-container beta annotation syntax to the Kubernetes 1.6 initContainers field syntax (for Kubernetes version compatibility). The new definition is shown first, followed by the old one:
#{{ if not .Values.disableSdncSdncPortal }}
...
metadata:
  name: sdnc-portal
  ...
spec:
  ...
  template:
    ...
    spec:
      initContainers:
      - command:
        - /root/ready.py
        - "--container-name"
        - "sdnc-db-container"
        - "--container-name"
        - "sdnc-controller-container"
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: "{{ .Values.image.readiness }}"
        imagePullPolicy: {{ .Values.pullPolicy }}
        name: sdnc-dgbuilder-readiness
      containers:
      ...
The old definition, using the Kubernetes 1.5 beta annotation syntax:
#{{ if not .Values.disableSdncSdncPortal }}
...
metadata:
  name: sdnc-portal
  ...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        pod.beta.kubernetes.io/init-containers: '[
          {
              "args": [
                  "--container-name",
                  "sdnc-db-container",
                  "--container-name",
                  "sdnc-controller-container"
              ],
              "command": [
                  "/root/ready.py"
              ],
              "env": [
                  {
                      "name": "NAMESPACE",
                      "valueFrom": {
                          "fieldRef": {
                              "apiVersion": "v1",
                              "fieldPath": "metadata.namespace"
                          }
                      }
                  }
              ],
              "image": "{{ .Values.image.readiness }}",
              "imagePullPolicy": "{{ .Values.pullPolicy }}",
              "name": "sdnc-portal-readiness"
          }
        ]'
    spec:
      ...
Notes:
1. Use .apiVersion "apps/v1beta1" for Kubernetes versions before 1.8.0; otherwise, use .apiVersion "apps/v1beta2".
2. The value must match the name of the associated service in the all-services.yaml file in the same directory.
3. By default, .spec.podManagementPolicy has the value "OrderedReady" (a snippet showing where this field sits is sketched below):
   - With "OrderedReady", the StatefulSet controller respects the ordering guarantees: it waits for a Pod to become Running and Ready, or to terminate completely, before launching or terminating another Pod.
   - With "Parallel", the StatefulSet controller launches or terminates all Pods in parallel, and does not wait for Pods to become Running and Ready or to terminate completely before launching or terminating another Pod.
4. Because startODL.sh has to be changed for the cluster to function, two paths must be mounted:
   a. Mount /home/ubuntu/cluster/script/startODL.sh (local) over /opt/onap/sdnc/bin/startODL.sh (docker), so that the locally updated script with the cluster configuration is used.
   b. Mount /home/ubuntu/cluster/deploy (local) to /opt/opendaylight/current/deploy (docker), so that test bundles can be deployed dynamically from outside the pods.
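A minimal sketch of where .spec.podManagementPolicy sits in the DB StatefulSet, in case parallel pod management is wanted (this field is not part of the SDNC-163 change; the default "OrderedReady" applies when it is omitted):

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: sdnc-dbhost
spec:
  serviceName: "dbhost"
  replicas: {{ .Values.numberOfDbReplicas }}
  podManagementPolicy: Parallel   # default is OrderedReady when omitted
  template:
    ...
```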
Modify Script startODL.sh
This step is manual for now; it can be automated when the SDN-C cluster deployment is automated.
vi /home/ubuntu/cluster/script/startODL.sh
#!/bin/bash
###
# ============LICENSE_START=======================================================
# openECOMP : SDN-C
# ================================================================================
# Copyright (C) 2017 AT&T Intellectual Property. All rights
# reserved.
# ================================================================================
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============LICENSE_END=========================================================
###
function enable_odl_cluster(){
  echo "Installing Opendaylight cluster features"
  ${ODL_HOME}/bin/client -u karaf feature:install odl-mdsal-clustering
  ${ODL_HOME}/bin/client -u karaf feature:install odl-jolokia
  echo "Update cluster information statically"
  hm=$(hostname)
  echo "Get current Hostname ${hm}"
  #TODO Do naming check
  node=($(echo ${hm} | tr '-' '\n'))
  node_name=${node[0]}
  node_index=${node[1]}
  #TODO for dynamic clustering, have to use rest call to Master server
  #for getting the real replication number
  #sdnhostcluster should be the same as headless service
  node_list="${node_name}-0.sdnhostcluster.onap-sdnc.svc.cluster.local";
  for ((i=1;i<=2;i++));
  do
    node_list="${node_list} ${node_name}-$i.sdnhostcluster.onap-sdnc.svc.cluster.local"
  done
  /opt/opendaylight/current/bin/configure_cluster.sh $((node_index+1)) ${node_list}
}
# Install SDN-C platform components if not already installed and start container
ODL_HOME=${ODL_HOME:-/opt/opendaylight/current}
ODL_ADMIN_PASSWORD=${ODL_ADMIN_PASSWORD:-Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U}
SDNC_HOME=${SDNC_HOME:-/opt/onap/sdnc}
SLEEP_TIME=${SLEEP_TIME:-120}
MYSQL_PASSWD=${MYSQL_PASSWD:-openECOMP1.0}
#
# Wait for database
#
echo "Waiting for mysql"
until mysql -h dbhost -u root -p${MYSQL_PASSWD} mysql &> /dev/null
do
  printf "."
  sleep 1
done
echo -e "\nmysql ready"
if [ ! -f ${SDNC_HOME}/.installed ]
then
  echo "Installing SDN-C database"
  ${SDNC_HOME}/bin/installSdncDb.sh
  echo "Starting OpenDaylight"
  ${ODL_HOME}/bin/start
  echo "Waiting ${SLEEP_TIME} seconds for OpenDaylight to initialize"
  sleep ${SLEEP_TIME}
  echo "Installing SDN-C platform features"
  ${SDNC_HOME}/bin/installFeatures.sh
  if [ -x ${SDNC_HOME}/svclogic/bin/install.sh ]
  then
    echo "Installing directed graphs"
    ${SDNC_HOME}/svclogic/bin/install.sh
  fi
  enable_odl_cluster
  echo "Restarting OpenDaylight"
  ${ODL_HOME}/bin/stop
  echo "Installed at `date`" > ${SDNC_HOME}/.installed
fi
exec ${ODL_HOME}/bin/karaf
chmod 755 /home/ubuntu/cluster/script/startODL.sh
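As a worked example of what enable_odl_cluster does: on the pod with hostname sdnc-1 (node_index=1), the node list is built from the three sdnhostcluster DNS names and the function ends up invoking /opt/opendaylight/current/bin/configure_cluster.sh 2 sdnc-0.sdnhostcluster.onap-sdnc.svc.cluster.local sdnc-1.sdnhostcluster.onap-sdnc.svc.cluster.local sdnc-2.sdnhostcluster.onap-sdnc.svc.cluster.local, which writes the static cluster configuration for this pod as member 2 (1-based) of the three-node ODL cluster.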
Run SDN-C Cluster With nfs-provisioner POD
Option 1: Make the nfs-provisioner Pod Run on the Node Where the NFS Server Runs
In this option, the sdnc-dbhost pods use the PVC directories assigned to them by nfs-provisioner under /dockerdata-nfs.
An example of the sdnc-dbhost pvc directory
Because the /dockerdata-nfs directory is an NFS-mounted directory, we need to force the nfs-provisioner pod to run on the Kubernetes node where the NFS server is configured, using the following steps:
1. Find the node name.
   First find the NFS server node: run "ps -ef | grep nfs" on each node.
   - The node running the NFS server shows nfsd processes:

     ubuntu@sdnc-k8s:~$ ps -ef|grep nfs
     root      3473     2  0 Dec07 ?        00:00:00 [nfsiod]
     root     11072     2  0 Dec06 ?        00:00:00 [nfsd4_callbacks]
     root     11074     2  0 Dec06 ?        00:00:00 [nfsd]
     root     11075     2  0 Dec06 ?        00:00:00 [nfsd]
     root     11076     2  0 Dec06 ?        00:00:00 [nfsd]
     root     11077     2  0 Dec06 ?        00:00:00 [nfsd]
     root     11078     2  0 Dec06 ?        00:00:00 [nfsd]
     root     11079     2  0 Dec06 ?        00:00:03 [nfsd]
     root     11080     2  0 Dec06 ?        00:00:13 [nfsd]
     root     11081     2  0 Dec06 ?        00:00:42 [nfsd]
     ubuntu@sdnc-k8s:~$

   - A node that only runs the NFS client shows the nfs svc processes:

     ubuntu@sdnc-k8s-2:~$ ps -ef|grep nfs
     ubuntu    5911  5890  0 20:10 pts/0    00:00:00 grep --color=auto nfs
     root     18739     2  0 Dec06 ?        00:00:00 [nfsiod]
     root     18749     2  0 Dec06 ?        00:00:00 [nfsv4.0-svc]
     ubuntu@sdnc-k8s-2:~$

   Then get the node name with "kubectl get node". Example response:

     ubuntu@sdnc-k8s:~$ kubectl get node
     NAME         STATUS    ROLES     AGE       VERSION
     sdnc-k8s     Ready     master    6d        v1.8.4
     sdnc-k8s-2   Ready     <none>    6d        v1.8.4
     ubuntu@sdnc-k8s:~$

2. Set a label on the NFS server node:
   kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd
   Example:
     ubuntu@sdnc-k8s:~$ kubectl label nodes sdnc-k8s disktype=ssd
     node "sdnc-k8s" labeled

3. Check that the label has been set on the node:
   kubectl get node --show-labels

4. Update the nfs-provisioner pod template to force it to run on the NFS server node:
   In the sdnc-pv-pv.yaml file, add "spec.template.spec.nodeSelector" for the "nfs-provisioner" pod. A sketch of the nfs-provisioner pod template with a nodeSelector follows below.
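A minimal sketch of the nodeSelector addition for step 4, assuming the disktype=ssd label from step 2 (only the relevant part of the pod template is shown; the rest of the nfs-provisioner definition is unchanged):

```yaml
# Sketch only - nodeSelector added to the nfs-provisioner pod template
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd        # matches the label set on the NFS server node
      containers:
      - name: nfs-provisioner
        ...
```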
Option 2: Make nfs-provisioner Use a Non-Mounted Directory
In this option, the PVC directory used by the sdnc-dbhost pods exists only on the node where the nfs-provisioner pod is running (that node can be found with the command "kubectl get pod -n onap-sdnc -o wide | grep nfs").
Make the following change to the nfs-provisioner-deployment.yaml file:
| Field | Change to value | From value |
|---|---|---|
| .spec.template.spec.volumes.hostPath.path | /nfs-provisioner/{{ .Values.nsPrefix }}/sdnc/data (replace "dockerdata-nfs" with "nfs-provisioner") | /dockerdata-nfs/{{ .Values.nsPrefix }}/sdnc/data |
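A minimal sketch of the resulting volumes entry in the nfs-provisioner deployment after this change (only the hostPath volume is shown; the volume name is an assumption):

```yaml
# Sketch only - hostPath volume after the change
spec:
  template:
    spec:
      volumes:
      - name: nfs-data            # assumed volume name
        hostPath:
          path: /nfs-provisioner/{{ .Values.nsPrefix }}/sdnc/data
```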