This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy APPC within that Kubernetes cluster.
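At a high level, the flow is: initialize the control plane on the master with kubeadm, join the worker nodes, install a pod network add-on, and then deploy APPC through the OOM Helm charts. The sketch below is only an orientation (the bracketed values are placeholders); the detailed steps follow in the sections of this wiki.
Code Block |
---|
|
# on the master node
sudo kubeadm init
# on each worker node, using the join command printed by 'kubeadm init'
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# then deploy APPC via the OOM Helm charts (see the later sections) |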
...
Code Block |
---|
|
# If you installed coredns addon
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system coredns-65dcdb4cf-8dr7w 0/1 Pending 0 10m <none> <none>
kube-system etcd-k8s-master 1/1 Running 0 9m 10.147.99.149 k8s-master
kube-system kube-apiserver-k8s-master 1/1 Running 0 9m 10.147.99.149 k8s-master
kube-system kube-controller-manager-k8s-master 1/1 Running 0 9m 10.147.99.149 k8s-master
kube-system kube-proxy-jztl4 1/1 Running 0 10m 10.147.99.149 k8s-master
kube-system kube-scheduler-k8s-master 1/1 Running 0 9m 10.147.99.149 k8s-master
# (There will be 2 coredns pods with Kubernetes version 1.10.1)
# If you did not install the coredns addon, a kube-dns pod will be created instead
sudo kubectl get pods --all-namespaces -o wide
NAME READY STATUS RESTARTS AGE IP NODE
etcd-k8s-s1-master 1/1 Running 0 23d 10.147.99.131 k8s-s1-master
kube-apiserver-k8s-s1-master 1/1 Running 0 23d 10.147.99.131 k8s-s1-master
kube-controller-manager-k8s-s1-master 1/1 Running 0 23d 10.147.99.131 k8s-s1-master
kube-dns-6f4fd4bdf-czn68 3/3 Pending 0 23d <none> <none>
kube-proxy-ljt2h 1/1 Running 0 23d 10.147.99.148 k8s-s1-node0
kube-scheduler-k8s-s1-master 1/1 Running 0 23d 10.147.99.131 k8s-s1-master
# (Optional) run the following commands if you are curious.
sudo kubectl get node
sudo kubectl get secret
sudo kubectl config view
sudo kubectl config current-context
sudo kubectl get componentstatus
sudo kubectl get clusterrolebinding --all-namespaces
sudo kubectl get serviceaccounts --all-namespaces
sudo kubectl get pods --all-namespaces -o wide
sudo kubectl get services --all-namespaces -o wide
sudo kubectl cluster-info |
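The coredns (or kube-dns) pod stays in Pending until a pod network add-on is installed. The cluster shown in the later outputs runs Weave Net; one common way to install it at the time was the command below (a sketch only, check the current Weave Net documentation for your Kubernetes version).
Code Block |
---|
|
sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(sudo kubectl version | base64 | tr -d '\n')" |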
...
Note |
---|
You may use any known stable OOM release for the APPC deployment. The above URL downloads the latest OOM. |
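For example, to pin the checkout to a stable release instead of the latest master (the branch name below is a placeholder; pick whichever OOM release you want to deploy):
Code Block |
---|
|
cd oom
git branch -r                      # list available release branches
git checkout <stable-release-branch> |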
Customize the oom/kubernetes/onap parent chart (for example, the values.yaml file) to suit your deployment. You can selectively enable or disable ONAP components by setting each subchart's **enabled** flag to *true* or *false*, as in the example below.
Code Block |
---|
$ vi oom/kubernetes/onap/values.yaml
Example:
...
robot: # Robot Health Check
enabled: true
sdc:
enabled: false
appc:
enabled: true
so: # Service Orchestrator
enabled: false |
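After customizing the values, build and deploy the charts. The commands below are a minimal sketch of the Helm 2 / Tiller flow used by OOM at this time, assuming a local chart repository named "local" and a release name of "dev" (which matches the pod names in the outputs below); the detailed steps elsewhere in this wiki remain the authoritative procedure.
Code Block |
---|
|
cd oom/kubernetes
make all                                          # package the ONAP charts into the local Helm repo
helm install local/onap --name dev --namespace onap |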
...
Code Block |
---|
ubuntu@k8s-master:~/oom/kubernetes$ kubectl get pods --all-namespaces -o wide -w
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system etcd-k8s-master 1/1 Running 5 14d 10.12.5.171 k8s-master
kube-system kube-apiserver-k8s-master 1/1 Running 5 14d 10.12.5.171 k8s-master
kube-system kube-controller-manager-k8s-master 1/1 Running 5 14d 10.12.5.171 k8s-master
kube-system kube-dns-86f4d74b45-px44s 3/3 Running 21 27d 10.32.0.5 k8s-master
kube-system kube-proxy-25tm5 1/1 Running 8 27d 10.12.5.171 k8s-master
kube-system kube-proxy-6dt4z 1/1 Running 4 27d 10.12.5.174 k8s-appc1node1
kube-system kube-proxy-jmv67 1/1 Running 4 27d 10.12.5.193 k8s-appc2node2
kube-system kube-proxy-l8fks 1/1 Running 6 27d 10.12.5.194 k8s-appc3node3
kube-system kube-scheduler-k8s-master 1/1 Running 5 14d 10.12.5.171 k8s-master
kube-system tiller-deploy-84f4c8bb78-s6bq5 1/1 Running 0 4d 10.47.0.7 k8s-appc2node2
kube-system weave-net-bz7wr 2/2 Running 20 27d 10.12.5.194 k8s-appc3node3
kube-system weave-net-c2pxd 2/2 Running 13 27d 10.12.5.174 k8s-appc1node1
kube-system weave-net-jw29c 2/2 Running 20 27d 10.12.5.171 k8s-master
kube-system weave-net-kxxpl 2/2 Running 13 27d 10.12.5.193 k8s-appc2node2
onap dev-appc-0 0/2 PodInitializing 0 2m 10.47.0.5 k8s-appc2node2
onap dev-appc-1 0/2 PodInitializing 0 2m 10.36.0.8 k8s-appc3node3
onap dev-appc-2 0/2 PodInitializing 0 2m 10.44.0.7 k8s-appc1node1
onap dev-appc-cdt-8cbf9d4d9-mhp4b 1/1 Running 0 2m 10.47.0.1 k8s-appc2node2
onap dev-appc-db-0 2/2 Running 0 2m 10.36.0.5 k8s-appc3node3
onap dev-appc-dgbuilder-54766c5b87-xw6c6 0/1 PodInitializing 0 2m 10.44.0.2 k8s-appc1node1
onap dev-robot-785b9bfb45-9s2rs 0/1 PodInitializing 0 2m 10.36.0.7 k8s-appc3node3 |
Clean up the deployed ONAP instance
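A minimal cleanup sketch, assuming the release is named "dev" and Helm 2 / Tiller is in use; if you plan to re-deploy, also remove any leftover persistent volumes and their host data.
Code Block |
---|
|
helm delete dev --purge
kubectl delete namespace onap
# verify everything is gone
kubectl -n onap get pods
kubectl get pv |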
...
Code Block |
---|
|
$ kubectl -n onap get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
dev-appc-0 2/2 Running 0 26m 10.47.0.5 k8s-appc2node2
dev-appc-1 2/2 Running 0 26m 10.36.0.8 k8s-appc3node3
dev-appc-2 2/2 Running 0 26m 10.44.0.7 k8s-appc1node1
dev-appc-cdt-8cbf9d4d9-mhp4b 1/1 Running 0 26m 10.47.0.1 k8s-appc2node2
dev-appc-db-0 2/2 Running 0 26m 10.36.0.5 k8s-appc3node3
dev-appc-dgbuilder-54766c5b87-xw6c6 1/1 Running 0 26m 10.44.0.2 k8s-appc1node1
dev-robot-785b9bfb45-9s2rs 1/1 Running 0 26m 10.36.0.7 k8s-appc3node3
|
Code Block |
---|
|
$ kubectl get services --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 27d <none>
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 27d k8s-app=kube-dns
kube-system tiller-deploy ClusterIP 10.108.155.106 <none> 44134/TCP 14d app=helm,name=tiller
onap appc NodePort 10.107.234.237 <none> 8282:30230/TCP,1830:30231/TCP 27m app=appc,release=dev
onap appc-cdt NodePort 10.107.253.179 <none> 80:30289/TCP 27m app=appc-cdt,release=dev
onap appc-cluster ClusterIP None <none> 2550/TCP 27m app=appc,release=dev
onap appc-dbhost ClusterIP None <none> 3306/TCP 27m app=appc-db,release=dev
onap appc-dbhost-read ClusterIP 10.101.117.102 <none> 3306/TCP 27m app=appc-db,release=dev
onap appc-dgbuilder NodePort 10.102.138.232 <none> 3000:30228/TCP 27m app=appc-dgbuilder,release=dev
onap appc-sdnctldb01 ClusterIP None <none> 3306/TCP 27m app=appc-db,release=dev
onap appc-sdnctldb02 ClusterIP None <none> 3306/TCP 27m app=appc-db,release=dev
onap robot NodePort 10.110.229.236 <none> 88:30209/TCP 27m app=robot,release=dev
|
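The NodePort services are reachable on every cluster node's IP. For example, the appc service maps ODL port 8282 to NodePort 30230 and the CDT GUI maps port 80 to NodePort 30289; a quick reachability check is shown below (replace <node-ip> with any node's IP; this is only a connectivity sketch, not an API call).
Code Block |
---|
|
curl -sS -o /dev/null -w '%{http_code}\n' http://<node-ip>:30230/
curl -sS -o /dev/null -w '%{http_code}\n' http://<node-ip>:30289/ |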
...
Code Block |
---|
|
$ kubectl -n onap describe po/dev-appc-0
Name: dev-appc-0
Namespace: onap
Node: k8s-appc2node2/10.12.5.193
Start Time: Tue, 15 May 2018 11:31:47 -0400
Labels: app=appc
controller-revision-hash=dev-appc-7d976dd9b9
release=dev
statefulset.kubernetes.io/pod-name=dev-appc-0
Annotations: <none>
Status: Running
IP: 10.47.0.5
Controlled By: StatefulSet/dev-appc
Init Containers:
appc-readiness:
Container ID: docker://fdbf3011e7911b181a25c868f7d342951ced2832ed63c481253bb06447a0c04f
Image: oomk8s/readiness-check:2.0.0
Image ID: docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
Port: <none>
Command:
/root/ready.py
Args:
--container-name
appc-db
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 15 May 2018 11:32:00 -0400
Finished: Tue, 15 May 2018 11:32:16 -0400
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
Containers:
appc:
Container ID: docker://2b921a54a6cc19f9b7cdd3c8e7904ae3426019224d247fc31a74f92ec6f05ba0
Image: nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest
Image ID: docker-pullable://nexus3.onap.org:10001/onap/appc-image@sha256:ee8b64bd578f42169a86951cd45b1f2349192e67d38a7a350af729d1bf33069c
Ports: 8181/TCP, 1830/TCP
Command:
/opt/appc/bin/startODL.sh
State: Running
Started: Tue, 15 May 2018 11:40:13 -0400
Ready: True
Restart Count: 0
Readiness: tcp-socket :8181 delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'db-root-password' in secret 'dev-appc'> Optional: false
SDNC_CONFIG_DIR: /opt/onap/appc/data/properties
APPC_CONFIG_DIR: /opt/onap/appc/data/properties
DMAAP_TOPIC_ENV: SUCCESS
ENABLE_ODL_CLUSTER: true
APPC_REPLICAS: 3
Mounts:
/etc/localtime from localtime (ro)
/opt/onap/appc/bin/installAppcDb.sh from onap-appc-bin (rw)
/opt/onap/appc/bin/startODL.sh from onap-appc-bin (rw)
/opt/onap/appc/data/properties/aaiclient.properties from onap-appc-data-properties (rw)
/opt/onap/appc/data/properties/appc.properties from onap-appc-data-properties (rw)
/opt/onap/appc/data/properties/dblib.properties from onap-appc-data-properties (rw)
/opt/onap/appc/data/properties/svclogic.properties from onap-appc-data-properties (rw)
/opt/onap/appc/svclogic/bin/showActiveGraphs.sh from onap-appc-svclogic-bin (rw)
/opt/onap/appc/svclogic/config/svclogic.properties from onap-appc-svclogic-config (rw)
/opt/onap/ccsdk/bin/installSdncDb.sh from onap-sdnc-bin (rw)
/opt/onap/ccsdk/bin/startODL.sh from onap-sdnc-bin (rw)
/opt/onap/ccsdk/data/properties/aaiclient.properties from onap-sdnc-data-properties (rw)
/opt/onap/ccsdk/data/properties/dblib.properties from onap-sdnc-data-properties (rw)
/opt/onap/ccsdk/data/properties/svclogic.properties from onap-sdnc-data-properties (rw)
/opt/onap/ccsdk/svclogic/bin/showActiveGraphs.sh from onap-sdnc-svclogic-bin (rw)
/opt/onap/ccsdk/svclogic/config/svclogic.properties from onap-sdnc-svclogic-config (rw)
/opt/opendaylight/current/daexim from dev-appc-data (rw)
/opt/opendaylight/current/etc/org.ops4j.pax.logging.cfg from log-config (rw)
/var/log/onap from logs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
filebeat-onap:
Container ID: docker://b9143c9898a4a071d1d781359e190bdd297e31a2bd04223225a55ff8b1990b32
Image: docker.elastic.co/beats/filebeat:5.5.0
Image ID: docker-pullable://docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942
Port: <none>
State: Running
Started: Tue, 15 May 2018 11:40:14 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/filebeat/data from data-filebeat (rw)
/usr/share/filebeat/filebeat.yml from filebeat-conf (rw)
/var/log/onap from logs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-v9mnv (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
dev-appc-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: dev-appc-data-dev-appc-0
ReadOnly: false
localtime:
Type: HostPath (bare host directory volume)
Path: /etc/localtime
filebeat-conf:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-appc-filebeat
Optional: false
log-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-appc-logging-cfg
Optional: false
logs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
data-filebeat:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
onap-appc-data-properties:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-appc-onap-appc-data-properties
Optional: false
onap-appc-svclogic-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-appc-onap-appc-svclogic-config
Optional: false
onap-appc-svclogic-bin:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-appc-onap-appc-svclogic-bin
Optional: false
onap-appc-bin:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-appc-onap-appc-bin
Optional: false
onap-sdnc-data-properties:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-appc-onap-sdnc-data-properties
Optional: false
onap-sdnc-svclogic-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-appc-onap-sdnc-svclogic-config
Optional: false
onap-sdnc-svclogic-bin:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-appc-onap-sdnc-svclogic-bin
Optional: false
onap-sdnc-bin:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-appc-onap-sdnc-bin
Optional: false
default-token-v9mnv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-v9mnv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 29m (x2 over 29m) default-scheduler pod has unbound PersistentVolumeClaims (repeated 3 times)
Normal Scheduled 29m default-scheduler Successfully assigned dev-appc-0 to k8s-appc2node2
Normal SuccessfulMountVolume 29m kubelet, k8s-appc2node2 MountVolume.SetUp succeeded for volume "data-filebeat"
Normal SuccessfulMountVolume 29m kubelet, k8s-appc2node2 MountVolume.SetUp succeeded for volume "localtime"
Normal SuccessfulMountVolume 29m kubelet, k8s-appc2node2 MountVolume.SetUp succeeded for volume "logs"
Normal SuccessfulMountVolume 29m kubelet, k8s-appc2node2 MountVolume.SetUp succeeded for volume "dev-appc-data0"
Normal SuccessfulMountVolume 29m kubelet, k8s-appc2node2 MountVolume.SetUp succeeded for volume "onap-sdnc-svclogic-bin"
Normal SuccessfulMountVolume 29m kubelet, k8s-appc2node2 MountVolume.SetUp succeeded for volume "onap-sdnc-bin"
Normal SuccessfulMountVolume 29m kubelet, k8s-appc2node2 MountVolume.SetUp succeeded for volume "onap-appc-data-properties"
Normal SuccessfulMountVolume 29m kubelet, k8s-appc2node2 MountVolume.SetUp succeeded for volume "onap-sdnc-data-properties"
Normal SuccessfulMountVolume 29m kubelet, k8s-appc2node2 MountVolume.SetUp succeeded for volume "filebeat-conf"
Normal SuccessfulMountVolume 29m (x6 over 29m) kubelet, k8s-appc2node2 (combined from similar events): MountVolume.SetUp succeeded for volume "default-token-v9mnv"
Normal Pulling 29m kubelet, k8s-appc2node2 pulling image "oomk8s/readiness-check:2.0.0"
Normal Pulled 29m kubelet, k8s-appc2node2 Successfully pulled image "oomk8s/readiness-check:2.0.0"
Normal Created 29m kubelet, k8s-appc2node2 Created container
Normal Started 29m kubelet, k8s-appc2node2 Started container
Normal Pulling 29m kubelet, k8s-appc2node2 pulling image "nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest"
Normal Pulled 21m kubelet, k8s-appc2node2 Successfully pulled image "nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest"
Normal Created 21m kubelet, k8s-appc2node2 Created container
Normal Started 21m kubelet, k8s-appc2node2 Started container
Normal Pulling 21m kubelet, k8s-appc2node2 pulling image "docker.elastic.co/beats/filebeat:5.5.0"
Normal Pulled 21m kubelet, k8s-appc2node2 Successfully pulled image "docker.elastic.co/beats/filebeat:5.5.0"
Normal Created 21m kubelet, k8s-appc2node2 Created container
Warning Unhealthy 5m (x16 over 21m) kubelet, k8s-appc2node2 Readiness probe failed: dial tcp 10.47.0.5:8181: getsockopt: connection refused |
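The readiness probe failures on port 8181 are expected while OpenDaylight is still starting inside the appc container; the pod turns Ready once the port opens (as seen in the earlier 2/2 Running output). If a pod stays not-ready, the container logs are the first place to look, for example:
Code Block |
---|
|
# follow the appc container log
kubectl -n onap logs -f dev-appc-0 -c appc
# or open a shell in the container for a closer look
kubectl -n onap exec -it dev-appc-0 -c appc -- bash |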
...
Code Block |
---|
|
# decrease APPC pods to 1
$ kubectl scale statefulset dev-appc -n onap --replicas=1
statefulset "dev-appc" scaled
# verify that two APPC pods terminate with one APPC pod running
$ kubectl get pods --all-namespaces -a | grep dev-appc
onap dev-appc-0 2/2 Running 0 43m
onap dev-appc-1 2/2 Terminating 0 43m
onap dev-appc-2 2/2 Terminating 0 43m
onap dev-appc-cdt-8cbf9d4d9-mhp4b 1/1 Running 0 43m
onap dev-appc-db-0 2/2 Running 0 43m
onap dev-appc-dgbuilder-54766c5b87-xw6c6 1/1 Running 0 43m
# increase APPC pods to 3
$ kubectl scale statefulset dev-appc -n onap --replicas=3
statefulset "dev-appc" scaled
# verify that three APPC pods are running
$ kubectl get pods --all-namespaces -o wide | grep dev-appc
onap dev-appc-0 2/2 Running 0 49m 10.47.0.5 k8s-appc2node2
onap dev-appc-1 2/2 Running 0 3m 10.36.0.8 k8s-appc3node3
onap dev-appc-2 2/2 Running 0 3m 10.44.0.7 k8s-appc1node1
onap dev-appc-cdt-8cbf9d4d9-mhp4b 1/1 Running 0 49m 10.47.0.1 k8s-appc2node2
onap dev-appc-db-0 2/2 Running 0 49m 10.36.0.5 k8s-appc3node3
onap dev-appc-dgbuilder-54766c5b87-xw6c6 1/1 Running 0 49m 10.44.0.2 k8s-appc1node1
|
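Note that kubectl scale only changes the number of running pods; the release was deployed with APPC_REPLICAS=3 and ENABLE_ODL_CLUSTER=true (see the pod description above), so for a persistent change it may be preferable to set the replica count through the chart and upgrade the release. The value name below is an assumption; check oom/kubernetes/appc/values.yaml for the actual key.
Code Block |
---|
|
# assumes the chart exposes the APPC replica count as appc.replicaCount
helm upgrade dev local/onap --set appc.replicaCount=3 |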