ubuntu@a-rke:~$ sudo rke up
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [rke.onap.cloud]
FATA[0000] Unsupported Docker version found [18.06.3-ce], supported versions are [1.11.x 1.12.x 1.13.x 17.03.x]
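Before retrying, it is worth confirming the mismatch on the host; a quick check using standard docker and rke CLI options:
docker version --format '{{.Server.Version}}'   # reports 18.06.3-ce here
rke --version                                   # the older rke binary only accepts Docker up to 17.03.x
The options are to downgrade Docker to 17.03 or to move to a newer rke binary - the upgrade path in the next entry takes the second route.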
DI 20190225-2: RKE upgrade from 0.1.15 to 0.1.16 - not working
Workaround: run rke remove, regenerate the yaml with rke config (or hand-edit the versions in the existing cluster.yml), then rke up - sketched below and transcribed after that
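A sketch of that sequence, assuming the usual rke_linux-amd64 release asset naming for the 0.1.16 binary (the actual transcript follows):
wget https://github.com/rancher/rke/releases/download/v0.1.16/rke_linux-amd64
chmod +x rke_linux-amd64 && sudo mv rke_linux-amd64 /usr/local/bin/rke
sudo rke remove                  # tear down the cluster built with the old binary
rke config --name cluster.yml    # regenerate the yaml (or hand-edit the versions)
sudo rke up                      # rebuild with the new binary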
ubuntu@a-rke:~$ sudo rke remove
Are you sure you want to remove Kubernetes cluster [y/n]: y
INFO[0002] Tearing down Kubernetes cluster
INFO[0002] [dialer] Setup tunnel for host [rke.onap.cloud]
INFO[0002] [worker] Tearing down Worker Plane..
INFO[0002] [remove/kubelet] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [remove/kube-proxy] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [remove/service-sidekick] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [worker] Successfully tore down Worker Plane..
INFO[0003] [controlplane] Tearing down the Controller Plane..
INFO[0003] [remove/kube-apiserver] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [remove/kube-controller-manager] Successfully removed container on host [rke.onap.cloud]
INFO[0004] [remove/kube-scheduler] Successfully removed container on host [rke.onap.cloud]
INFO[0004] [controlplane] Host [rke.onap.cloud] is already a worker host, skipping delete kubelet and kubeproxy.
INFO[0004] [controlplane] Successfully tore down Controller Plane..
INFO[0004] [etcd] Tearing down etcd plane..
INFO[0004] [remove/etcd] Successfully removed container on host [rke.onap.cloud]
INFO[0004] [etcd] Successfully tore down etcd plane..
INFO[0004] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0004] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0004] [hosts] Running cleaner container on host [rke.onap.cloud]
INFO[0005] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud]
INFO[0005] [hosts] Removing cleaner container on host [rke.onap.cloud]
INFO[0005] [hosts] Removing dead container logs on host [rke.onap.cloud]
INFO[0006] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud]
INFO[0006] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [hosts] Successfully cleaned up host [rke.onap.cloud]
INFO[0006] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0006] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0006] [hosts] Running cleaner container on host [rke.onap.cloud]
INFO[0007] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud]
INFO[0008] [hosts] Removing cleaner container on host [rke.onap.cloud]
INFO[0008] [hosts] Removing dead container logs on host [rke.onap.cloud]
INFO[0008] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud]
INFO[0009] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud]
INFO[0009] [hosts] Successfully cleaned up host [rke.onap.cloud]
INFO[0009] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0009] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0009] [hosts] Running cleaner container on host [rke.onap.cloud]
INFO[0010] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud]
INFO[0010] [hosts] Removing cleaner container on host [rke.onap.cloud]
INFO[0010] [hosts] Removing dead container logs on host [rke.onap.cloud]
INFO[0011] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud]
INFO[0011] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud]
INFO[0011] [hosts] Successfully cleaned up host [rke.onap.cloud]
INFO[0011] Removing local admin Kubeconfig: ./kube_config_cluster.yml
INFO[0011] Local admin Kubeconfig removed successfully
INFO[0011] Cluster removed successfully
ubuntu@a-rke:~$ rke config --name cluster.yml
ubuntu@a-rke:~$ sudo rke up
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [rke.onap.cloud]
INFO[0000] [network] Deploying port listener containers
INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [rke.onap.cloud]
INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [rke.onap.cloud]
INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [rke.onap.cloud]
INFO[0002] [network] Port listener containers deployed successfully
INFO[0002] [network] Running control plane -> etcd port checks
INFO[0003] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud]
INFO[0003] [network] Running control plane -> worker port checks
INFO[0004] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud]
INFO[0004] [network] Running workers -> control plane port checks
INFO[0005] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud]
INFO[0005] [network] Checking KubeAPI port Control Plane hosts
INFO[0005] [network] Removing port listener containers
INFO[0005] [remove/rke-etcd-port-listener] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [remove/rke-cp-port-listener] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [remove/rke-worker-port-listener] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [network] Port listener containers removed successfully
INFO[0006] [certificates] Attempting to recover certificates from backup on [etcd,controlPlane] hosts
INFO[0007] [certificates] No Certificate backup found on [etcd,controlPlane] hosts
INFO[0007] [certificates] Generating CA kubernetes certificates
INFO[0007] [certificates] Generating Kubernetes API server certficates
INFO[0008] [certificates] Generating Kube Controller certificates
INFO[0008] [certificates] Generating Kube Scheduler certificates
INFO[0008] [certificates] Generating Kube Proxy certificates
INFO[0009] [certificates] Generating Node certificate
INFO[0009] [certificates] Generating admin certificates and kubeconfig
INFO[0009] [certificates] Generating etcd-rke.onap.cloud certificate and key
INFO[0009] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
INFO[0009] [certificates] Generating Kubernetes API server proxy client certificates
INFO[0010] [certificates] Temporarily saving certs to [etcd,controlPlane] hosts
INFO[0016] [certificates] Saved certs to [etcd,controlPlane] hosts
INFO[0016] [reconcile] Reconciling cluster state
INFO[0016] [reconcile] This is newly generated cluster
INFO[0016] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0022] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0022] Pre-pulling kubernetes images
INFO[0022] Kubernetes images pulled successfully
INFO[0022] [etcd] Building up etcd plane..
INFO[0023] [etcd] Successfully started [etcd] container on host [rke.onap.cloud]
INFO[0023] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [rke.onap.cloud]
INFO[0028] [certificates] Successfully started [rke-bundle-cert] container on host [rke.onap.cloud]
INFO[0029] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [rke.onap.cloud]
INFO[0029] [etcd] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0030] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0030] [etcd] Successfully started etcd plane..
INFO[0030] [controlplane] Building up Controller Plane..
INFO[0031] [controlplane] Successfully started [kube-apiserver] container on host [rke.onap.cloud]
INFO[0031] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [rke.onap.cloud]
INFO[0045] [healthcheck] service [kube-apiserver] on host [rke.onap.cloud] is healthy
INFO[0046] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0046] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0047] [controlplane] Successfully started [kube-controller-manager] container on host [rke.onap.cloud]
INFO[0047] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [rke.onap.cloud]
INFO[0052] [healthcheck] service [kube-controller-manager] on host [rke.onap.cloud] is healthy
INFO[0053] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0053] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0054] [controlplane] Successfully started [kube-scheduler] container on host [rke.onap.cloud]
INFO[0054] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [rke.onap.cloud]
INFO[0059] [healthcheck] service [kube-scheduler] on host [rke.onap.cloud] is healthy
INFO[0060] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0060] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0060] [controlplane] Successfully started Controller Plane..
INFO[0060] [authz] Creating rke-job-deployer ServiceAccount
INFO[0060] [authz] rke-job-deployer ServiceAccount created successfully
INFO[0060] [authz] Creating system:node ClusterRoleBinding
INFO[0060] [authz] system:node ClusterRoleBinding created successfully
INFO[0060] [certificates] Save kubernetes certificates as secrets
INFO[0060] [certificates] Successfully saved certificates as kubernetes secret [k8s-certs]
INFO[0060] [state] Saving cluster state to Kubernetes
INFO[0061] [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state
INFO[0061] [state] Saving cluster state to cluster nodes
INFO[0061] [state] Successfully started [cluster-state-deployer] container on host [rke.onap.cloud]
INFO[0062] [remove/cluster-state-deployer] Successfully removed container on host [rke.onap.cloud]
INFO[0062] [worker] Building up Worker Plane..
INFO[0062] [remove/service-sidekick] Successfully removed container on host [rke.onap.cloud]
INFO[0063] [worker] Successfully started [kubelet] container on host [rke.onap.cloud]
INFO[0063] [healthcheck] Start Healthcheck on service [kubelet] on host [rke.onap.cloud]
INFO[0068] [healthcheck] service [kubelet] on host [rke.onap.cloud] is healthy
INFO[0069] [worker] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0070] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0070] [worker] Successfully started [kube-proxy] container on host [rke.onap.cloud]
INFO[0070] [healthcheck] Start Healthcheck on service [kube-proxy] on host [rke.onap.cloud]
INFO[0076] [healthcheck] service [kube-proxy] on host [rke.onap.cloud] is healthy
INFO[0076] [worker] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0077] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0077] [worker] Successfully started Worker Plane..
INFO[0077] [sync] Syncing nodes Labels and Taints
INFO[0077] [sync] Successfully synced nodes Labels and Taints
INFO[0077] [network] Setting up network plugin: canal
INFO[0077] [addons] Saving addon ConfigMap to Kubernetes
INFO[0077] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin
INFO[0077] [addons] Executing deploy job..
INFO[0082] [addons] Setting up KubeDNS
INFO[0082] [addons] Saving addon ConfigMap to Kubernetes
INFO[0082] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon
INFO[0082] [addons] Executing deploy job..
INFO[0087] [addons] KubeDNS deployed successfully..
INFO[0087] [addons] Setting up Metrics Server
INFO[0087] [addons] Saving addon ConfigMap to Kubernetes
INFO[0087] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-metrics-addon
INFO[0087] [addons] Executing deploy job..
INFO[0092] [addons] KubeDNS deployed successfully..
INFO[0092] [ingress] Setting up nginx ingress controller
INFO[0092] [addons] Saving addon ConfigMap to Kubernetes
INFO[0092] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-ingress-controller
INFO[0092] [addons] Executing deploy job..
INFO[0097] [ingress] ingress controller nginx is successfully deployed
INFO[0097] [addons] Setting up user addons
INFO[0097] [addons] Checking for included user addons
WARN[0097] [addons] Unable to determine if is a file path or url, skipping
INFO[0097] [addons] Deploying rke-user-includes-addons
INFO[0097] [addons] Saving addon ConfigMap to Kubernetes
INFO[0097] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-user-includes-addons
INFO[0097] [addons] Executing deploy job..
WARN[0128] Failed to deploy addon execute job [rke-user-includes-addons]: Failed to get job complete status: <nil>
INFO[0128] Finished building Kubernetes cluster successfully
ubuntu@a-rke:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec26c4bd24b5 846921f0fe0e "/server" 10 minutes ago Up 10 minutes k8s_default-http-backend_default-http-backend-797c5bc547-45msr_ingress-nginx_0eddfe19-394e-11e9-b708-000d3a0e23f3_0
f8d5db205e14 8a7739f672b4 "/sidecar --v=2 --lo…" 10 minutes ago Up 10 minutes k8s_sidecar_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
490461545ae4 rancher/metrics-server-amd64 "/metrics-server --s…" 10 minutes ago Up 10 minutes k8s_metrics-server_metrics-server-97bc649d5-q84tz_kube-system_0c566ec8-394e-11e9-b708-000d3a0e23f3_0
aaf03b62bd41 6816817d9dce "/dnsmasq-nanny -v=2…" 10 minutes ago Up 10 minutes k8s_dnsmasq_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
58ec007db72f 55ffe31ac578 "/kube-dns --domain=…" 10 minutes ago Up 10 minutes k8s_kubedns_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
0a95c06f6aa6 e183460c484d "/cluster-proportion…" 10 minutes ago Up 10 minutes k8s_autoscaler_kube-dns-autoscaler-5db9bbb766-6slz7_kube-system_08b5495c-394e-11e9-b708-000d3a0e23f3_0
968a7c99b210 rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_default-http-backend-797c5bc547-45msr_ingress-nginx_0eddfe19-394e-11e9-b708-000d3a0e23f3_0
69969b331e49 rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_metrics-server-97bc649d5-q84tz_kube-system_0c566ec8-394e-11e9-b708-000d3a0e23f3_0
baa5f03c16ff rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
82b2a9f640cb rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_kube-dns-autoscaler-5db9bbb766-6slz7_kube-system_08b5495c-394e-11e9-b708-000d3a0e23f3_0
953a4d4be0c1 df4469c42185 "/usr/bin/dumb-init …" 10 minutes ago Up 10 minutes k8s_nginx-ingress-controller_nginx-ingress-controller-dfhp8_ingress-nginx_0ed3bdbf-394e-11e9-b708-000d3a0e23f3_0
cce552840749 rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_nginx-ingress-controller-dfhp8_ingress-nginx_0ed3bdbf-394e-11e9-b708-000d3a0e23f3_0
baa65f9c6f97 f0fad859c909 "/opt/bin/flanneld -…" 10 minutes ago Up 10 minutes k8s_kube-flannel_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
1736ce68f41a 9f355e076ea7 "/install-cni.sh" 10 minutes ago Up 10 minutes k8s_install-cni_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
615d3f702ee7 7eca10056c8e "start_runit" 10 minutes ago Up 10 minutes k8s_calico-node_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
1c4a702f0f18 rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
0da1cada08e1 rancher/hyperkube:v1.11.6-rancher1 "/opt/rke-tools/entr…" 10 minutes ago Up 10 minutes kube-proxy
57f44998f34a rancher/hyperkube:v1.11.6-rancher1 "/opt/rke-tools/entr…" 11 minutes ago Up 11 minutes kubelet
50f424c4daec rancher/hyperkube:v1.11.6-rancher1 "/opt/rke-tools/entr…" 11 minutes ago Up 11 minutes kube-scheduler
502d327912d9 rancher/hyperkube:v1.11.6-rancher1 "/opt/rke-tools/entr…" 11 minutes ago Up 11 minutes kube-controller-manager
9fc706bbf3a5 rancher/hyperkube:v1.11.6-rancher1 "/opt/rke-tools/entr…" 11 minutes ago Up 11 minutes kube-apiserver
2e7630c2047c rancher/coreos-etcd:v3.2.18 "/usr/local/bin/etcd…" 11 minutes ago Up 11 minutes etcd
fef566337eb6 rancher/rke-tools:v0.1.15 "/opt/rke-tools/rke-…" 26 minutes ago Up 26 minutes etcd-rolling-snapshots
amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx default-http-backend-797c5bc547-m8hbx 1/1 Running 0 1h
ingress-nginx nginx-ingress-controller-2v7w7 1/1 Running 0 1h
kube-system canal-thmfg 3/3 Running 0 1h
kube-system kube-dns-7588d5b5f5-j66s8 3/3 Running 0 1h
kube-system kube-dns-autoscaler-5db9bbb766-rg5n8 1/1 Running 0 1h
kube-system metrics-server-97bc649d5-jd2rr 1/1 Running 0 1h
kube-system rke-ingress-controller-deploy-job-znp9n 0/1 Completed 0 1h
kube-system rke-kubedns-addon-deploy-job-dzxsj 0/1 Completed 0 1h
kube-system rke-metrics-addon-deploy-job-gpm4j 0/1 Completed 0 1h
kube-system rke-network-plugin-deploy-job-kqdds 0/1 Completed 0 1h
kube-system tiller-deploy-69458576b-khgr5 1/1 Running 0 1h
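The WARN[0128] on rke-user-includes-addons near the end of the rke up log can be followed up with kubectl once the cluster is reachable - the job name below is assumed from the rke-*-deploy-job naming visible in the pod list:
kubectl --kubeconfig kube_config_cluster.yml -n kube-system get jobs
kubectl --kubeconfig kube_config_cluster.yml -n kube-system logs -l job-name=rke-user-includes-addons-deploy-job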
DI 20190226-1: RKE up segmentation fault on 0.1.16 - use correct user
amdocs@obriensystemsu0:~$ sudo rke up
Segmentation fault (core dumped)
# issue: cluster.yml was using ubuntu as the node user instead of amdocs for this particular VM
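A minimal sketch of the relevant cluster.yml fragment - the node user must be the account that actually exists on that VM and owns the ssh key and docker access (amdocs here, not ubuntu); the address and ssh_key_path values are placeholders:
nodes:
- address: <node address>
  user: amdocs                       # was ubuntu - not the right account on this VM
  role: [controlplane, worker, etcd]
  ssh_key_path: ~/.ssh/id_rsa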
DI 20190227-1: Verify no 110 pod limit per VM
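The per-node pod capacity can be read straight off the node status (the kubelet default is 110 pods per node):
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods
If a higher ceiling is needed, RKE passes kubelet flags through services.kubelet.extra_args in cluster.yml, e.g. max-pods (the value below is only illustrative):
services:
  kubelet:
    extra_args:
      max-pods: "250"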
DI 20190228-1: deploy casablanca MR to RKE under K8S 1.11.6, Docker 18.06, Helm 2.12.3
sudo wget https://git.onap.org/oom/plain/kubernetes/onap/resources/environments/dev.yaml
sudo cp dev.yaml dev0.yaml
sudo vi dev0.yaml
sudo cp dev0.yaml dev1.yaml
sudo cp logging-analytics/deploy/cd.sh .
sudo ./cd.sh -b casablanca -e onap -p false nexus3.onap.org:10001 -f true -s 300 -c true -d false -w false -r false
Not working with a Helm 2.12.3 deployment - just using Helm 2.9.1 for now:
Error: Chart incompatible with Tiller v2.12.3
The constraint is in the casablanca branch only - to run a newer Tiller, flip the tillerVersion requirement at
https://git.onap.org/oom/tree/kubernetes/onap/Chart.yaml?h=casablanca#n24
tillerVersion: "~2.9.1"
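A sketch of dropping the client and Tiller back to 2.9.1 instead (the googleapis bucket was the standard Helm 2.x download location; the tiller service account name is assumed to already exist):
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
tar -xzf helm-v2.9.1-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm init --upgrade --service-account tiller   # re-deploys a matching 2.9.1 tiller
helm version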
kubectl get pods --all-namespaces -o wide
NAMESPACE   NAME                                                     READY   STATUS             RESTARTS   AGE     IP           NODE             NOMINATED NODE   READINESS GATES
onap        onap-aai-aai-babel-8c755bcfc-kmzdm                       2/2     Running            0          29m     10.42.5.8    18.220.70.253    <none>           <none>
onap        onap-aai-aai-champ-78b9d7f68b-98tm9                      0/2     Init:0/1           2          29m     10.42.3.6    3.17.76.33       <none>           <none>
onap        onap-aai-aai-data-router-64fcfbc5bb-wkkvz                1/2     CrashLoopBackOff   9          29m     10.42.5.7    18.220.70.253    <none>           <none>
onap        onap-aai-aai-elasticsearch-6dcf5d9966-j7z67              1/1     Running            0          29m     10.42.4.4    18.188.214.137   <none>           <none>
onap        onap-aai-aai-gizmo-5bddb87589-zn8pl                      2/2     Running            0          29m     10.42.4.3    18.188.214.137   <none>           <none>
onap        onap-aai-aai-graphadmin-774f9d698f-f8lwv                 0/2     Init:0/1           2          29m     10.42.5.4    18.220.70.253    <none>           <none>
onap        onap-aai-aai-graphadmin-create-db-schema-94q4l           0/1     Init:Error         0          18m     10.42.4.16   18.188.214.137   <none>           <none>
onap        onap-aai-aai-graphadmin-create-db-schema-s42pq           0/1     Init:0/1           0          7m54s   10.42.5.20   18.220.70.253    <none>           <none>
onap        onap-aai-aai-graphadmin-create-db-schema-tsvcw           0/1     Init:Error         0          29m     10.42.4.5    18.188.214.137   <none>           <none>
onap        onap-aai-aai-modelloader-845fc684bd-r7mdw                2/2     Running            0          29m     10.42.4.6    18.188.214.137   <none>           <none>
onap        onap-aai-aai-resources-67f8dfcbdb-kz6cp                  0/2     Init:0/1           2          29m     10.42.5.11   18.220.70.253    <none>           <none>
onap        onap-aai-aai-schema-service-6c56b45b7c-7zlfz             2/2     Running            0          29m     10.42.3.7    3.17.76.33       <none>           <none>
onap        onap-aai-aai-search-data-5d8d7759b8-flxwj                2/2     Running            0          29m     10.42.3.9    3.17.76.33       <none>           <none>
onap        onap-aai-aai-sparky-be-8444df749c-mzc2n                  0/2     Init:0/1           0          29m     10.42.4.11   18.188.214.137   <none>           <none>
onap        onap-aai-aai-spike-54ff77787f-d678x                      2/2     Running            0          29m     10.42.5.6    18.220.70.253    <none>           <none>
onap        onap-aai-aai-traversal-6ff868f477-lzv2f                  0/2     Init:0/1           2          29m     10.42.3.8    3.17.76.33       <none>           <none>
onap        onap-aai-aai-traversal-update-query-data-9g2b8           0/1     Init:0/1           2          29m     10.42.5.12   18.220.70.253    <none>           <none>
onap        onap-dmaap-dbc-pg-0                                      1/1     Running            0          29m     10.42.5.9    18.220.70.253    <none>           <none>
onap        onap-dmaap-dbc-pg-1                                      1/1     Running            0          26m     10.42.3.14   3.17.76.33       <none>           <none>
onap        onap-dmaap-dbc-pgpool-8666b57857-97zjc                   1/1     Running            0          29m     10.42.5.5    18.220.70.253    <none>           <none>
onap        onap-dmaap-dbc-pgpool-8666b57857-vr8gk                   1/1     Running            0          29m     10.42.4.8    18.188.214.137   <none>           <none>
onap        onap-dmaap-dmaap-bc-745995bf74-m6hhq                     0/1     Init:0/2           2          29m     10.42.4.12   18.188.214.137   <none>           <none>
onap        onap-dmaap-dmaap-bc-post-install-6ff4j                   1/1     Running            0          29m     10.42.4.9    18.188.214.137   <none>           <none>
onap        onap-dmaap-dmaap-dr-db-0                                 1/1     Running            0          29m     10.42.4.10   18.188.214.137   <none>           <none>
onap        onap-dmaap-dmaap-dr-db-1                                 1/1     Running            0          24m     10.42.5.15   18.220.70.253    <none>           <none>
onap        onap-dmaap-dmaap-dr-node-0                               2/2     Running            0          29m     10.42.3.11   3.17.76.33       <none>           <none>
onap        onap-dmaap-dmaap-dr-prov-fbf6c94f5-v9bmq                 2/2     Running            0          29m     10.42.5.10   18.220.70.253    <none>           <none>
onap        onap-dmaap-message-router-0                              1/1     Running            0          29m     10.42.4.14   18.188.214.137   <none>           <none>
onap        onap-dmaap-message-router-kafka-0                        1/1     Running            1          29m     10.42.5.13   18.220.70.253    <none>           <none>
onap        onap-dmaap-message-router-kafka-1                        1/1     Running            1          29m     10.42.3.13   3.17.76.33       <none>           <none>
onap        onap-dmaap-message-router-kafka-2                        1/1     Running            0          29m     10.42.4.15   18.188.214.137   <none>           <none>
onap        onap-dmaap-message-router-mirrormaker-8587c4c9cf-lfnd8   0/1     CrashLoopBackOff   9          29m     10.42.4.7    18.188.214.137   <none>           <none>
onap        onap-dmaap-message-router-zookeeper-0                    1/1     Running            0          29m     10.42.5.14   18.220.70.253    <none>           <none>
onap        onap-dmaap-message-router-zookeeper-1                    1/1     Running            0          29m     10.42.4.13   18.188.214.137   <none>           <none>
onap        onap-dmaap-message-router-zookeeper-2                    1/1     Running            0          29m     10.42.3.12   3.17.76.33       <none>           <none>
onap        onap-nfs-provisioner-nfs-provisioner-57c999dc57-mdcw5    1/1     Running            0          24m     10.42.3.15   3.17.76.33       <none>           <none>
onap        onap-robot-robot-677bdbb454-zj9jk                        1/1     Running            0          24m     10.42.5.16   18.220.70.253    <none>           <none>
onap        onap-so-so-8569947cbd-jn5x4                              0/1     Init:0/1           1          13m     10.42.4.19   18.188.214.137   <none>           <none>
onap        onap-so-so-bpmn-infra-78c8fd665d-b47qn                   0/1     Init:0/1           1          13m     10.42.3.16   3.17.76.33       <none>           <none>
onap        onap-so-so-catalog-db-adapter-565f9767ff-lvbgx           0/1     Init:0/1           1          13m     10.42.3.17   3.17.76.33       <none>           <none>
onap        onap-so-so-mariadb-config-job-d9sdb                      0/1     Init:0/2           0          3m37s   10.42.3.20   3.17.76.33       <none>           <none>
onap        onap-so-so-mariadb-config-job-rkqdl                      0/1     Init:Error         0          13m     10.42.5.19   18.220.70.253    <none>           <none>
onap        onap-so-so-monitoring-69b9fdd94c-dks4v                   1/1     Running            0          13m     10.42.4.18   18.188.214.137   <none>           <none>
onap        onap-so-so-openstack-adapter-5f9cf896d7-mgbdd            0/1     Init:0/1           1          13m     10.42.4.17   18.188.214.137   <none>           <none>
onap        onap-so-so-request-db-adapter-5c9bfd7b57-2krnp           0/1     Init:0/1           1          13m     10.42.3.18   3.17.76.33       <none>           <none>
onap        onap-so-so-sdc-controller-6fb5cf5775-bsxhm               0/1     Init:0/1           1          13m     10.42.4.20   18.188.214.137   <none>           <none>
onap        onap-so-so-sdnc-adapter-8555689c75-r6vkb                 1/1     Running            0          13m     10.42.5.18   18.220.70.253    <none>           <none>
onap        onap-so-so-vfc-adapter-68fccc8bb8-c56t2                  0/1     Init:0/1           1          13m     10.42.4.21   18.188.214.137   <none>           <none>
onap        onap-so-so-vnfm-adapter-65c4c5944b-72nlf                 1/1     Running            0          13m     10.42.3.19   3.17.76.33       <none>           <none>
# on worker nodes only
# nfs client
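The two comments above refer to preparing the NFS client on each worker VM so the nfs-provisioner volumes can mount - a sketch assuming Ubuntu workers (the package name differs on other distros):
sudo apt-get update
sudo apt-get install -y nfs-common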
DI 20190507: ARM support using RKE 0.2.1 ARM friendly install