Tracking
...
```
# on your laptop/where your cert is
# chmod 777 your cert before you scp it over
obrienbiometrics:full michaelobrien$ scp ~/wse_onap/onap_rsa ubuntu@rke0.onap.info:~/

# on the host
sudo cp onap_rsa ~/.ssh
sudo chmod 400 ~/.ssh/onap_rsa
sudo chown ubuntu:ubuntu ~/.ssh/onap_rsa
# just verify
sudo vi ~/.ssh/authorized_keys

git clone --recurse-submodules https://gerrit.onap.org/r/oom
sudo cp oom/kubernetes/contrib/tools/rke/rke_setup.sh .
sudo nohup ./rke_setup.sh -b master -s 104.209.161.210 -e onap -k onap_rsa -l ubuntu &

ubuntu@a-rke0-master:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-797c5bc547-55fpn     1/1     Running     0          4m
ingress-nginx   nginx-ingress-controller-znhgz            1/1     Running     0          4m
kube-system     canal-dqt2m                               3/3     Running     0          5m
kube-system     kube-dns-7588d5b5f5-pzdfh                 3/3     Running     0          5m
kube-system     kube-dns-autoscaler-5db9bbb766-b7vvg      1/1     Running     0          5m
kube-system     metrics-server-97bc649d5-fmqjd            1/1     Running     0          4m
kube-system     rke-ingress-controller-deploy-job-dxmbd   0/1     Completed   0          4m
kube-system     rke-kubedns-addon-deploy-job-wqccp        0/1     Completed   0          5m
kube-system     rke-metrics-addon-deploy-job-ssrgp        0/1     Completed   0          4m
kube-system     rke-network-plugin-deploy-job-jkffq       0/1     Completed   0          5m
kube-system     tiller-deploy-759cb9df9-rlt7v             1/1     Running     0          2m
ubuntu@a-rke0-master:~$ helm list
```
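RKE fails late and unhelpfully when the private key permissions are wrong, so it is worth failing fast before running rke_setup.sh. A minimal sketch (the `check_key_perms` helper and the default key path are illustrative, not part of the script above; `stat -c` assumes GNU coreutils as on Ubuntu):

```shell
#!/bin/bash
# check_key_perms.sh - sketch: verify an SSH private key is owner-only
# before handing it to RKE. Hypothetical helper, not from rke_setup.sh.
check_key_perms() {
  local key="$1" mode
  mode=$(stat -c '%a' "$key") || return 2   # GNU stat (Ubuntu)
  if [ "$mode" = "400" ] || [ "$mode" = "600" ]; then
    echo "ok: $key is $mode"
    return 0
  fi
  echo "warn: $key is $mode - run: chmod 400 $key" >&2
  return 1
}

# demo call, guarded so the script is a no-op where the key is absent
if [ -f "$HOME/.ssh/onap_rsa" ]; then
  check_key_perms "$HOME/.ssh/onap_rsa"
fi
```

Run it on the build host after the scp/chmod steps above; a non-zero exit means the key would be rejected by ssh.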
...
Installing on 6 nodes on AWS (Windriver is having an issue right now).
We are good on RKE 0.2.1, Ubuntu 18.04, Kubernetes/kubectl 1.13.5, Docker 18.09, and Helm 2.13.1.
https://github.com/rancher/rke/releases - RKE 0.2.2 has experimental Kubernetes 1.14 support; running with 0.2.1 for now.
Just need to test ONAP deployments.
I'll do the NFS/EFS setup later, before deployment.
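The NFS/EFS wiring isn't shown on this page yet. As a placeholder, a sketch of what the shared-volume export could look like (the `/dockerdata-nfs` path, the `172.31.0.0/16` VPC CIDR, and the `exports_entry` helper are assumptions for illustration, not values taken from this cluster):

```shell
#!/bin/bash
# nfs_sketch.sh - sketch of the NFS share the ONAP deployment will need.
# All names here are assumed/illustrative.
exports_entry() {
  # build one /etc/exports line granting read-write access to a subnet
  local dir="$1" cidr="$2"
  echo "$dir $cidr(rw,sync,no_subtree_check,no_root_squash)"
}

# on the NFS master (shown for reference, not run here):
#   sudo apt-get install -y nfs-kernel-server
#   sudo mkdir -p /dockerdata-nfs
#   exports_entry /dockerdata-nfs 172.31.0.0/16 | sudo tee -a /etc/exports
#   sudo exportfs -ra
# on each worker:
#   sudo apt-get install -y nfs-common
#   sudo mount <master-ip>:/dockerdata-nfs /dockerdata-nfs

exports_entry /dockerdata-nfs 172.31.0.0/16
```

EFS would replace the nfs-kernel-server half with an AWS-managed mount target, but the client-side mounts stay the same.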
```
# on all VMs (control, etcd, worker)
# move the key to all vms
scp ~/wse_onap/onap_rsa ubuntu@rke0.onap.info:~/

# on a non-worker node
sudo curl https://releases.rancher.com/install-docker/18.09.sh | sh
sudo usermod -aG docker ubuntu

# nfs server
# on control/etcd nodes only
# from script
sudo wget https://github.com/rancher/rke/releases/download/v0.2.1/rke_linux-amd64
mv rke_linux-amd64 rke
sudo chmod +x rke
sudo mv ./rke /usr/local/bin/rke

# one time setup of the yaml or use the generated one
ubuntu@ip-172-31-38-182:~$ sudo rke config
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: ~/.ssh/onap_rsa
[+] Number of Hosts [1]: 6
[+] SSH Address of host (1) [none]: 3.14.102.175
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (3.14.102.175) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (3.14.102.175) [ubuntu]: ubuntu
[+] Is host (3.14.102.175) a Control Plane host (y/n)? [y]: y
[+] Is host (3.14.102.175) a Worker host (y/n)? [n]: n
[+] Is host (3.14.102.175) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (3.14.102.175) [none]:
[+] Internal IP of host (3.14.102.175) [none]:
[+] Docker socket path on host (3.14.102.175) [/var/run/docker.sock]:
[+] SSH Address of host (2) [none]: 18.220.62.6
[+] SSH Port of host (2) [22]:
[+] SSH Private Key Path of host (18.220.62.6) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.220.62.6) [ubuntu]:
[+] Is host (18.220.62.6) a Control Plane host (y/n)? [y]: y
[+] Is host (18.220.62.6) a Worker host (y/n)? [n]: n
[+] Is host (18.220.62.6) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (18.220.62.6) [none]:
[+] Internal IP of host (18.220.62.6) [none]:
[+] Docker socket path on host (18.220.62.6) [/var/run/docker.sock]:
[+] SSH Address of host (3) [none]: 18.217.96.12
[+] SSH Port of host (3) [22]:
[+] SSH Private Key Path of host (18.217.96.12) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.217.96.12) [ubuntu]:
[+] Is host (18.217.96.12) a Control Plane host (y/n)? [y]: y
[+] Is host (18.217.96.12) a Worker host (y/n)? [n]: n
[+] Is host (18.217.96.12) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (18.217.96.12) [none]:
[+] Internal IP of host (18.217.96.12) [none]:
[+] Docker socket path on host (18.217.96.12) [/var/run/docker.sock]:
[+] SSH Address of host (4) [none]: 18.188.214.137
[+] SSH Port of host (4) [22]:
[+] SSH Private Key Path of host (18.188.214.137) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.188.214.137) [ubuntu]:
[+] Is host (18.188.214.137) a Control Plane host (y/n)? [y]: n
[+] Is host (18.188.214.137) a Worker host (y/n)? [n]: y
[+] Is host (18.188.214.137) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (18.188.214.137) [none]:
[+] Internal IP of host (18.188.214.137) [none]:
[+] Docker socket path on host (18.188.214.137) [/var/run/docker.sock]:
[+] SSH Address of host (5) [none]: 18.220.70.253
[+] SSH Port of host (5) [22]:
[+] SSH Private Key Path of host (18.220.70.253) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.220.70.253) [ubuntu]:
[+] Is host (18.220.70.253) a Control Plane host (y/n)? [y]: n
[+] Is host (18.220.70.253) a Worker host (y/n)? [n]: y
[+] Is host (18.220.70.253) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (18.220.70.253) [none]:
[+] Internal IP of host (18.220.70.253) [none]:
[+] Docker socket path on host (18.220.70.253) [/var/run/docker.sock]:
[+] SSH Address of host (6) [none]: 3.17.76.33
[+] SSH Port of host (6) [22]:
[+] SSH Private Key Path of host (3.17.76.33) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (3.17.76.33) [ubuntu]:
[+] Is host (3.17.76.33) a Control Plane host (y/n)? [y]: n
[+] Is host (3.17.76.33) a Worker host (y/n)? [n]: y
[+] Is host (3.17.76.33) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (3.17.76.33) [none]:
[+] Internal IP of host (3.17.76.33) [none]:
[+] Docker socket path on host (3.17.76.33) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.13.5-rancher1]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:
# new
[+] Cluster domain [cluster.local]:

ubuntu@ip-172-31-38-182:~$ sudo rke up
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] Generating CA kubernetes certificates
INFO[0000] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
INFO[0000] [certificates] Generating Kubernetes API server certificates
INFO[0000] [certificates] Generating Service account token key
INFO[0000] [certificates] Generating etcd-3.14.102.175 certificate and key
INFO[0000] [certificates] Generating etcd-18.220.62.6 certificate and key
INFO[0001] [certificates] Generating etcd-18.217.96.12 certificate and key
INFO[0001] [certificates] Generating Kube Controller certificates
INFO[0001] [certificates] Generating Kube Scheduler certificates
INFO[0001] [certificates] Generating Kube Proxy certificates
INFO[0001] [certificates] Generating Node certificate
INFO[0001] [certificates] Generating admin certificates and kubeconfig
INFO[0001] [certificates] Generating Kubernetes API server proxy client certificates
INFO[0002] Successfully Deployed state file at [./cluster.rkestate]
INFO[0002] Building Kubernetes cluster
INFO[0002] [dialer] Setup tunnel for host [3.14.102.175]
INFO[0002] [dialer] Setup tunnel for host [18.188.214.137]
INFO[0002] [dialer] Setup tunnel for host [18.220.70.253]
INFO[0002] [dialer] Setup tunnel for host [18.220.62.6]
INFO[0002] [dialer] Setup tunnel for host [18.217.96.12]
INFO[0002] [dialer] Setup tunnel for host [3.17.76.33]
INFO[0002] [network] Deploying port listener containers
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.220.62.6]
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [3.14.102.175]
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.217.96.12]
INFO[0006] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.220.62.6]
INFO[0006] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.217.96.12]
INFO[0006] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [3.14.102.175]
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [18.220.62.6]
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [18.217.96.12]
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [3.14.102.175]
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [18.217.96.12]
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [18.220.62.6]
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [3.14.102.175]
INFO[0008] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.188.214.137]
INFO[0008] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.220.70.253]
INFO[0008] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [3.17.76.33]
INFO[0012] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [3.17.76.33]
INFO[0012] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.220.70.253]
INFO[0012] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.188.214.137]
INFO[0013] [network] Successfully started [rke-worker-port-listener] container on host [3.17.76.33]
```
```
INFO[0013] [network] Successfully started [rke-worker-port-listener] container on host [18.220.70.253]
INFO[0013] [network] Successfully started [rke-worker-port-listener] container on host [18.188.214.137]
INFO[0013] [network] Port listener containers deployed successfully
INFO[0013] [network] Running etcd <-> etcd port checks
INFO[0013] [network] Successfully started [rke-port-checker] container on host [18.220.62.6]
INFO[0013] [network] Successfully started [rke-port-checker] container on host [3.14.102.175]
INFO[0013] [network] Successfully started [rke-port-checker] container on host [18.217.96.12]
INFO[0014] [network] Running control plane -> etcd port checks
INFO[0014] [network] Successfully started [rke-port-checker] container on host [18.220.62.6]
INFO[0014] [network] Successfully started [rke-port-checker] container on host [3.14.102.175]
INFO[0014] [network] Successfully started [rke-port-checker] container on host [18.217.96.12]
INFO[0014] [network] Running control plane -> worker port checks
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.220.62.6]
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.217.96.12]
INFO[0015] [network] Successfully started [rke-port-checker] container on host [3.14.102.175]
INFO[0015] [network] Running workers -> control plane port checks
INFO[0015] [network] Successfully started [rke-port-checker] container on host [3.17.76.33]
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.220.70.253]
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.188.214.137]
INFO[0016] [network] Checking KubeAPI port Control Plane hosts
INFO[0016] [network] Removing port listener containers
INFO[0016] [remove/rke-etcd-port-listener] Successfully removed container on host [18.220.62.6]
INFO[0016] [remove/rke-etcd-port-listener] Successfully removed container on host [3.14.102.175]
INFO[0016] [remove/rke-etcd-port-listener] Successfully removed container on host [18.217.96.12]
INFO[0016] [remove/rke-cp-port-listener] Successfully removed container on host [18.217.96.12]
INFO[0016] [remove/rke-cp-port-listener] Successfully removed container on host [3.14.102.175]
INFO[0016] [remove/rke-cp-port-listener] Successfully removed container on host [18.220.62.6]
INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [18.220.70.253]
INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [3.17.76.33]
INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [18.188.214.137]
INFO[0017] [network] Port listener containers removed successfully
INFO[0017] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0022] [reconcile] Rebuilding and updating local kube config
INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0022] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0022] [reconcile] Reconciling cluster state
INFO[0022] [reconcile] This is newly generated cluster
INFO[0022] Pre-pulling kubernetes images
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [3.14.102.175]
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.188.214.137]
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.70.253]
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.217.96.12]
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [3.17.76.33]
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.62.6]
INFO[0038] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.62.6]
INFO[0038] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.70.253]
INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [3.17.76.33]
INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.217.96.12]
INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.188.214.137]
INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [3.14.102.175]
INFO[0039] Kubernetes images pulled successfully
INFO[0039] [etcd] Building up etcd plane..
INFO[0039] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24-rancher1] on host [3.14.102.175]
INFO[0041] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24-rancher1] on host [3.14.102.175]
INFO[0051] [etcd] Successfully started [etcd] container on host [3.14.102.175]
INFO[0051] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [3.14.102.175]
INFO[0052] [etcd] Successfully started [etcd-rolling-snapshots] container on host [3.14.102.175]
INFO[0057] [certificates] Successfully started [rke-bundle-cert] container on host [3.14.102.175]
INFO[0058] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [3.14.102.175]
INFO[0058] [etcd] Successfully started [rke-log-linker] container on host [3.14.102.175]
INFO[0058] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175]
INFO[0058] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.220.62.6]
INFO[0063] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.220.62.6]
INFO[0069] [etcd] Successfully started [etcd] container on host [18.220.62.6]
INFO[0069] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [18.220.62.6]
INFO[0069] [etcd] Successfully started [etcd-rolling-snapshots] container on host [18.220.62.6]
INFO[0075] [certificates] Successfully started [rke-bundle-cert] container on host [18.220.62.6]
INFO[0075] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [18.220.62.6]
INFO[0076] [etcd] Successfully started [rke-log-linker] container on host [18.220.62.6]
INFO[0076] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6]
INFO[0076] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.217.96.12]
INFO[0078] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.217.96.12]
INFO[0078] [etcd] Successfully started [etcd] container on host [18.217.96.12]
INFO[0078] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [18.217.96.12]
INFO[0078] [etcd] Successfully started [etcd-rolling-snapshots] container on host [18.217.96.12]
INFO[0084] [certificates] Successfully started [rke-bundle-cert] container on host [18.217.96.12]
INFO[0084] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [18.217.96.12]
INFO[0085] [etcd] Successfully started [rke-log-linker] container on host [18.217.96.12]
INFO[0085] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12]
INFO[0085] [etcd] Successfully started etcd plane.. Checking etcd cluster health
INFO[0086] [controlplane] Building up Controller Plane..
```
```
INFO[0086] [controlplane] Successfully started [kube-apiserver] container on host [18.220.62.6]
INFO[0086] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [18.220.62.6]
INFO[0086] [controlplane] Successfully started [kube-apiserver] container on host [3.14.102.175]
INFO[0086] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [3.14.102.175]
INFO[0086] [controlplane] Successfully started [kube-apiserver] container on host [18.217.96.12]
INFO[0086] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [18.217.96.12]
INFO[0098] [healthcheck] service [kube-apiserver] on host [18.220.62.6] is healthy
INFO[0099] [healthcheck] service [kube-apiserver] on host [18.217.96.12] is healthy
INFO[0099] [controlplane] Successfully started [rke-log-linker] container on host [18.220.62.6]
INFO[0099] [controlplane] Successfully started [rke-log-linker] container on host [18.217.96.12]
INFO[0099] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6]
INFO[0099] [healthcheck] service [kube-apiserver] on host [3.14.102.175] is healthy
INFO[0099] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12]
INFO[0099] [controlplane] Successfully started [kube-controller-manager] container on host [18.220.62.6]
INFO[0099] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [18.220.62.6]
INFO[0100] [controlplane] Successfully started [kube-controller-manager] container on host [18.217.96.12]
INFO[0100] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [18.217.96.12]
INFO[0100] [controlplane] Successfully started [rke-log-linker] container on host [3.14.102.175]
INFO[0100] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175]
INFO[0100] [healthcheck] service [kube-controller-manager] on host [18.220.62.6] is healthy
INFO[0100] [healthcheck] service [kube-controller-manager] on host [18.217.96.12] is healthy
INFO[0100] [controlplane] Successfully started [kube-controller-manager] container on host [3.14.102.175]
INFO[0100] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [3.14.102.175]
INFO[0101] [controlplane] Successfully started [rke-log-linker] container on host [18.220.62.6]
INFO[0101] [controlplane] Successfully started [rke-log-linker] container on host [18.217.96.12]
INFO[0101] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6]
INFO[0101] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12]
INFO[0101] [controlplane] Successfully started [kube-scheduler] container on host [18.220.62.6]
INFO[0101] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [18.220.62.6]
INFO[0101] [healthcheck] service [kube-controller-manager] on host [3.14.102.175] is healthy
INFO[0101] [controlplane] Successfully started [kube-scheduler] container on host [18.217.96.12]
INFO[0101] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [18.217.96.12]
INFO[0102] [controlplane] Successfully started [rke-log-linker] container on host [3.14.102.175]
INFO[0102] [healthcheck] service [kube-scheduler] on host [18.220.62.6] is healthy
INFO[0102] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175]
INFO[0102] [healthcheck] service [kube-scheduler] on host [18.217.96.12] is healthy
INFO[0103] [controlplane] Successfully started [kube-scheduler] container on host [3.14.102.175]
INFO[0103] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [3.14.102.175]
INFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [18.220.62.6]
INFO[0103] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6]
INFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [18.217.96.12]
INFO[0103] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12]
INFO[0103] [healthcheck] service [kube-scheduler] on host [3.14.102.175] is healthy
INFO[0104] [controlplane] Successfully started [rke-log-linker] container on host [3.14.102.175]
INFO[0104] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175]
INFO[0104] [controlplane] Successfully started Controller Plane..
INFO[0104] [authz] Creating rke-job-deployer ServiceAccount
INFO[0104] [authz] rke-job-deployer ServiceAccount created successfully
INFO[0104] [authz] Creating system:node ClusterRoleBinding
INFO[0104] [authz] system:node ClusterRoleBinding created successfully
INFO[0104] Successfully Deployed state file at [./cluster.rkestate]
INFO[0104] [state] Saving full cluster state to Kubernetes
INFO[0104] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: cluster-state
INFO[0104] [worker] Building up Worker Plane..
INFO[0104] [sidekick] Sidekick container already created on host [18.220.62.6]
INFO[0104] [sidekick] Sidekick container already created on host [3.14.102.175]
INFO[0104] [sidekick] Sidekick container already created on host [18.217.96.12]
INFO[0105] [worker] Successfully started [kubelet] container on host [3.14.102.175]
INFO[0105] [healthcheck] Start Healthcheck on service [kubelet] on host [3.14.102.175]
INFO[0105] [worker] Successfully started [kubelet] container on host [18.220.62.6]
INFO[0105] [healthcheck] Start Healthcheck on service [kubelet] on host [18.220.62.6]
INFO[0105] [worker] Successfully started [kubelet] container on host [18.217.96.12]
INFO[0105] [healthcheck] Start Healthcheck on service [kubelet] on host [18.217.96.12]
INFO[0105] [worker] Successfully started [nginx-proxy] container on host [18.220.70.253]
INFO[0105] [worker] Successfully started [nginx-proxy] container on host [3.17.76.33]
INFO[0105] [worker] Successfully started [nginx-proxy] container on host [18.188.214.137]
INFO[0106] [worker] Successfully started [rke-log-linker] container on host [18.220.70.253]
INFO[0106] [worker] Successfully started [rke-log-linker] container on host [3.17.76.33]
INFO[0106] [worker] Successfully started [rke-log-linker] container on host [18.188.214.137]
INFO[0106] [remove/rke-log-linker] Successfully removed container on host [3.17.76.33]
INFO[0106] [remove/rke-log-linker] Successfully removed container on host [18.220.70.253]
INFO[0106] [remove/rke-log-linker] Successfully removed container on host [18.188.214.137]
INFO[0106] [worker] Successfully started [kubelet] container on host [3.17.76.33]
INFO[0106] [healthcheck] Start Healthcheck on service [kubelet] on host [3.17.76.33]
INFO[0107] [worker] Successfully started [kubelet] container on host [18.220.70.253]
INFO[0107] [healthcheck] Start Healthcheck on service [kubelet] on host [18.220.70.253]
INFO[0107] [worker] Successfully started [kubelet] container on host [18.188.214.137]
INFO[0107] [healthcheck] Start Healthcheck on service [kubelet] on host [18.188.214.137]
INFO[0111] [healthcheck] service [kubelet] on host [18.220.62.6] is healthy
INFO[0111] [healthcheck] service [kubelet] on host [18.217.96.12] is healthy
INFO[0111] [healthcheck] service [kubelet] on host [3.14.102.175] is healthy
INFO[0111] [worker] Successfully started [rke-log-linker] container on host [18.220.62.6]
INFO[0111] [worker] Successfully started [rke-log-linker] container on host [3.14.102.175]
INFO[0111] [worker] Successfully started [rke-log-linker] container on host [18.217.96.12]
INFO[0112] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6]
INFO[0112] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12]
INFO[0112] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175]
INFO[0112] [worker] Successfully started [kube-proxy] container on host [18.220.62.6]
INFO[0112] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.220.62.6]
INFO[0112] [worker] Successfully started [kube-proxy] container on host [18.217.96.12]
INFO[0112] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.217.96.12]
INFO[0112] [worker] Successfully started [kube-proxy] container on host [3.14.102.175]
INFO[0112] [healthcheck] Start Healthcheck on service [kube-proxy] on host [3.14.102.175]
INFO[0113] [healthcheck] service [kube-proxy] on host [18.220.62.6] is healthy
INFO[0113] [healthcheck] service [kube-proxy] on host [18.217.96.12] is healthy
INFO[0113] [healthcheck] service [kube-proxy] on host [3.14.102.175] is healthy
INFO[0113] [healthcheck] service [kubelet] on host [18.220.70.253] is healthy
INFO[0113] [healthcheck] service [kubelet] on host [3.17.76.33] is healthy
INFO[0113] [healthcheck] service [kubelet] on host [18.188.214.137] is healthy
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.220.62.6]
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.217.96.12]
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [3.14.102.175]
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [3.17.76.33]
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.220.70.253]
INFO[0113] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6]
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.188.214.137]
INFO[0113] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12]
INFO[0113] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175]
INFO[0114] [remove/rke-log-linker] Successfully removed container on host [3.17.76.33]
INFO[0114] [remove/rke-log-linker] Successfully removed container on host [18.220.70.253]
INFO[0114] [remove/rke-log-linker] Successfully removed container on host [18.188.214.137]
INFO[0114] [worker] Successfully started [kube-proxy] container on host [3.17.76.33]
INFO[0114] [healthcheck] Start Healthcheck on service [kube-proxy] on host [3.17.76.33]
INFO[0114] [worker] Successfully started [kube-proxy] container on host [18.220.70.253]
INFO[0114] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.220.70.253]
INFO[0114] [worker] Successfully started [kube-proxy] container on host [18.188.214.137]
INFO[0114] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.188.214.137]
INFO[0114] [healthcheck] service [kube-proxy] on host [18.220.70.253] is healthy
INFO[0114] [healthcheck] service [kube-proxy] on host [3.17.76.33] is healthy
INFO[0115] [healthcheck] service [kube-proxy] on host [18.188.214.137] is healthy
INFO[0115] [worker] Successfully started [rke-log-linker] container on host [18.220.70.253]
INFO[0115] [worker] Successfully started [rke-log-linker] container on host [3.17.76.33]
INFO[0115] [worker] Successfully started [rke-log-linker] container on host [18.188.214.137]
INFO[0115] [remove/rke-log-linker] Successfully removed container on host [18.220.70.253]
INFO[0115] [remove/rke-log-linker] Successfully removed container on host [3.17.76.33]
INFO[0115] [remove/rke-log-linker] Successfully removed container on host [18.188.214.137]
INFO[0115] [worker] Successfully started Worker Plane..
```
```
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.188.214.137]
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [3.17.76.33]
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.220.70.253]
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.220.62.6]
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.217.96.12]
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [3.14.102.175]
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [3.17.76.33]
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.188.214.137]
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.220.62.6]
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.220.70.253]
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.217.96.12]
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [3.14.102.175]
INFO[0116] [sync] Syncing nodes Labels and Taints
INFO[0117] [sync] Successfully synced nodes Labels and Taints
INFO[0117] [network] Setting up network plugin: canal
INFO[0117] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0117] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0117] [addons] Executing deploy job rke-network-plugin
INFO[0122] [addons] Setting up kube-dns
INFO[0122] [addons] Saving ConfigMap for addon rke-kube-dns-addon to Kubernetes
INFO[0122] [addons] Successfully saved ConfigMap for addon rke-kube-dns-addon to Kubernetes
INFO[0122] [addons] Executing deploy job rke-kube-dns-addon
INFO[0127] [addons] kube-dns deployed successfully
INFO[0127] [dns] DNS provider kube-dns deployed successfully
INFO[0127] [addons] Setting up Metrics Server
INFO[0127] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0127] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0127] [addons] Executing deploy job rke-metrics-addon
INFO[0132] [addons] Metrics Server deployed successfully
INFO[0132] [ingress] Setting up nginx ingress controller
INFO[0132] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0132] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0132] [addons] Executing deploy job rke-ingress-controller
INFO[0137] [ingress] ingress controller nginx deployed successfully
INFO[0137] [addons] Setting up user addons
INFO[0137] [addons] no user addons defined
INFO[0137] Finished building Kubernetes cluster successfully

# finish kubectl install
sudo curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
sudo mkdir ~/.kube

# finish helm
# https://github.com/helm/helm/releases
# there is no helm 2.12.5 - last is 2.12.3 - trying 2.13.1
wget http://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
sudo tar -zxvf helm-v2.13.1-linux-amd64.tar.gz
sudo cp kube_config_cluster.yml ~/.kube/config
sudo chmod 777 ~/.kube/config

# test
ubuntu@ip-172-31-38-182:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE       NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
ingress-nginx   default-http-backend-78fccfc5d9-f6z25   1/1     Running   0          17m   10.42.5.2        18.220.70.253    <none>           <none>
ingress-nginx   nginx-ingress-controller-2zxs7          1/1     Running   0          17m   18.188.214.137   18.188.214.137   <none>           <none>
ingress-nginx   nginx-ingress-controller-6b7gs          1/1     Running   0          17m   3.17.76.33       3.17.76.33       <none>           <none>
ingress-nginx   nginx-ingress-controller-nv4qg          1/1     Running   0          17m   18.220.70.253    18.220.70.253    <none>           <none>
kube-system     canal-48579                             2/2     Running   0          17m   18.220.62.6      18.220.62.6      <none>           <none>
kube-system     canal-6skkm                             2/2     Running   0          17m   18.188.214.137   18.188.214.137   <none>           <none>
kube-system     canal-9xmxv                             2/2     Running   0          17m   18.217.96.12     18.217.96.12     <none>           <none>
kube-system     canal-c582x                             2/2     Running   0          17m   18.220.70.253    18.220.70.253    <none>           <none>
kube-system     canal-whbck                             2/2     Running   0          17m   3.14.102.175     3.14.102.175     <none>           <none>
kube-system     canal-xbbnh                             2/2     Running   0          17m   3.17.76.33       3.17.76.33       <none>           <none>
kube-system     kube-dns-58bd5b8dd7-6mcm7               3/3     Running   0          17m   10.42.3.3        3.17.76.33       <none>           <none>
kube-system     kube-dns-58bd5b8dd7-cd5dg               3/3     Running   0          17m   10.42.4.2        18.188.214.137   <none>           <none>
kube-system     kube-dns-autoscaler-77bc5fd84-p4zfd     1/1     Running   0          17m   10.42.3.2        3.17.76.33       <none>           <none>
kube-system     metrics-server-58bd5dd8d7-kftjn         1/1     Running   0          17m   10.42.3.4        3.17.76.33       <none>           <none>

# install tiller
ubuntu@ip-172-31-38-182:~$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
ubuntu@ip-172-31-38-182:~$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
ubuntu@ip-172-31-38-182:~$ helm init --service-account tiller
Creating /home/ubuntu/.helm
Creating /home/ubuntu/.helm/repository
Creating /home/ubuntu/.helm/repository/cache
Creating /home/ubuntu/.helm/repository/local
Creating /home/ubuntu/.helm/plugins
Creating /home/ubuntu/.helm/starters
Creating /home/ubuntu/.helm/cache/archive
Creating /home/ubuntu/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/ubuntu/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
ubuntu@ip-172-31-38-182:~$ kubectl -n kube-system rollout status deploy/tiller-deploy
deployment "tiller-deploy" successfully rolled out
ubuntu@ip-172-31-38-182:~$ sudo helm init --upgrade
$HELM_HOME has been configured at /home/ubuntu/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
ubuntu@ip-172-31-38-182:~$ sudo helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
ubuntu@ip-172-31-38-182:~$ sudo helm serve &
[1] 706
ubuntu@ip-172-31-38-182:~$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
ubuntu@ip-172-31-38-182:~$ sudo helm list
ubuntu@ip-172-31-38-182:~$ sudo helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories
ubuntu@ip-172-31-38-182:~$ sudo helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879
ubuntu@ip-172-31-38-182:~$ kubectl get nodes -o wide
NAME             STATUS   ROLES               AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
18.188.214.137   Ready    worker              22m   v1.13.5   18.188.214.137   <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
18.217.96.12     Ready    controlplane,etcd   22m   v1.13.5   18.217.96.12     <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
18.220.62.6      Ready    controlplane,etcd   22m   v1.13.5   18.220.62.6      <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
18.220.70.253    Ready    worker              22m   v1.13.5   18.220.70.253    <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
3.14.102.175     Ready    controlplane,etcd   22m   v1.13.5   3.14.102.175     <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
3.17.76.33       Ready    worker              22m   v1.13.5   3.17.76.33
```
<none> Ubuntu 18.04.1 LTS 4.15.0-1021-aws docker://18.9.5 # install make sudo apt-get install make -y # install nfs/efs ubuntu@ip-172-31-38-182:~$ sudo apt-get install nfs-common -y ubuntu@ip-172-31-38-182:~$ sudo mkdir /dockerdata-nfs ubuntu@ip-172-31-38-182:~$ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-5fd6ab26.efs.us-east-2.amazonaws.com:/ /dockerdata-nfs # check sudo nohup ./cd.sh -b master -e onap -p false -n nexus3.onap.org:10001 -f false -s 600 -c false -d false -w false -r false & ubuntu@ip-172-31-38-182:~$ kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ingress-nginx default-http-backend-78fccfc5d9-f6z25 1/1 Running 0 103m 10.42.5.2 18.220.70.253 <none> <none> ingress-nginx nginx-ingress-controller-2zxs7 1/1 Running 0 103m 18.188.214.137 18.188.214.137 <none> <none> ingress-nginx nginx-ingress-controller-6b7gs 1/1 Running 0 103m 3.17.76.33 3.17.76.33 <none> <none> ingress-nginx nginx-ingress-controller-nv4qg 1/1 Running 0 103m 18.220.70.253 18.220.70.253 <none> <none> kube-system canal-48579 2/2 Running 0 103m 18.220.62.6 18.220.62.6 <none> <none> kube-system canal-6skkm 2/2 Running 0 103m 18.188.214.137 18.188.214.137 <none> <none> kube-system canal-9xmxv 2/2 Running 0 103m 18.217.96.12 18.217.96.12 <none> <none> kube-system canal-c582x 2/2 Running 0 103m 18.220.70.253 18.220.70.253 <none> <none> kube-system canal-whbck 2/2 Running 0 103m 3.14.102.175 3.14.102.175 <none> <none> kube-system canal-xbbnh 2/2 Running 0 103m 3.17.76.33 3.17.76.33 <none> <none> kube-system kube-dns-58bd5b8dd7-6mcm7 3/3 Running 0 103m 10.42.3.3 3.17.76.33 <none> <none> kube-system kube-dns-58bd5b8dd7-cd5dg 3/3 Running 0 103m 10.42.4.2 18.188.214.137 <none> <none> kube-system kube-dns-autoscaler-77bc5fd84-p4zfd 1/1 Running 0 103m 10.42.3.2 3.17.76.33 <none> <none> kube-system metrics-server-58bd5dd8d7-kftjn 1/1 Running 0 103m 10.42.3.4 3.17.76.33 
<none> <none> kube-system tiller-deploy-5f4fc5bcc6-gc4tc 1/1 Running 0 84m 10.42.5.3 18.220.70.253 <none> <none> onap onap-aai-aai-587cb79c6d-mzpbs 0/1 Init:0/1 1 13m 10.42.5.17 18.220.70.253 <none> <none> onap onap-aai-aai-babel-8c755bcfc-kmzdm 2/2 Running 0 29m 10.42.5.8 18.220.70.253 <none> <none> onap onap-aai-aai-champ-78b9d7f68b-98tm9 0/2 Init:0/1 2 29m 10.42.3.6 3.17.76.33 <none> <none> onap onap-aai-aai-data-router-64fcfbc5bb-wkkvz 1/2 CrashLoopBackOff 9 29m 10.42.5.7 18.220.70.253 <none> <none> onap onap-aai-aai-elasticsearch-6dcf5d9966-j7z67 1/1 Running 0 29m 10.42.4.4 18.188.214.137 <none> <none> onap onap-aai-aai-gizmo-5bddb87589-zn8pl 2/2 Running 0 29m 10.42.4.3 18.188.214.137 <none> <none> onap onap-aai-aai-graphadmin-774f9d698f-f8lwv 0/2 Init:0/1 2 29m 10.42.5.4 18.220.70.253 <none> <none> onap onap-aai-aai-graphadmin-create-db-schema-94q4l 0/1 Init:Error 0 18m 10.42.4.16 18.188.214.137 <none> <none> onap onap-aai-aai-graphadmin-create-db-schema-s42pq 0/1 Init:0/1 0 7m54s 10.42.5.20 18.220.70.253 <none> <none> onap onap-aai-aai-graphadmin-create-db-schema-tsvcw 0/1 Init:Error 0 29m 10.42.4.5 18.188.214.137 <none> <none> onap onap-aai-aai-modelloader-845fc684bd-r7mdw 2/2 Running 0 29m 10.42.4.6 18.188.214.137 <none> <none> onap onap-aai-aai-resources-67f8dfcbdb-kz6cp 0/2 Init:0/1 2 29m 10.42.5.11 18.220.70.253 <none> <none> onap onap-aai-aai-schema-service-6c56b45b7c-7zlfz 2/2 Running 0 29m 10.42.3.7 3.17.76.33 <none> <none> onap onap-aai-aai-search-data-5d8d7759b8-flxwj 2/2 Running 0 29m 10.42.3.9 3.17.76.33 <none> <none> onap onap-aai-aai-sparky-be-8444df749c-mzc2n 0/2 Init:0/1 0 29m 10.42.4.11 18.188.214.137 <none> <none> onap onap-aai-aai-spike-54ff77787f-d678x 2/2 Running 0 29m 10.42.5.6 18.220.70.253 <none> <none> onap onap-aai-aai-traversal-6ff868f477-lzv2f 0/2 Init:0/1 2 29m 10.42.3.8 3.17.76.33 <none> <none> onap onap-aai-aai-traversal-update-query-data-9g2b8 0/1 Init:0/1 2 29m 10.42.5.12 18.220.70.253 <none> <none> onap onap-dmaap-dbc-pg-0 
1/1 Running 0 29m 10.42.5.9 18.220.70.253 <none> <none> onap onap-dmaap-dbc-pg-1 1/1 Running 0 26m 10.42.3.14 3.17.76.33 <none> <none> onap onap-dmaap-dbc-pgpool-8666b57857-97zjc 1/1 Running 0 29m 10.42.5.5 18.220.70.253 <none> <none> onap onap-dmaap-dbc-pgpool-8666b57857-vr8gk 1/1 Running 0 29m 10.42.4.8 18.188.214.137 <none> <none> onap onap-dmaap-dmaap-bc-745995bf74-m6hhq 0/1 Init:0/2 2 29m 10.42.4.12 18.188.214.137 <none> <none> onap onap-dmaap-dmaap-bc-post-install-6ff4j 1/1 Running 0 29m 10.42.4.9 18.188.214.137 <none> <none> onap onap-dmaap-dmaap-dr-db-0 1/1 Running 0 29m 10.42.4.10 18.188.214.137 <none> <none> onap onap-dmaap-dmaap-dr-db-1 1/1 Running 1 24m 10.42.5.15 18.220.70.253 <none> <none> onap onap-dmaap-dmaap-dr-node-0 2/2 Running 0 29m 10.42.3.11 3.17.76.33 <none> <none> onap onap-dmaap-dmaap-dr-prov-fbf6c94f5-v9bmq 2/2 Running 0 29m 10.42.5.10 18.220.70.253 <none> <none> onap onap-dmaap-message-router-0 1/1 Running 0 29m 10.42.4.14 18.188.214.137 <none> <none> onap onap-dmaap-message-router-kafka-0 1/1 Running 1 29m 10.42.5.13 18.220.70.253 <none> <none> onap onap-dmaap-message-router-kafka-1 1/1 Running 1 29m 10.42.3.13 3.17.76.33 <none> <none> onap onap-dmaap-message-router-kafka-2 1/1 Running 0 29m 10.42.4.15 18.188.214.137 <none> <none> onap onap-dmaap-message-router-mirrormaker-8587c4c9cf-lfnd8 0/1 CrashLoopBackOff 9 29m 10.42.4.7 18.188.214.137 <none> <none> onap onap-dmaap-message-router-zookeeper-0 1/1 Running 0 29m 10.42.5.14 18.220.70.253 <none> <none> onap onap-dmaap-message-router-zookeeper-1 1/1 Running 0 29m 10.42.4.13 18.188.214.137 <none> <none> onap onap-dmaap-message-router-zookeeper-2 1/1 Running 0 29m 10.42.3.12 3.17.76.33 <none> <none> onap onap-nfs-provisioner-nfs-provisioner-57c999dc57-mdcw5 1/1 Running 0 24m 10.42.3.15 3.17.76.33 <none> <none> onap onap-robot-robot-677bdbb454-zj9jk 1/1 Running 0 24m 10.42.5.16 18.220.70.253 <none> <none> onap onap-so-so-8569947cbd-jn5x4 0/1 Init:0/1 1 13m 10.42.4.19 18.188.214.137 <none> 
<none> onap onap-so-so-bpmn-infra-78c8fd665d-b47qn 0/1 Init:0/1 1 13m 10.42.3.16 3.17.76.33 <none> <none> onap onap-so-so-catalog-db-adapter-565f9767ff-lvbgx 0/1 Init:0/1 1 13m 10.42.3.17 3.17.76.33 <none> <none> onap onap-so-so-mariadb-config-job-d9sdb 0/1 Init:0/2 0 3m37s 10.42.3.20 3.17.76.33 <none> <none> onap onap-so-so-mariadb-config-job-rkqdl 0/1 Init:Error 0 13m 10.42.5.19 18.220.70.253 <none> <none> onap onap-so-so-monitoring-69b9fdd94c-dks4v 1/1 Running 0 13m 10.42.4.18 18.188.214.137 <none> <none> onap onap-so-so-openstack-adapter-5f9cf896d7-mgbdd 0/1 Init:0/1 1 13m 10.42.4.17 18.188.214.137 <none> <none> onap onap-so-so-request-db-adapter-5c9bfd7b57-2krnp 0/1 Init:0/1 1 13m 10.42.3.18 3.17.76.33 <none> <none> onap onap-so-so-sdc-controller-6fb5cf5775-bsxhm 0/1 Init:0/1 1 13m 10.42.4.20 18.188.214.137 <none> <none> onap onap-so-so-sdnc-adapter-8555689c75-r6vkb 1/1 Running 0 13m 10.42.5.18 18.220.70.253 <none> <none> onap onap-so-so-vfc-adapter-68fccc8bb8-c56t2 0/1 Init:0/1 1 13m 10.42.4.21 18.188.214.137 <none> <none> onap onap-so-so-vnfm-adapter-65c4c5944b-72nlf 1/1 Running 0 13m 10.42.3.19 3.17.76.33 <none> <none> # on worker nodes only # nfs client |
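The worker-node NFS client steps are cut off at the end of the block above. A minimal sketch of what they would look like, mirroring the commands already run on the master and assuming the same EFS endpoint (fs-5fd6ab26.efs.us-east-2.amazonaws.com) and the /dockerdata-nfs mount point used earlier:

```shell
# on each worker node (nfs client) - hedged sketch, not from the original page
# assumes the same EFS filesystem and mount options used on the master above
sudo apt-get install nfs-common -y
sudo mkdir -p /dockerdata-nfs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  fs-5fd6ab26.efs.us-east-2.amazonaws.com:/ /dockerdata-nfs
# verify the share is mounted
df -h /dockerdata-nfs
```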
DI 20190507: ARM support - ARM-friendly install using RKE 0.2.1
Jira Legacy | ||||||
---|---|---|---|---|---|---|
|
a1.4xlarge at $0.408/hour (on-demand)
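As a rough sizing check, the a1.4xlarge rate above works out as follows for the 6-node layout used in this install (sketch arithmetic only; the 730 hours/month figure is AWS's billing convention, and actual cost varies by region and usage):

```python
# rough cost arithmetic for six a1.4xlarge nodes at $0.408/instance-hour
HOURLY_RATE = 0.408   # USD per instance-hour, from the note above
NODES = 6             # 3 control/etcd + 3 worker, as configured earlier

hourly = HOURLY_RATE * NODES          # cluster cost per hour
daily = hourly * 24                   # cluster cost per day
monthly = hourly * 730                # using 730 hours/month

print(f"cluster: ${hourly:.3f}/h, ${daily:.2f}/day, ${monthly:.2f}/month")
```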
ami-0b9bd0b532ebcf4c9
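A hedged sketch of launching an ARM node from that AMI with the AWS CLI; the key name, security group, and subnet IDs here are placeholders, not values from this page:

```shell
# launch one a1.4xlarge ARM node from the AMI listed above
# --key-name / --security-group-ids / --subnet-id are hypothetical placeholders
aws ec2 run-instances \
  --image-id ami-0b9bd0b532ebcf4c9 \
  --instance-type a1.4xlarge \
  --count 1 \
  --key-name onap_rsa \
  --security-group-ids sg-xxxxxxxx \
  --subnet-id subnet-xxxxxxxx
```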
Notes
Pre-RKE installation details are documented in Cloud Native Deployment
...