Tracking
Non-HA version of the script: https://git.onap.org/oom/tree/kubernetes/contrib/tools/rke/rke_setup.sh
Review: https://gerrit.onap.org/r/#/c/79067/
Move to https://onap.readthedocs.io/en/beijing/submodules/oom.git/docs/oom_cloud_setup_guide.html or similar when this documentation is released
Automated Installation Video of RKE install
20190227 - VMware
Versions
Currently Docker 18.06, RKE 0.1.16, Kubernetes 1.11.6, Kubectl 1.11.6, Helm 2.9.1
TODO: verify later versions of helm and a way to get RKE to install Kubernetes 1.13
Prerequisites
20190330 - AWS
Quickstart
Get your public and private keys on the Ubuntu 16.04 VM
...
Determine RKE and Docker versions
Don't just use the latest Docker version - check the RKE release page (https://github.com/rancher/rke/releases) for the supported version pair - RKE 0.1.15/Docker 17.03 and RKE 0.1.16/Docker 18.06 - see https://github.com/docker/docker-ce/releases - currently https://github.com/docker/docker-ce/releases/tag/v18.06.3-ce
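The pairing above can be encoded in a tiny helper so a script fails fast on an untested combination - an illustrative sketch, not part of rke_setup.sh; the version table is just the pairs noted above.

```shell
#!/bin/sh
# Illustrative only: map an RKE release to the Docker version it supports,
# per the pairs noted above (0.1.15 -> 17.03, 0.1.16 -> 18.06).
supported_docker() {
  case "$1" in
    0.1.15) echo "17.03" ;;
    0.1.16) echo "18.06" ;;
    *) echo "unknown" ; return 1 ;;
  esac
}

# Build the matching rancher install-docker script URL for RKE 0.1.16
DOCKER_VER="$(supported_docker 0.1.16)"
echo "https://releases.rancher.com/install-docker/${DOCKER_VER}.sh"
```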
...
...
Add your public key to authorized_keys if it is not already there - AWS injects it, OpenStack may not
Get the rke_setup.sh script from JIRA, Gerrit, or by cloning OOM once the review is merged.
Code Block | ||
---|---|---|
| ||
# on your laptop/where your cert is
# chmod 777 your cert before you scp it over
obrienbiometrics:full michaelobrien$ scp ~/wse_onap/onap_rsa ubuntu@rke0.onap.info:~/
# on the host
sudo cp onap_rsa ~/.ssh
sudo chmod 400 ~/.ssh/onap_rsa
sudo chown ubuntu:ubuntu ~/.ssh/onap_rsa
# just verify
sudo vi ~/.ssh/authorized_keys
git clone --recurse-submodules https://gerrit.onap.org/r/oom
sudo cp oom/kubernetes/contrib/tools/rke/rke_setup.sh .
sudo nohup ./rke_setup.sh -b master -s 104.209.161.210 -e onap -k onap_rsa -l ubuntu &
ubuntu@a-rke0-master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx default-http-backend-797c5bc547-55fpn 1/1 Running 0 4m
ingress-nginx nginx-ingress-controller-znhgz 1/1 Running 0 4m
kube-system canal-dqt2m 3/3 Running 0 5m
kube-system kube-dns-7588d5b5f5-pzdfh 3/3 Running 0 5m
kube-system kube-dns-autoscaler-5db9bbb766-b7vvg 1/1 Running 0 5m
kube-system metrics-server-97bc649d5-fmqjd 1/1 Running 0 4m
kube-system rke-ingress-controller-deploy-job-dxmbd 0/1 Completed 0 4m
kube-system rke-kubedns-addon-deploy-job-wqccp 0/1 Completed 0 5m
kube-system rke-metrics-addon-deploy-job-ssrgp 0/1 Completed 0 4m
kube-system rke-network-plugin-deploy-job-jkffq 0/1 Completed 0 5m
kube-system tiller-deploy-759cb9df9-rlt7v 1/1 Running 0 2m
ubuntu@a-rke0-master:~$ helm list |
Versions
Currently Docker 18.06, RKE 0.1.16, Kubernetes 1.11.6, Kubectl 1.11.6, Helm 2.12.3
TODO: verify later versions of helm and a way to get RKE to install Kubernetes 1.13
Prerequisites
Ubuntu 16.04 VM
Determine RKE and Docker versions
Don't just use the latest Docker version - check the RKE release page (https://github.com/rancher/rke/releases) for the supported version pair - RKE 0.1.15/Docker 17.03 and RKE 0.1.16/Docker 18.06 - see https://github.com/docker/docker-ce/releases - currently https://github.com/docker/docker-ce/releases/tag/v18.06.3-ce
Code Block | ||
---|---|---|
| ||
ubuntu@a-rke:~$ sudo curl https://releases.rancher.com/install-docker/18.06.sh | sh
ubuntu@a-rke:~$ sudo usermod -aG docker ubuntu
ubuntu@a-rke:~$ sudo docker version
Client:
Version: 18.06.3-ce
API version: 1.38
Go version: go1.10.3
Git commit: d7080c1
Built: Wed Feb 20 02:27:18 2019
# install RKE
sudo wget https://github.com/rancher/rke/releases/download/v0.1.16/rke_linux-amd64
mv rke_linux-amd64 rke
chmod +x rke
sudo mv ./rke /usr/local/bin/rke
ubuntu@a-rke:~$ rke --version
rke version v0.1.16 |
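With both binaries installed, a quick guard before rke up catches version drift - a sketch only; the comments show where the real values would come from on the VM.

```shell
#!/bin/sh
# Sketch: fail fast if the installed Docker/RKE versions drift from a
# tested pair (RKE v0.1.16 with Docker 18.06.x, RKE v0.1.15 with 17.03.x).
check_pair() {
  rke_v="$1"      # e.g. "v0.1.16" from: rke --version | awk '{print $3}'
  docker_v="$2"   # e.g. "18.06.3-ce" from: docker version --format '{{.Server.Version}}'
  case "$rke_v/$docker_v" in
    v0.1.16/18.06.*) echo "pair OK" ;;
    v0.1.15/17.03.*) echo "pair OK" ;;
    *) echo "untested pair: RKE $rke_v with Docker $docker_v" ; return 1 ;;
  esac
}

check_pair "v0.1.16" "18.06.3-ce"
```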
Private SSH key
scp your private key to the box - ideally to ~/.ssh - and chmod 400 it - make sure the matching public key is in authorized_keys
Elastic Reserved IP
Get a VIP or Elastic IP and assign it to your VM
Generate cluster.yml - optional
cluster.yml will be generated by the rke_setup.sh script
Code Block | ||
---|---|---|
| ||
Azure config - no need to hand build the yml
Watch the paths of your 2 keys
Also don't add an "addon" until you have one, or the config job will fail
{noformat}
ubuntu@a-rke:~$ rke config --name cluster.yml
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: ~/.ssh/onap_rsa
[+] Number of Hosts [1]:
[+] SSH Address of host (1) [none]: rke.onap.cloud
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (rke.onap.cloud) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (rke.onap.cloud) [ubuntu]:
[+] Is host (rke.onap.cloud) a Control Plane host (y/n)? [y]: y
[+] Is host (rke.onap.cloud) a Worker host (y/n)? [n]: y
[+] Is host (rke.onap.cloud) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (rke.onap.cloud) [none]:
[+] Internal IP of host (rke.onap.cloud) [none]:
[+] Docker socket path on host (rke.onap.cloud) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.11.6-rancher1]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]: no
ubuntu@a-rke:~$ sudo cat cluster.yml
# If you intened to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: rke.onap.cloud
port: "22"
internal_address: ""
role:
- controlplane
- worker
- etcd
hostname_override: ""
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/onap_rsa
labels: {}
services:
etcd:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
external_urls: []
ca_cert: ""
cert: ""
key: ""
path: ""
snapshot: null
retention: ""
creation: ""
kube-api:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
service_cluster_ip_range: 10.43.0.0/16
service_node_port_range: ""
pod_security_policy: false
kube-controller:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
cluster_cidr: 10.42.0.0/16
service_cluster_ip_range: 10.43.0.0/16
scheduler:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
kubelet:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
cluster_domain: cluster.local
infra_container_image: ""
cluster_dns_server: 10.43.0.10
fail_swap_on: false
kubeproxy:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
network:
plugin: canal
options: {}
authentication:
strategy: x509
options: {}
sans: []
system_images:
etcd: rancher/coreos-etcd:v3.2.18
alpine: rancher/rke-tools:v0.1.15
nginx_proxy: rancher/rke-tools:v0.1.15
cert_downloader: rancher/rke-tools:v0.1.15
kubernetes_services_sidecar: rancher/rke-tools:v0.1.15
kubedns: rancher/k8s-dns-kube-dns-amd64:1.14.10
dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10
kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.10
kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0
kubernetes: rancher/hyperkube:v1.11.6-rancher1
flannel: rancher/coreos-flannel:v0.10.0
flannel_cni: rancher/coreos-flannel-cni:v0.3.0
calico_node: rancher/calico-node:v3.1.3
calico_cni: rancher/calico-cni:v3.1.3
calico_controllers: ""
calico_ctl: rancher/calico-ctl:v2.0.0
canal_node: rancher/calico-node:v3.1.3
canal_cni: rancher/calico-cni:v3.1.3
canal_flannel: rancher/coreos-flannel:v0.10.0
wave_node: weaveworks/weave-kube:2.1.2
weave_cni: weaveworks/weave-npc:2.1.2
pod_infra_container: rancher/pause-amd64:3.1
ingress: rancher/nginx-ingress-controller:0.16.2-rancher1
ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.4
metrics_server: rancher/metrics-server-amd64:v0.2.1
ssh_key_path: ~/.ssh/onap_rsa
ssh_agent_auth: false
authorization:
mode: rbac
options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
provider: ""
options: {}
node_selector: {}
extra_args: {}
cluster_name: ""
cloud_provider:
name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
address: ""
port: ""
user: ""
ssh_key: ""
ssh_key_path: ""
monitoring:
provider: ""
options: {}
{noformat}
|
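Before running rke up against a generated or hand-edited cluster.yml like the one above, a quick role count catches an empty control plane or etcd plane - a sketch against a throwaway sample; point it at the real ~/cluster.yml in practice.

```shell
#!/bin/sh
# Sketch: count node roles in a cluster.yml before `rke up`.
# The sample mirrors the single-node layout shown above.
cat > /tmp/cluster-sample.yml <<'EOF'
nodes:
- address: rke.onap.cloud
  role:
  - controlplane
  - worker
  - etcd
EOF

count_role() {
  # $1 = role name, $2 = cluster.yml path; counts "- <role>" list entries
  grep -c -- "- $1" "$2"
}

count_role controlplane /tmp/cluster-sample.yml
count_role etcd /tmp/cluster-sample.yml
```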
Kubernetes Single Node Developer Installation
Code Block | ||
---|---|---|
| ||
sudo ./rke_setup.sh -b master -s localhost -e onap -l ubuntu |
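Once the install completes, RKE drops the admin kubeconfig beside cluster.yml as kube_config_cluster.yml (the default name seen in the logs above); a sketch for pointing kubectl at it:

```shell
#!/bin/sh
# RKE writes the admin kubeconfig as kube_config_cluster.yml next to cluster.yml.
kubeconfig_path() {
  # $1 = directory holding cluster.yml
  echo "${1%/}/kube_config_cluster.yml"
}

KUBECONFIG="$(kubeconfig_path "$HOME")"
export KUBECONFIG
echo "$KUBECONFIG"
# then: kubectl get nodes ; kubectl get pods --all-namespaces
```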
Kubernetes HA Cluster Production Installation
Design Issues
DI 20190225-1: RKE/Docker version pair
As of 20190215 RKE 0.1.16 supports Docker 18.06-ce (and 18.09 non-ce), up from 0.1.15, which supports 17.03
https://github.com/docker/docker-ce/releases/tag/v18.06.3-ce
https://github.com/rancher/rke/releases/tag/v0.1.16
Code Block | ||
---|---|---|
| ||
ubuntu@a-rke:~$ sudo rke up
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [rke.onap.cloud]
FATA[0000] Unsupported Docker version found [18.06.3-ce], supported versions are [1.11.x 1.12.x 1.13.x 17.03.x] |
DI 20190225-2: RKE upgrade from 0.1.15 to 0.1.16 - not working
Run rke remove, regenerate the yaml (or hand-edit the versions), then run rke up
Code Block | ||
---|---|---|
| ||
ubuntu@a-rke:~$ sudo rke remove
Are you sure you want to remove Kubernetes cluster [y/n]: y
INFO[0002] Tearing down Kubernetes cluster
INFO[0002] [dialer] Setup tunnel for host [rke.onap.cloud]
INFO[0002] [worker] Tearing down Worker Plane..
INFO[0002] [remove/kubelet] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [remove/kube-proxy] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [remove/service-sidekick] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [worker] Successfully tore down Worker Plane..
INFO[0003] [controlplane] Tearing down the Controller Plane..
INFO[0003] [remove/kube-apiserver] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [remove/kube-controller-manager] Successfully removed container on host [rke.onap.cloud]
INFO[0004] [remove/kube-scheduler] Successfully removed container on host [rke.onap.cloud]
INFO[0004] [controlplane] Host [rke.onap.cloud] is already a worker host, skipping delete kubelet and kubeproxy.
INFO[0004] [controlplane] Successfully tore down Controller Plane..
INFO[0004] [etcd] Tearing down etcd plane..
INFO[0004] [remove/etcd] Successfully removed container on host [rke.onap.cloud]
INFO[0004] [etcd] Successfully tore down etcd plane..
INFO[0004] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0004] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0004] [hosts] Running cleaner container on host [rke.onap.cloud]
INFO[0005] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud]
INFO[0005] [hosts] Removing cleaner container on host [rke.onap.cloud]
INFO[0005] [hosts] Removing dead container logs on host [rke.onap.cloud]
INFO[0006] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud]
INFO[0006] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [hosts] Successfully cleaned up host [rke.onap.cloud]
INFO[0006] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0006] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0006] [hosts] Running cleaner container on host [rke.onap.cloud]
INFO[0007] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud]
INFO[0008] [hosts] Removing cleaner container on host [rke.onap.cloud]
INFO[0008] [hosts] Removing dead container logs on host [rke.onap.cloud]
INFO[0008] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud]
INFO[0009] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud]
INFO[0009] [hosts] Successfully cleaned up host [rke.onap.cloud]
INFO[0009] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0009] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0009] [hosts] Running cleaner container on host [rke.onap.cloud]
INFO[0010] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud]
INFO[0010] [hosts] Removing cleaner container on host [rke.onap.cloud]
INFO[0010] [hosts] Removing dead container logs on host [rke.onap.cloud]
INFO[0011] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud]
INFO[0011] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud]
INFO[0011] [hosts] Successfully cleaned up host [rke.onap.cloud]
INFO[0011] Removing local admin Kubeconfig: ./kube_config_cluster.yml
INFO[0011] Local admin Kubeconfig removed successfully
INFO[0011] Cluster removed successfully
ubuntu@a-rke:~$ rke config --name cluster.ym
ubuntu@a-rke:~$ sudo rke up
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [rke.onap.cloud]
INFO[0000] [network] Deploying port listener containers
INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [rke.onap.cloud]
INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [rke.onap.cloud]
INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [rke.onap.cloud]
INFO[0002] [network] Port listener containers deployed successfully
INFO[0002] [network] Running control plane -> etcd port checks
INFO[0003] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud]
INFO[0003] [network] Running control plane -> worker port checks
INFO[0004] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud]
INFO[0004] [network] Running workers -> control plane port checks
INFO[0005] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud]
INFO[0005] [network] Checking KubeAPI port Control Plane hosts
INFO[0005] [network] Removing port listener containers
INFO[0005] [remove/rke-etcd-port-listener] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [remove/rke-cp-port-listener] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [remove/rke-worker-port-listener] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [network] Port listener containers removed successfully
INFO[0006] [certificates] Attempting to recover certificates from backup on [etcd,controlPlane] hosts
INFO[0007] [certificates] No Certificate backup found on [etcd,controlPlane] hosts
INFO[0007] [certificates] Generating CA kubernetes certificates
INFO[0007] [certificates] Generating Kubernetes API server certficates
INFO[0008] [certificates] Generating Kube Controller certificates
INFO[0008] [certificates] Generating Kube Scheduler certificates
INFO[0008] [certificates] Generating Kube Proxy certificates
INFO[0009] [certificates] Generating Node certificate
INFO[0009] [certificates] Generating admin certificates and kubeconfig
INFO[0009] [certificates] Generating etcd-rke.onap.cloud certificate and key
INFO[0009] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
INFO[0009] [certificates] Generating Kubernetes API server proxy client certificates
INFO[0010] [certificates] Temporarily saving certs to [etcd,controlPlane] hosts
INFO[0016] [certificates] Saved certs to [etcd,controlPlane] hosts
INFO[0016] [reconcile] Reconciling cluster state
INFO[0016] [reconcile] This is newly generated cluster
INFO[0016] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0022] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0022] Pre-pulling kubernetes images
INFO[0022] Kubernetes images pulled successfully
INFO[0022] [etcd] Building up etcd plane..
INFO[0023] [etcd] Successfully started [etcd] container on host [rke.onap.cloud]
INFO[0023] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [rke.onap.cloud]
INFO[0028] [certificates] Successfully started [rke-bundle-cert] container on host [rke.onap.cloud]
INFO[0029] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [rke.onap.cloud]
INFO[0029] [etcd] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0030] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0030] [etcd] Successfully started etcd plane..
INFO[0030] [controlplane] Building up Controller Plane..
INFO[0031] [controlplane] Successfully started [kube-apiserver] container on host [rke.onap.cloud]
INFO[0031] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [rke.onap.cloud]
INFO[0045] [healthcheck] service [kube-apiserver] on host [rke.onap.cloud] is healthy
INFO[0046] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0046] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0047] [controlplane] Successfully started [kube-controller-manager] container on host [rke.onap.cloud]
INFO[0047] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [rke.onap.cloud]
INFO[0052] [healthcheck] service [kube-controller-manager] on host [rke.onap.cloud] is healthy
INFO[0053] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0053] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0054] [controlplane] Successfully started [kube-scheduler] container on host [rke.onap.cloud]
INFO[0054] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [rke.onap.cloud]
INFO[0059] [healthcheck] service [kube-scheduler] on host [rke.onap.cloud] is healthy
INFO[0060] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0060] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0060] [controlplane] Successfully started Controller Plane..
INFO[0060] [authz] Creating rke-job-deployer ServiceAccount
INFO[0060] [authz] rke-job-deployer ServiceAccount created successfully
INFO[0060] [authz] Creating system:node ClusterRoleBinding
INFO[0060] [authz] system:node ClusterRoleBinding created successfully
INFO[0060] [certificates] Save kubernetes certificates as secrets
INFO[0060] [certificates] Successfully saved certificates as kubernetes secret [k8s-certs]
INFO[0060] [state] Saving cluster state to Kubernetes
INFO[0061] [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state
INFO[0061] [state] Saving cluster state to cluster nodes
INFO[0061] [state] Successfully started [cluster-state-deployer] container on host [rke.onap.cloud]
INFO[0062] [remove/cluster-state-deployer] Successfully removed container on host [rke.onap.cloud]
INFO[0062] [worker] Building up Worker Plane..
INFO[0062] [remove/service-sidekick] Successfully removed container on host [rke.onap.cloud]
INFO[0063] [worker] Successfully started [kubelet] container on host [rke.onap.cloud]
INFO[0063] [healthcheck] Start Healthcheck on service [kubelet] on host [rke.onap.cloud]
INFO[0068] [healthcheck] service [kubelet] on host [rke.onap.cloud] is healthy
INFO[0069] [worker] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0070] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0070] [worker] Successfully started [kube-proxy] container on host [rke.onap.cloud]
INFO[0070] [healthcheck] Start Healthcheck on service [kube-proxy] on host [rke.onap.cloud]
INFO[0076] [healthcheck] service [kube-proxy] on host [rke.onap.cloud] is healthy
INFO[0076] [worker] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0077] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0077] [worker] Successfully started Worker Plane..
INFO[0077] [sync] Syncing nodes Labels and Taints
INFO[0077] [sync] Successfully synced nodes Labels and Taints
INFO[0077] [network] Setting up network plugin: canal
INFO[0077] [addons] Saving addon ConfigMap to Kubernetes
INFO[0077] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin
INFO[0077] [addons] Executing deploy job..
INFO[0082] [addons] Setting up KubeDNS
INFO[0082] [addons] Saving addon ConfigMap to Kubernetes
INFO[0082] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon
INFO[0082] [addons] Executing deploy job..
INFO[0087] [addons] KubeDNS deployed successfully..
INFO[0087] [addons] Setting up Metrics Server
INFO[0087] [addons] Saving addon ConfigMap to Kubernetes
INFO[0087] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-metrics-addon
INFO[0087] [addons] Executing deploy job..
INFO[0092] [addons] KubeDNS deployed successfully..
INFO[0092] [ingress] Setting up nginx ingress controller
INFO[0092] [addons] Saving addon ConfigMap to Kubernetes
INFO[0092] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-ingress-controller
INFO[0092] [addons] Executing deploy job..
INFO[0097] [ingress] ingress controller nginx is successfully deployed
INFO[0097] [addons] Setting up user addons
INFO[0097] [addons] Checking for included user addons
WARN[0097] [addons] Unable to determine if is a file path or url, skipping
INFO[0097] [addons] Deploying rke-user-includes-addons
INFO[0097] [addons] Saving addon ConfigMap to Kubernetes
INFO[0097] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-user-includes-addons
INFO[0097] [addons] Executing deploy job..
WARN[0128] Failed to deploy addon execute job [rke-user-includes-addons]: Failed to get job complete status: <nil>
INFO[0128] Finished building Kubernetes cluster successfully
ubuntu@a-rke:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec26c4bd24b5 846921f0fe0e "/server" 10 minutes ago Up 10 minutes k8s_default-http-backend_default-http-backend-797c5bc547-45msr_ingress-nginx_0eddfe19-394e-11e9-b708-000d3a0e23f3_0
f8d5db205e14 8a7739f672b4 "/sidecar --v=2 --lo…" 10 minutes ago Up 10 minutes k8s_sidecar_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
490461545ae4 rancher/metrics-server-amd64 "/metrics-server --s…" 10 minutes ago Up 10 minutes k8s_metrics-server_metrics-server-97bc649d5-q84tz_kube-system_0c566ec8-394e-11e9-b708-000d3a0e23f3_0
aaf03b62bd41 6816817d9dce "/dnsmasq-nanny -v=2…" 10 minutes ago Up 10 minutes k8s_dnsmasq_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
58ec007db72f 55ffe31ac578 "/kube-dns --domain=…" 10 minutes ago Up 10 minutes k8s_kubedns_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
0a95c06f6aa6 e183460c484d "/cluster-proportion…" 10 minutes ago Up 10 minutes k8s_autoscaler_kube-dns-autoscaler-5db9bbb766-6slz7_kube-system_08b5495c-394e-11e9-b708-000d3a0e23f3_0
968a7c99b210 rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_default-http-backend-797c5bc547-45msr_ingress-nginx_0eddfe19-394e-11e9-b708-000d3a0e23f3_0
69969b331e49 rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_metrics-server-97bc649d5-q84tz_kube-system_0c566ec8-394e-11e9-b708-000d3a0e23f3_0
baa5f03c16ff rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
82b2a9f640cb rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_kube-dns-autoscaler-5db9bbb766-6slz7_kube-system_08b5495c-394e-11e9-b708-000d3a0e23f3_0
953a4d4be0c1 df4469c42185 "/usr/bin/dumb-init …" 10 minutes ago Up 10 minutes k8s_nginx-ingress-controller_nginx-ingress-controller-dfhp8_ingress-nginx_0ed3bdbf-394e-11e9-b708-000d3a0e23f3_0
cce552840749 rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_nginx-ingress-controller-dfhp8_ingress-nginx_0ed3bdbf-394e-11e9-b708-000d3a0e23f3_0
baa65f9c6f97 f0fad859c909 "/opt/bin/flanneld -…" 10 minutes ago Up 10 minutes k8s_kube-flannel_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
1736ce68f41a 9f355e076ea7 "/install-cni.sh" 10 minutes ago Up 10 minutes k8s_install-cni_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
615d3f702ee7 7eca10056c8e "start_runit" 10 minutes ago Up 10 minutes k8s_calico-node_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
1c4a702f0f18 rancher/pause-amd64:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
0da1cada08e1 rancher/hyperkube:v1.11.6-rancher1 "/opt/rke-tools/entr…" 10 minutes ago Up 10 minutes kube-proxy
57f44998f34a rancher/hyperkube:v1.11.6-rancher1 "/opt/rke-tools/entr…" 11 minutes ago Up 11 minutes kubelet
50f424c4daec rancher/hyperkube:v1.11.6-rancher1 "/opt/rke-tools/entr…" 11 minutes ago Up 11 minutes kube-scheduler
502d327912d9 rancher/hyperkube:v1.11.6-rancher1 "/opt/rke-tools/entr…" 11 minutes ago Up 11 minutes kube-controller-manager
9fc706bbf3a5 rancher/hyperkube:v1.11.6-rancher1 "/opt/rke-tools/entr…" 11 minutes ago Up 11 minutes kube-apiserver
2e7630c2047c rancher/coreos-etcd:v3.2.18 "/usr/local/bin/etcd…" 11 minutes ago Up 11 minutes etcd
fef566337eb6 rancher/rke-tools:v0.1.15 "/opt/rke-tools/rke-…" 26 minutes ago Up 26 minutes etcd-rolling-snapshots
amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx default-http-backend-797c5bc547-m8hbx 1/1 Running 0 1h
ingress-nginx nginx-ingress-controller-2v7w7 1/1 Running 0 1h
kube-system canal-thmfg 3/3 Running 0 1h
kube-system kube-dns-7588d5b5f5-j66s8 3/3 Running 0 1h
kube-system kube-dns-autoscaler-5db9bbb766-rg5n8 1/1 Running 0 1h
kube-system metrics-server-97bc649d5-jd2rr 1/1 Running 0 1h
kube-system rke-ingress-controller-deploy-job-znp9n 0/1 Completed 0 1h
kube-system rke-kubedns-addon-deploy-job-dzxsj 0/1 Completed 0 1h
kube-system rke-metrics-addon-deploy-job-gpm4j 0/1 Completed 0 1h
kube-system rke-network-plugin-deploy-job-kqdds 0/1 Completed 0 1h
kube-system tiller-deploy-69458576b-khgr5 1/1 Running 0 1h |
DI 20190226-1: RKE up segmentation fault on 0.1.16 - use correct user
Code Block | ||
---|---|---|
| ||
amdocs@obriensystemsu0:~$ sudo rke up
Segmentation fault (core dumped)
# issue was I was using ubuntu as the yml user not amdocs in this case for a particular VM |
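The segfault above traced back to a user mismatch in cluster.yml; a sketch that extracts the configured ssh user so it can be compared against the account that actually owns the key on the node (the sample file here is illustrative):

```shell
#!/bin/sh
# Sketch: read the ssh user from a cluster.yml so it can be checked against
# the real account on the node. Sample file stands in for ~/cluster.yml.
cat > /tmp/cluster-user.yml <<'EOF'
nodes:
- address: rke.onap.cloud
  user: ubuntu
EOF

node_user() {
  grep -m1 'user:' "$1" | awk '{print $2}'
}

node_user /tmp/cluster-user.yml
# compare with the output of `whoami` on the target VM
```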
DI 20190227-1: Verify no 110 pod limit per VM
https://forums.rancher.com/t/solved-setting-max-pods/11866
Code Block | ||
---|---|---|
| ||
kubelet:
image: ""
extra_args:
max-pods: 900 |
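A sketch for checking the override: the grep is a pre-flight check against the yml, and the commented kubectl line (requires a running cluster) reads the pod capacity the kubelet actually advertises.

```shell
#!/bin/sh
# Sketch: pre-flight check that the max-pods override is present in cluster.yml.
# The sample fragment mirrors the kubelet stanza above.
cat > /tmp/kubelet-args.yml <<'EOF'
  kubelet:
    extra_args:
      max-pods: 900
EOF

grep -q 'max-pods: 900' /tmp/kubelet-args.yml && echo "max-pods override present"

# After rke up, confirm on the cluster (needs kubectl + kubeconfig):
# kubectl get nodes -o jsonpath='{.items[0].status.capacity.pods}'
```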
DI 20190228-1: deploy casablanca MR to RKE under K8S 1.11.6, Docker 18.06, Helm 2.12.3
Code Block | ||
---|---|---|
| ||
sudo git clone https://gerrit.onap.org/r/logging-analytics
sudo wget https://git.onap.org/oom/plain/kubernetes/onap/resources/environments/dev.yaml
sudo cp dev.yaml dev0.yaml
sudo vi dev0.yaml
sudo cp dev0.yaml dev1.yaml
sudo cp logging-analytics/deploy/cd.sh .
sudo ./cd.sh -b casablanca -e onap -p false nexus3.onap.org:10001 -f true -s 300 -c true -d false -w false -r false
No good for helm 2.12.3 deployment - just using 2.9.1 for now:
Error: Chart incompatible with Tiller v2.12.3
In the casablanca branch only - flip the tillerVersion pin in
https://git.onap.org/oom/tree/kubernetes/onap/Chart.yaml?h=casablanca#n24
tillerVersion: "~2.9.1" |
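The flip can be scripted against a local copy of Chart.yaml; the relaxed constraint below is illustrative, not the committed OOM fix - pick whichever range matches the Tiller actually deployed.

```shell
#!/bin/sh
# Sketch: relax the tillerVersion pin in a local Chart.yaml copy so a newer
# Tiller (e.g. 2.12.3) passes the compatibility check. The ">=2.9.1" value
# is an example constraint, not the committed OOM change.
cat > /tmp/Chart.yaml <<'EOF'
tillerVersion: "~2.9.1"
EOF

sed -i 's/tillerVersion: "~2.9.1"/tillerVersion: ">=2.9.1"/' /tmp/Chart.yaml
grep tillerVersion /tmp/Chart.yaml
```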
DI 20190305-1: Azure 256G VM full ONAP Testing
Code Block | ||
---|---|---|
| ||
obrienbiometrics:oom michaelobrien$ ssh ubuntu@onap-dmz.onap.cloud
./oom_deployment.sh -b master -s rke.onap.cloud -e onap -r a_rke0_master -t _arm_deploy_onap_cd.json -p _arm_deploy_onap_rke_z_parameters.json |
DI 20190425: HA RKE install Testing
Manual first for RC0, later retrofit the script in https://git.onap.org/oom/tree/kubernetes/contrib/tools/rke/rke_setup.sh and move/adjust the heat template in https://git.onap.org/logging-analytics/tree/deploy/heat/logging_openstack_13_16g.yaml
Installing on 6 nodes on AWS (windriver is having an issue right now).
We are good on RKE 0.2.1, Ubuntu 18.04 / Kubernetes/kubectl 1.13.5 / Helm 2.13.1 / Docker 18.09.5
https://github.com/rancher/rke/releases RKE 0.2.2 has experimental k8s 1.14 support - running with 0.2.1 for now
Still need to test ONAP deployments
NFS/EFS will be set up later, before deployment
Code Block | ||
---|---|---|
| ||
# on all VMs (control, etcd, worker)
# move the key to all vms
scp ~/wse_onap/onap_rsa ubuntu@rke0.onap.info:~/
sudo curl https://releases.rancher.com/install-docker/18.09.sh | sh
sudo usermod -aG docker ubuntu
# nfs server
# on control/etcd nodes only
# from script
sudo wget https://github.com/rancher/rke/releases/download/v0.2.1/rke_linux-amd64
mv rke_linux-amd64 rke
sudo chmod +x rke
sudo mv ./rke /usr/local/bin/rke
# one time setup of the yaml or use the generated one
ubuntu@ip-172-31-38-182:~$ sudo rke config
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: ~/.ssh/onap_rsa
[+] Number of Hosts [1]: 6
[+] SSH Address of host (1) [none]: 3.14.102.175
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (3.14.102.175) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (3.14.102.175) [ubuntu]: ubuntu
[+] Is host (3.14.102.175) a Control Plane host (y/n)? [y]: y
[+] Is host (3.14.102.175) a Worker host (y/n)? [n]: n
[+] Is host (3.14.102.175) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (3.14.102.175) [none]:
[+] Internal IP of host (3.14.102.175) [none]:
[+] Docker socket path on host (3.14.102.175) [/var/run/docker.sock]:
[+] SSH Address of host (2) [none]: 18.220.62.6
[+] SSH Port of host (2) [22]:
[+] SSH Private Key Path of host (18.220.62.6) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.220.62.6) [ubuntu]:
[+] Is host (18.220.62.6) a Control Plane host (y/n)? [y]: y
[+] Is host (18.220.62.6) a Worker host (y/n)? [n]: n
[+] Is host (18.220.62.6) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (18.220.62.6) [none]:
[+] Internal IP of host (18.220.62.6) [none]:
[+] Docker socket path on host (18.220.62.6) [/var/run/docker.sock]:
[+] SSH Address of host (3) [none]: 18.217.96.12
[+] SSH Port of host (3) [22]:
[+] SSH Private Key Path of host (18.217.96.12) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.217.96.12) [ubuntu]:
[+] Is host (18.217.96.12) a Control Plane host (y/n)? [y]: y
[+] Is host (18.217.96.12) a Worker host (y/n)? [n]: n
[+] Is host (18.217.96.12) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (18.217.96.12) [none]:
[+] Internal IP of host (18.217.96.12) [none]:
[+] Docker socket path on host (18.217.96.12) [/var/run/docker.sock]:
[+] SSH Address of host (4) [none]: 18.188.214.137
[+] SSH Port of host (4) [22]:
[+] SSH Private Key Path of host (18.188.214.137) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.188.214.137) [ubuntu]:
[+] Is host (18.188.214.137) a Control Plane host (y/n)? [y]: n
[+] Is host (18.188.214.137) a Worker host (y/n)? [n]: y
[+] Is host (18.188.214.137) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (18.188.214.137) [none]:
[+] Internal IP of host (18.188.214.137) [none]:
[+] Docker socket path on host (18.188.214.137) [/var/run/docker.sock]:
[+] SSH Address of host (5) [none]: 18.220.70.253
[+] SSH Port of host (5) [22]:
[+] SSH Private Key Path of host (18.220.70.253) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.220.70.253) [ubuntu]:
[+] Is host (18.220.70.253) a Control Plane host (y/n)? [y]: n
[+] Is host (18.220.70.253) a Worker host (y/n)? [n]: y
[+] Is host (18.220.70.253) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (18.220.70.253) [none]:
[+] Internal IP of host (18.220.70.253) [none]:
[+] Docker socket path on host (18.220.70.253) [/var/run/docker.sock]:
[+] SSH Address of host (6) [none]: 3.17.76.33
[+] SSH Port of host (6) [22]:
[+] SSH Private Key Path of host (3.17.76.33) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (3.17.76.33) [ubuntu]:
[+] Is host (3.17.76.33) a Control Plane host (y/n)? [y]: n
[+] Is host (3.17.76.33) a Worker host (y/n)? [n]: y
[+] Is host (3.17.76.33) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (3.17.76.33) [none]:
[+] Internal IP of host (3.17.76.33) [none]:
[+] Docker socket path on host (3.17.76.33) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.13.5-rancher1]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:
# new
[+] Cluster domain [cluster.local]:
ubuntu@ip-172-31-38-182:~$ sudo rke up
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] Generating CA kubernetes certificates
INFO[0000] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
INFO[0000] [certificates] Generating Kubernetes API server certificates
INFO[0000] [certificates] Generating Service account token key
INFO[0000] [certificates] Generating etcd-3.14.102.175 certificate and key
INFO[0000] [certificates] Generating etcd-18.220.62.6 certificate and key
INFO[0001] [certificates] Generating etcd-18.217.96.12 certificate and key
INFO[0001] [certificates] Generating Kube Controller certificates
INFO[0001] [certificates] Generating Kube Scheduler certificates
INFO[0001] [certificates] Generating Kube Proxy certificates
INFO[0001] [certificates] Generating Node certificate
INFO[0001] [certificates] Generating admin certificates and kubeconfig
INFO[0001] [certificates] Generating Kubernetes API server proxy client certificates
INFO[0002] Successfully Deployed state file at [./cluster.rkestate]
INFO[0002] Building Kubernetes cluster
INFO[0002] [dialer] Setup tunnel for host [3.14.102.175]
INFO[0002] [dialer] Setup tunnel for host [18.188.214.137]
INFO[0002] [dialer] Setup tunnel for host [18.220.70.253]
INFO[0002] [dialer] Setup tunnel for host [18.220.62.6]
INFO[0002] [dialer] Setup tunnel for host [18.217.96.12]
INFO[0002] [dialer] Setup tunnel for host [3.17.76.33]
INFO[0002] [network] Deploying port listener containers
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.220.62.6]
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [3.14.102.175]
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.217.96.12]
INFO[0006] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.220.62.6]
INFO[0006] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.217.96.12]
INFO[0006] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [3.14.102.175]
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [18.220.62.6]
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [18.217.96.12]
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [3.14.102.175]
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [18.217.96.12]
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [18.220.62.6]
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [3.14.102.175]
INFO[0008] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.188.214.137]
INFO[0008] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.220.70.253]
INFO[0008] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [3.17.76.33]
INFO[0012] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [3.17.76.33]
INFO[0012] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.220.70.253]
INFO[0012] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.188.214.137]
INFO[0013] [network] Successfully started [rke-worker-port-listener] container on host [3.17.76.33]
INFO[0013] [network] Successfully started [rke-worker-port-listener] container on host [18.220.70.253]
INFO[0013] [network] Successfully started [rke-worker-port-listener] container on host [18.188.214.137]
INFO[0013] [network] Port listener containers deployed successfully
INFO[0013] [network] Running etcd <-> etcd port checks
INFO[0013] [network] Successfully started [rke-port-checker] container on host [18.220.62.6]
INFO[0013] [network] Successfully started [rke-port-checker] container on host [3.14.102.175]
INFO[0013] [network] Successfully started [rke-port-checker] container on host [18.217.96.12]
INFO[0014] [network] Running control plane -> etcd port checks
INFO[0014] [network] Successfully started [rke-port-checker] container on host [18.220.62.6]
INFO[0014] [network] Successfully started [rke-port-checker] container on host [3.14.102.175]
INFO[0014] [network] Successfully started [rke-port-checker] container on host [18.217.96.12]
INFO[0014] [network] Running control plane -> worker port checks
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.220.62.6]
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.217.96.12]
INFO[0015] [network] Successfully started [rke-port-checker] container on host [3.14.102.175]
INFO[0015] [network] Running workers -> control plane port checks
INFO[0015] [network] Successfully started [rke-port-checker] container on host [3.17.76.33]
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.220.70.253]
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.188.214.137]
INFO[0016] [network] Checking KubeAPI port Control Plane hosts
INFO[0016] [network] Removing port listener containers
INFO[0016] [remove/rke-etcd-port-listener] Successfully removed container on host [18.220.62.6]
INFO[0016] [remove/rke-etcd-port-listener] Successfully removed container on host [3.14.102.175]
INFO[0016]
[remove/rke-etcd-port-listener] Successfully removed container on host [18.217.96.12] INFO[0016] [remove/rke-cp-port-listener] Successfully removed container on host [18.217.96.12] INFO[0016] [remove/rke-cp-port-listener] Successfully removed container on host [3.14.102.175] INFO[0016] [remove/rke-cp-port-listener] Successfully removed container on host [18.220.62.6] INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [18.220.70.253] INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [3.17.76.33] INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [18.188.214.137] INFO[0017] [network] Port listener containers removed successfully INFO[0017] [certificates] Deploying kubernetes certificates to Cluster nodes INFO[0022] [reconcile] Rebuilding and updating local kube config INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] INFO[0022] [certificates] Successfully deployed kubernetes certificates to Cluster nodes INFO[0022] [reconcile] Reconciling cluster state INFO[0022] [reconcile] This is newly generated cluster INFO[0022] Pre-pulling kubernetes images INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [3.14.102.175] INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.188.214.137] INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.70.253] INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.217.96.12] INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [3.17.76.33] INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.62.6] INFO[0038] [pre-deploy] Successfully 
pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.62.6] INFO[0038] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.70.253] INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [3.17.76.33] INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.217.96.12] INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.188.214.137] INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [3.14.102.175] INFO[0039] Kubernetes images pulled successfully INFO[0039] [etcd] Building up etcd plane.. INFO[0039] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24-rancher1] on host [3.14.102.175] INFO[0041] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24-rancher1] on host [3.14.102.175] INFO[0051] [etcd] Successfully started [etcd] container on host [3.14.102.175] INFO[0051] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [3.14.102.175] INFO[0052] [etcd] Successfully started [etcd-rolling-snapshots] container on host [3.14.102.175] INFO[0057] [certificates] Successfully started [rke-bundle-cert] container on host [3.14.102.175] INFO[0058] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [3.14.102.175] INFO[0058] [etcd] Successfully started [rke-log-linker] container on host [3.14.102.175] INFO[0058] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] INFO[0058] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.220.62.6] INFO[0063] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.220.62.6] INFO[0069] [etcd] Successfully started [etcd] container on host [18.220.62.6] INFO[0069] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [18.220.62.6] INFO[0069] [etcd] Successfully started 
[etcd-rolling-snapshots] container on host [18.220.62.6] INFO[0075] [certificates] Successfully started [rke-bundle-cert] container on host [18.220.62.6] INFO[0075] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [18.220.62.6] INFO[0076] [etcd] Successfully started [rke-log-linker] container on host [18.220.62.6] INFO[0076] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] INFO[0076] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.217.96.12] INFO[0078] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.217.96.12] INFO[0078] [etcd] Successfully started [etcd] container on host [18.217.96.12] INFO[0078] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [18.217.96.12] INFO[0078] [etcd] Successfully started [etcd-rolling-snapshots] container on host [18.217.96.12] INFO[0084] [certificates] Successfully started [rke-bundle-cert] container on host [18.217.96.12] INFO[0084] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [18.217.96.12] INFO[0085] [etcd] Successfully started [rke-log-linker] container on host [18.217.96.12] INFO[0085] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] INFO[0085] [etcd] Successfully started etcd plane.. Checking etcd cluster health INFO[0086] [controlplane] Building up Controller Plane.. 
INFO[0086] [controlplane] Successfully started [kube-apiserver] container on host [18.220.62.6] INFO[0086] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [18.220.62.6] INFO[0086] [controlplane] Successfully started [kube-apiserver] container on host [3.14.102.175] INFO[0086] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [3.14.102.175] INFO[0086] [controlplane] Successfully started [kube-apiserver] container on host [18.217.96.12] INFO[0086] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [18.217.96.12] INFO[0098] [healthcheck] service [kube-apiserver] on host [18.220.62.6] is healthy INFO[0099] [healthcheck] service [kube-apiserver] on host [18.217.96.12] is healthy INFO[0099] [controlplane] Successfully started [rke-log-linker] container on host [18.220.62.6] INFO[0099] [controlplane] Successfully started [rke-log-linker] container on host [18.217.96.12] INFO[0099] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] INFO[0099] [healthcheck] service [kube-apiserver] on host [3.14.102.175] is healthy INFO[0099] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] INFO[0099] [controlplane] Successfully started [kube-controller-manager] container on host [18.220.62.6] INFO[0099] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [18.220.62.6] INFO[0100] [controlplane] Successfully started [kube-controller-manager] container on host [18.217.96.12] INFO[0100] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [18.217.96.12] INFO[0100] [controlplane] Successfully started [rke-log-linker] container on host [3.14.102.175] INFO[0100] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] INFO[0100] [healthcheck] service [kube-controller-manager] on host [18.220.62.6] is healthy INFO[0100] [healthcheck] service [kube-controller-manager] on host [18.217.96.12] is healthy INFO[0100] 
[controlplane] Successfully started [kube-controller-manager] container on host [3.14.102.175] INFO[0100] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [3.14.102.175] INFO[0101] [controlplane] Successfully started [rke-log-linker] container on host [18.220.62.6] INFO[0101] [controlplane] Successfully started [rke-log-linker] container on host [18.217.96.12] INFO[0101] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] INFO[0101] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] INFO[0101] [controlplane] Successfully started [kube-scheduler] container on host [18.220.62.6] INFO[0101] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [18.220.62.6] INFO[0101] [healthcheck] service [kube-controller-manager] on host [3.14.102.175] is healthy INFO[0101] [controlplane] Successfully started [kube-scheduler] container on host [18.217.96.12] INFO[0101] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [18.217.96.12] INFO[0102] [controlplane] Successfully started [rke-log-linker] container on host [3.14.102.175] INFO[0102] [healthcheck] service [kube-scheduler] on host [18.220.62.6] is healthy INFO[0102] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] INFO[0102] [healthcheck] service [kube-scheduler] on host [18.217.96.12] is healthy INFO[0103] [controlplane] Successfully started [kube-scheduler] container on host [3.14.102.175] INFO[0103] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [3.14.102.175] INFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [18.220.62.6] INFO[0103] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] INFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [18.217.96.12] INFO[0103] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] INFO[0103] [healthcheck] service 
[kube-scheduler] on host [3.14.102.175] is healthy INFO[0104] [controlplane] Successfully started [rke-log-linker] container on host [3.14.102.175] INFO[0104] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] INFO[0104] [controlplane] Successfully started Controller Plane.. INFO[0104] [authz] Creating rke-job-deployer ServiceAccount INFO[0104] [authz] rke-job-deployer ServiceAccount created successfully INFO[0104] [authz] Creating system:node ClusterRoleBinding INFO[0104] [authz] system:node ClusterRoleBinding created successfully INFO[0104] Successfully Deployed state file at [./cluster.rkestate] INFO[0104] [state] Saving full cluster state to Kubernetes INFO[0104] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: cluster-state INFO[0104] [worker] Building up Worker Plane.. INFO[0104] [sidekick] Sidekick container already created on host [18.220.62.6] INFO[0104] [sidekick] Sidekick container already created on host [3.14.102.175] INFO[0104] [sidekick] Sidekick container already created on host [18.217.96.12] INFO[0105] [worker] Successfully started [kubelet] container on host [3.14.102.175] INFO[0105] [healthcheck] Start Healthcheck on service [kubelet] on host [3.14.102.175] INFO[0105] [worker] Successfully started [kubelet] container on host [18.220.62.6] INFO[0105] [healthcheck] Start Healthcheck on service [kubelet] on host [18.220.62.6] INFO[0105] [worker] Successfully started [kubelet] container on host [18.217.96.12] INFO[0105] [healthcheck] Start Healthcheck on service [kubelet] on host [18.217.96.12] INFO[0105] [worker] Successfully started [nginx-proxy] container on host [18.220.70.253] INFO[0105] [worker] Successfully started [nginx-proxy] container on host [3.17.76.33] INFO[0105] [worker] Successfully started [nginx-proxy] container on host [18.188.214.137] INFO[0106] [worker] Successfully started [rke-log-linker] container on host [18.220.70.253] INFO[0106] [worker] Successfully started [rke-log-linker] 
container on host [3.17.76.33] INFO[0106] [worker] Successfully started [rke-log-linker] container on host [18.188.214.137] INFO[0106] [remove/rke-log-linker] Successfully removed container on host [3.17.76.33] INFO[0106] [remove/rke-log-linker] Successfully removed container on host [18.220.70.253] INFO[0106] [remove/rke-log-linker] Successfully removed container on host [18.188.214.137] INFO[0106] [worker] Successfully started [kubelet] container on host [3.17.76.33] INFO[0106] [healthcheck] Start Healthcheck on service [kubelet] on host [3.17.76.33] INFO[0107] [worker] Successfully started [kubelet] container on host [18.220.70.253] INFO[0107] [healthcheck] Start Healthcheck on service [kubelet] on host [18.220.70.253] INFO[0107] [worker] Successfully started [kubelet] container on host [18.188.214.137] INFO[0107] [healthcheck] Start Healthcheck on service [kubelet] on host [18.188.214.137] INFO[0111] [healthcheck] service [kubelet] on host [18.220.62.6] is healthy INFO[0111] [healthcheck] service [kubelet] on host [18.217.96.12] is healthy INFO[0111] [healthcheck] service [kubelet] on host [3.14.102.175] is healthy INFO[0111] [worker] Successfully started [rke-log-linker] container on host [18.220.62.6] INFO[0111] [worker] Successfully started [rke-log-linker] container on host [3.14.102.175] INFO[0111] [worker] Successfully started [rke-log-linker] container on host [18.217.96.12] INFO[0112] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] INFO[0112] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] INFO[0112] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] INFO[0112] [worker] Successfully started [kube-proxy] container on host [18.220.62.6] INFO[0112] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.220.62.6] INFO[0112] [worker] Successfully started [kube-proxy] container on host [18.217.96.12] INFO[0112] [healthcheck] Start Healthcheck on service 
[kube-proxy] on host [18.217.96.12] INFO[0112] [worker] Successfully started [kube-proxy] container on host [3.14.102.175] INFO[0112] [healthcheck] Start Healthcheck on service [kube-proxy] on host [3.14.102.175] INFO[0113] [healthcheck] service [kube-proxy] on host [18.220.62.6] is healthy INFO[0113] [healthcheck] service [kube-proxy] on host [18.217.96.12] is healthy INFO[0113] [healthcheck] service [kube-proxy] on host [3.14.102.175] is healthy INFO[0113] [healthcheck] service [kubelet] on host [18.220.70.253] is healthy INFO[0113] [healthcheck] service [kubelet] on host [3.17.76.33] is healthy INFO[0113] [healthcheck] service [kubelet] on host [18.188.214.137] is healthy INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.220.62.6] INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.217.96.12] INFO[0113] [worker] Successfully started [rke-log-linker] container on host [3.14.102.175] INFO[0113] [worker] Successfully started [rke-log-linker] container on host [3.17.76.33] INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.220.70.253] INFO[0113] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.188.214.137] INFO[0113] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] INFO[0113] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] INFO[0114] [remove/rke-log-linker] Successfully removed container on host [3.17.76.33] INFO[0114] [remove/rke-log-linker] Successfully removed container on host [18.220.70.253] INFO[0114] [remove/rke-log-linker] Successfully removed container on host [18.188.214.137] INFO[0114] [worker] Successfully started [kube-proxy] container on host [3.17.76.33] INFO[0114] [healthcheck] Start Healthcheck on service [kube-proxy] on host [3.17.76.33] INFO[0114] [worker] Successfully started [kube-proxy] 
container on host [18.220.70.253] INFO[0114] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.220.70.253] INFO[0114] [worker] Successfully started [kube-proxy] container on host [18.188.214.137] INFO[0114] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.188.214.137] INFO[0114] [healthcheck] service [kube-proxy] on host [18.220.70.253] is healthy INFO[0114] [healthcheck] service [kube-proxy] on host [3.17.76.33] is healthy INFO[0115] [healthcheck] service [kube-proxy] on host [18.188.214.137] is healthy INFO[0115] [worker] Successfully started [rke-log-linker] container on host [18.220.70.253] INFO[0115] [worker] Successfully started [rke-log-linker] container on host [3.17.76.33] INFO[0115] [worker] Successfully started [rke-log-linker] container on host [18.188.214.137] INFO[0115] [remove/rke-log-linker] Successfully removed container on host [18.220.70.253] INFO[0115] [remove/rke-log-linker] Successfully removed container on host [3.17.76.33] INFO[0115] [remove/rke-log-linker] Successfully removed container on host [18.188.214.137] INFO[0115] [worker] Successfully started Worker Plane.. 
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.188.214.137] INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [3.17.76.33] INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.220.70.253] INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.220.62.6] INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.217.96.12] INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [3.14.102.175] INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [3.17.76.33] INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.188.214.137] INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.220.62.6] INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.220.70.253] INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.217.96.12] INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [3.14.102.175] INFO[0116] [sync] Syncing nodes Labels and Taints INFO[0117] [sync] Successfully synced nodes Labels and Taints INFO[0117] [network] Setting up network plugin: canal INFO[0117] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes INFO[0117] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes INFO[0117] [addons] Executing deploy job rke-network-plugin INFO[0122] [addons] Setting up kube-dns INFO[0122] [addons] Saving ConfigMap for addon rke-kube-dns-addon to Kubernetes INFO[0122] [addons] Successfully saved ConfigMap for addon rke-kube-dns-addon to Kubernetes INFO[0122] [addons] Executing deploy job rke-kube-dns-addon INFO[0127] [addons] kube-dns deployed successfully INFO[0127] [dns] DNS provider kube-dns deployed successfully INFO[0127] [addons] Setting up Metrics Server INFO[0127] [addons] Saving ConfigMap for addon rke-metrics-addon to 
Kubernetes
INFO[0127] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0127] [addons] Executing deploy job rke-metrics-addon
INFO[0132] [addons] Metrics Server deployed successfully
INFO[0132] [ingress] Setting up nginx ingress controller
INFO[0132] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0132] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0132] [addons] Executing deploy job rke-ingress-controller
INFO[0137] [ingress] ingress controller nginx deployed successfully
INFO[0137] [addons] Setting up user addons
INFO[0137] [addons] no user addons defined
INFO[0137] Finished building Kubernetes cluster successfully

# finish kubectl install
sudo curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
sudo mkdir ~/.kube

# finish helm
# https://github.com/helm/helm/releases
# there is no helm 2.12.5 - last is 2.12.3 - trying 2.13.1
wget http://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
sudo tar -zxvf helm-v2.13.1-linux-amd64.tar.gz
sudo cp kube_config_cluster.yml ~/.kube/config
sudo chmod 777 ~/.kube/config

# test
ubuntu@ip-172-31-38-182:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE       NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
ingress-nginx   default-http-backend-78fccfc5d9-f6z25   1/1     Running   0          17m   10.42.5.2        18.220.70.253    <none>           <none>
ingress-nginx   nginx-ingress-controller-2zxs7          1/1     Running   0          17m   18.188.214.137   18.188.214.137   <none>           <none>
ingress-nginx   nginx-ingress-controller-6b7gs          1/1     Running   0          17m   3.17.76.33       3.17.76.33       <none>           <none>
ingress-nginx   nginx-ingress-controller-nv4qg          1/1     Running   0          17m   18.220.70.253    18.220.70.253    <none>           <none>
kube-system     canal-48579                             2/2     Running   0          17m   18.220.62.6      18.220.62.6      <none>           <none>
kube-system     canal-6skkm                             2/2     Running   0          17m   18.188.214.137   18.188.214.137   <none>           <none>
kube-system     canal-9xmxv                             2/2     Running   0          17m   18.217.96.12     18.217.96.12     <none>           <none>
kube-system     canal-c582x                             2/2     Running   0          17m   18.220.70.253    18.220.70.253    <none>           <none>
kube-system     canal-whbck                             2/2     Running   0          17m   3.14.102.175     3.14.102.175     <none>           <none>
kube-system     canal-xbbnh                             2/2     Running   0          17m   3.17.76.33       3.17.76.33       <none>           <none>
kube-system     kube-dns-58bd5b8dd7-6mcm7               3/3     Running   0          17m   10.42.3.3        3.17.76.33       <none>           <none>
kube-system     kube-dns-58bd5b8dd7-cd5dg               3/3     Running   0          17m   10.42.4.2        18.188.214.137   <none>           <none>
kube-system     kube-dns-autoscaler-77bc5fd84-p4zfd     1/1     Running   0          17m   10.42.3.2        3.17.76.33       <none>           <none>
kube-system     metrics-server-58bd5dd8d7-kftjn         1/1     Running   0          17m   10.42.3.4        3.17.76.33       <none>           <none>

# install tiller
ubuntu@ip-172-31-38-182:~$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
ubuntu@ip-172-31-38-182:~$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
ubuntu@ip-172-31-38-182:~$ helm init --service-account tiller
Creating /home/ubuntu/.helm
Creating /home/ubuntu/.helm/repository
Creating /home/ubuntu/.helm/repository/cache
Creating /home/ubuntu/.helm/repository/local
Creating /home/ubuntu/.helm/plugins
Creating /home/ubuntu/.helm/starters
Creating /home/ubuntu/.helm/cache/archive
Creating /home/ubuntu/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/ubuntu/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
ubuntu@ip-172-31-38-182:~$ kubectl -n kube-system rollout status deploy/tiller-deploy
deployment "tiller-deploy" successfully rolled out
ubuntu@ip-172-31-38-182:~$ sudo helm init --upgrade
$HELM_HOME has been configured at /home/ubuntu/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version. Happy Helming!
ubuntu@ip-172-31-38-182:~$ sudo helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
ubuntu@ip-172-31-38-182:~$ sudo helm serve &
[1] 706
ubuntu@ip-172-31-38-182:~$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
ubuntu@ip-172-31-38-182:~$ sudo helm list
ubuntu@ip-172-31-38-182:~$ sudo helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories
ubuntu@ip-172-31-38-182:~$ sudo helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879
ubuntu@ip-172-31-38-182:~$ kubectl get nodes -o wide
NAME             STATUS   ROLES               AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
18.188.214.137   Ready    worker              22m   v1.13.5   18.188.214.137   <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
18.217.96.12     Ready    controlplane,etcd   22m   v1.13.5   18.217.96.12     <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
18.220.62.6      Ready    controlplane,etcd   22m   v1.13.5   18.220.62.6      <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
18.220.70.253    Ready    worker              22m   v1.13.5   18.220.70.253    <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
3.14.102.175     Ready    controlplane,etcd   22m   v1.13.5   3.14.102.175     <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
3.17.76.33       Ready    worker              22m   v1.13.5   3.17.76.33       <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5

# install make
sudo apt-get install make -y

# install nfs/efs
ubuntu@ip-172-31-38-182:~$ sudo apt-get install nfs-common -y
ubuntu@ip-172-31-38-182:~$ sudo mkdir /dockerdata-nfs
ubuntu@ip-172-31-38-182:~$ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-5fd6ab26.efs.us-east-2.amazonaws.com:/ /dockerdata-nfs

# check
sudo nohup ./cd.sh -b master -e onap -p false -n nexus3.onap.org:10001 -f false -s 600 -c false -d false -w false -r false &
ubuntu@ip-172-31-38-182:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE       NAME                                    READY   STATUS    RESTARTS   AGE    IP               NODE             NOMINATED NODE   READINESS GATES
ingress-nginx   default-http-backend-78fccfc5d9-f6z25   1/1     Running   0          103m   10.42.5.2        18.220.70.253    <none>           <none>
ingress-nginx   nginx-ingress-controller-2zxs7          1/1     Running   0          103m   18.188.214.137   18.188.214.137   <none>           <none>
ingress-nginx   nginx-ingress-controller-6b7gs          1/1     Running   0          103m   3.17.76.33       3.17.76.33       <none>           <none>
ingress-nginx   nginx-ingress-controller-nv4qg          1/1     Running   0          103m   18.220.70.253    18.220.70.253    <none>           <none>
kube-system     canal-48579                             2/2     Running   0          103m   18.220.62.6      18.220.62.6      <none>           <none>
kube-system     canal-6skkm                             2/2     Running   0          103m   18.188.214.137   18.188.214.137   <none>           <none>
kube-system     canal-9xmxv                             2/2     Running   0          103m   18.217.96.12     18.217.96.12     <none>           <none>
kube-system     canal-c582x                             2/2     Running   0          103m   18.220.70.253    18.220.70.253    <none>           <none>
kube-system     canal-whbck
Wed Feb 20 02:27:18 2019
# install RKE
sudo wget https://github.com/rancher/rke/releases/download/v0.1.16/rke_linux-amd64
mv rke_linux-amd64 rke
sudo mv ./rke /usr/local/bin/rke
ubuntu@a-rke:~$ rke --version
rke version v0.1.16 |
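The EFS mount shown above does not survive a reboot. Assuming the same mount options and filesystem DNS name as in the example, a hypothetical /etc/fstab entry to persist it would look like this (the added _netdev option delays the mount until networking is up):

```
fs-5fd6ab26.efs.us-east-2.amazonaws.com:/ /dockerdata-nfs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0
```

After adding the line, `sudo mount -a` mounts it without a reboot.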
...
scp your private key to the box - ideally into ~/.ssh - and chmod 400 it
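ssh (and therefore RKE) will refuse a private key that is group- or world-readable. A minimal sketch of the permission step, using a throwaway stand-in file for the onap_rsa key you would scp over:

```shell
# demo stand-in for the real key you scp over (content irrelevant here)
mkdir -p "$HOME/.ssh"
touch "$HOME/.ssh/demo_onap_rsa"
# owner read-only, as required by ssh
chmod 400 "$HOME/.ssh/demo_onap_rsa"
stat -c '%a' "$HOME/.ssh/demo_onap_rsa"   # prints 400
```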
Elastic Reserved IP
get a VIP (OpenStack) or an Elastic IP (AWS) so the address survives restarts
generate cluster.yml
Code Block | ||
---|---|---|
| ||
Azure config - no need to hand-build the YAML; rke config generates it interactively
Watch the paths of your two keys
Also, don't add an "addon" until you actually have one, or the config job will fail
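For reference, when you do have an addon to deploy, RKE carries it inline in cluster.yml as a block of manifests under the addons key; a minimal hypothetical example that only creates a namespace:

```yaml
# hypothetical user addon appended to cluster.yml
addons: |-
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: example-addon
```

RKE applies this via a deploy job the next time rke up runs.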
{noformat}
ubuntu@a-rke:~$ rke config --name cluster.yml
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: ~/.ssh/onap_rsa
[+] Number of Hosts [1]:
[+] SSH Address of host (1) [none]: rke.onap.cloud
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (rke.onap.cloud) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (rke.onap.cloud) [ubuntu]:
[+] Is host (rke.onap.cloud) a Control Plane host (y/n)? [y]: y
[+] Is host (rke.onap.cloud) a Worker host (y/n)? [n]: y
[+] Is host (rke.onap.cloud) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (rke.onap.cloud) [none]:
[+] Internal IP of host (rke.onap.cloud) [none]:
[+] Docker socket path on host (rke.onap.cloud) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.11.6-rancher1]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]: no
ubuntu@a-rke:~$ sudo cat cluster.yml
# If you intened to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: rke.onap.cloud
port: "22"
internal_address: ""
role:
- controlplane
- worker
- etcd
hostname_override: ""
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/onap_rsa
labels: {}
services:
etcd:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
external_urls: []
ca_cert: ""
cert: ""
key: ""
path: ""
snapshot: null
retention: ""
creation: ""
kube-api:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
service_cluster_ip_range: 10.43.0.0/16
service_node_port_range: ""
pod_security_policy: false
kube-controller:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
cluster_cidr: 10.42.0.0/16
service_cluster_ip_range: 10.43.0.0/16
scheduler:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
kubelet:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
cluster_domain: cluster.local
infra_container_image: ""
cluster_dns_server: 10.43.0.10
fail_swap_on: false
kubeproxy:
image: ""
extra_args: {}
extra_binds: []
extra_env: []
network:
plugin: canal
options: {}
authentication:
strategy: x509
options: {}
sans: []
system_images:
etcd: rancher/coreos-etcd:v3.2.18
alpine: rancher/rke-tools:v0.1.15
nginx_proxy: rancher/rke-tools:v0.1.15
cert_downloader: rancher/rke-tools:v0.1.15
kubernetes_services_sidecar: rancher/rke-tools:v0.1.15
kubedns: rancher/k8s-dns-kube-dns-amd64:1.14.10
dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10
kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.10
kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0
kubernetes: rancher/hyperkube:v1.11.6-rancher1
flannel: rancher/coreos-flannel:v0.10.0
flannel_cni: rancher/coreos-flannel-cni:v0.3.0
calico_node: rancher/calico-node:v3.1.3
calico_cni: rancher/calico-cni:v3.1.3
calico_controllers: ""
calico_ctl: rancher/calico-ctl:v2.0.0
canal_node: rancher/calico-node:v3.1.3
canal_cni: rancher/calico-cni:v3.1.3
canal_flannel: rancher/coreos-flannel:v0.10.0
  weave_node: weaveworks/weave-kube:2.1.2
weave_cni: weaveworks/weave-npc:2.1.2
pod_infra_container: rancher/pause-amd64:3.1
ingress: rancher/nginx-ingress-controller:0.16.2-rancher1
ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.4
metrics_server: rancher/metrics-server-amd64:v0.2.1
ssh_key_path: ~/.ssh/onap_rsa
ssh_agent_auth: false
authorization:
mode: rbac
options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
provider: ""
options: {}
node_selector: {}
extra_args: {}
cluster_name: ""
cloud_provider:
name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
address: ""
port: ""
user: ""
ssh_key: ""
ssh_key_path: ""
monitoring:
provider: ""
options: {}
{noformat}
|
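Before running rke up, a quick grep can confirm the generated cluster.yml gives the single node all three roles (a simple sanity check against the file shown above):

```shell
# the single node must carry controlplane, worker and etcd roles
grep -c -E '^\s*- (controlplane|worker|etcd)$' cluster.yml   # expect 3
```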
Kubernetes Single Node Developer Installation
Code Block | ||
---|---|---|
| ||
sudo chmod 777 cluster.yml
rke up

# install kubectl
sudo curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.11.6/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
sudo mkdir ~/.kube
sudo cp kube_config_cluster.yml ~/.kube/config
sudo chmod 777 ~/.kube/config

kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-797c5bc547-45msr     1/1     Running     0          17m
ingress-nginx   nginx-ingress-controller-dfhp8            1/1     Running     0          17m
kube-system     canal-lc6g6                               3/3     Running     0          17m
kube-system     kube-dns-7588d5b5f5-6k286                 3/3     Running     0          17m
kube-system     kube-dns-autoscaler-5db9bbb766-6slz7      1/1     Running     0          17m
kube-system     metrics-server-97bc649d5-q84tz            1/1     Running     0          17m
kube-system     rke-ingress-controller-deploy-job-5q2w7   0/1     Completed   0          17m
kube-system     rke-kubedns-addon-deploy-job-7vq49        0/1     Completed   0          17m
kube-system     rke-metrics-addon-deploy-job-2hnbl        0/1     Completed   0          17m
kube-system     rke-network-plugin-deploy-job-6fzt2       0/1     Completed   0          17m
|
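To confirm the single node came up healthy, a few standard kubectl checks (nothing site-specific assumed):

```shell
kubectl get nodes -o wide        # the single node should report Ready
# list anything not yet Running/Completed - an empty result means a healthy cluster
kubectl get pods --all-namespaces | grep -v -E 'Running|Completed|STATUS' || true
kubectl -n kube-system get svc kube-dns   # ClusterIP should match the 10.43.0.10 set in cluster.yml
```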
Kubernetes HA Cluster Production Installation
Design Issues
DI 20190225-1: RKE/Docker version pair
As of 20190215 RKE 0.16 supports Docker 18.06-ce (and 18.09 non-ce) (up from 0.15 supporting 17.03)
https://github.com/docker/docker-ce/releases/tag/v18.06.3-ce
https://github.com/rancher/rke/releases/tag/v0.1.16
Code Block | ||
---|---|---|
| ||
ubuntu@a-rke:~$ sudo rke up
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [rke.onap.cloud]
FATA[0000] Unsupported Docker version found [18.06.3-ce], supported versions are [1.11.x 1.12.x 1.13.x 17.03.x]
|
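A guard like the following can catch the mismatch before rke up fails; this is a sketch - check the RKE release notes for the exact version pair your RKE build supports (0.1.16 expects 18.06.x):

```shell
#!/bin/sh
# fail fast if the local Docker daemon is outside the range RKE 0.1.16 supports
supported_prefix="18.06"
installed=$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo "none")
case "$installed" in
  "${supported_prefix}"*) echo "docker $installed - ok for rke 0.1.16" ;;
  *) echo "docker $installed - NOT in the ${supported_prefix}.x line; rke up will fail" ;;
esac
```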
DI 20190225-2: RKE upgrade from 0.15 to 0.16 - not working
Do rke remove, regenerate the yaml (or hand-upgrade the version fields), then rke up
Code Block | ||
---|---|---|
| ||
ubuntu@a-rke:~$ sudo rke remove
Are you sure you want to remove Kubernetes cluster [y/n]: y
INFO[0002] Tearing down Kubernetes cluster
INFO[0002] [dialer] Setup tunnel for host [rke.onap.cloud]
INFO[0002] [worker] Tearing down Worker Plane..
INFO[0002] [remove/kubelet] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [remove/kube-proxy] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [remove/service-sidekick] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [worker] Successfully tore down Worker Plane..
INFO[0003] [controlplane] Tearing down the Controller Plane..
INFO[0003] [remove/kube-apiserver] Successfully removed container on host [rke.onap.cloud]
INFO[0003] [remove/kube-controller-manager] Successfully removed container on host [rke.onap.cloud]
INFO[0004] [remove/kube-scheduler] Successfully removed container on host [rke.onap.cloud]
INFO[0004] [controlplane] Host [rke.onap.cloud] is already a worker host, skipping delete kubelet and kubeproxy.
INFO[0004] [controlplane] Successfully tore down Controller Plane..
INFO[0004] [etcd] Tearing down etcd plane..
INFO[0004] [remove/etcd] Successfully removed container on host [rke.onap.cloud]
INFO[0004] [etcd] Successfully tore down etcd plane..
INFO[0004] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0004] [hosts] Running cleaner container on host [rke.onap.cloud]
INFO[0005] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud]
INFO[0005] [hosts] Removing cleaner container on host [rke.onap.cloud]
INFO[0005] [hosts] Removing dead container logs on host [rke.onap.cloud]
INFO[0006] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud]
INFO[0006] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [hosts] Successfully cleaned up host [rke.onap.cloud]
INFO[0006] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0006] [hosts] Running cleaner container on host [rke.onap.cloud]
INFO[0007] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud]
INFO[0008] [hosts] Removing cleaner container on host [rke.onap.cloud]
INFO[0008] [hosts] Removing dead container logs on host [rke.onap.cloud]
INFO[0008] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud]
INFO[0009] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud]
INFO[0009] [hosts] Successfully cleaned up host [rke.onap.cloud]
INFO[0009] [hosts] Cleaning up host [rke.onap.cloud]
INFO[0009] [hosts] Running cleaner container on host [rke.onap.cloud]
INFO[0010] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud]
INFO[0010] [hosts] Removing cleaner container on host [rke.onap.cloud]
INFO[0010] [hosts] Removing dead container logs on host [rke.onap.cloud]
INFO[0011] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud]
INFO[0011] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud]
INFO[0011] [hosts] Successfully cleaned up host [rke.onap.cloud]
INFO[0011] Removing local admin Kubeconfig: ./kube_config_cluster.yml
INFO[0011] Local admin Kubeconfig removed successfully
INFO[0011] Cluster removed successfully

ubuntu@a-rke:~$ rke config --name cluster.ym
ubuntu@a-rke:~$ sudo rke up
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [rke.onap.cloud]
INFO[0000] [network] Deploying port listener containers
INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [rke.onap.cloud]
INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [rke.onap.cloud]
INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [rke.onap.cloud]
INFO[0002] [network] Port listener containers deployed successfully
INFO[0002] [network] Running control plane -> etcd port checks
INFO[0003] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud]
INFO[0003] [network] Running control plane -> worker port checks
INFO[0004] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud]
INFO[0004] [network] Running workers -> control plane port checks
INFO[0005] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud]
INFO[0005] [network] Checking KubeAPI port Control Plane hosts
INFO[0005] [network] Removing port listener containers
INFO[0005] [remove/rke-etcd-port-listener] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [remove/rke-cp-port-listener] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [remove/rke-worker-port-listener] Successfully removed container on host [rke.onap.cloud]
INFO[0006] [network] Port listener containers removed successfully
INFO[0006] [certificates] Attempting to recover certificates from backup on [etcd,controlPlane] hosts
INFO[0007] [certificates] No Certificate backup found on [etcd,controlPlane] hosts
INFO[0007] [certificates] Generating CA kubernetes certificates
INFO[0007] [certificates] Generating Kubernetes API server certficates
INFO[0008] [certificates] Generating Kube Controller certificates
INFO[0008] [certificates] Generating Kube Scheduler certificates
INFO[0008] [certificates] Generating Kube Proxy certificates
INFO[0009] [certificates] Generating Node certificate
INFO[0009] [certificates] Generating admin certificates and kubeconfig
INFO[0009] [certificates] Generating etcd-rke.onap.cloud certificate and key
INFO[0009] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
INFO[0009] [certificates] Generating Kubernetes API server proxy client certificates
INFO[0010] [certificates] Temporarily saving certs to [etcd,controlPlane] hosts
INFO[0016] [certificates] Saved certs to [etcd,controlPlane] hosts
INFO[0016] [reconcile] Reconciling cluster state
INFO[0016] [reconcile] This is newly generated cluster
INFO[0016] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0022] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0022] Pre-pulling kubernetes images
INFO[0022] Kubernetes images pulled successfully
INFO[0022] [etcd] Building up etcd plane..
INFO[0023] [etcd] Successfully started [etcd] container on host [rke.onap.cloud]
INFO[0023] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [rke.onap.cloud]
INFO[0028] [certificates] Successfully started [rke-bundle-cert] container on host [rke.onap.cloud]
INFO[0029] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [rke.onap.cloud]
INFO[0029] [etcd] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0030] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0030] [etcd] Successfully started etcd plane..
INFO[0030] [controlplane] Building up Controller Plane..
INFO[0031] [controlplane] Successfully started [kube-apiserver] container on host [rke.onap.cloud]
INFO[0031] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [rke.onap.cloud]
INFO[0045] [healthcheck] service [kube-apiserver] on host [rke.onap.cloud] is healthy
INFO[0046] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0046] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0047] [controlplane] Successfully started [kube-controller-manager] container on host [rke.onap.cloud]
INFO[0047] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [rke.onap.cloud]
INFO[0052] [healthcheck] service [kube-controller-manager] on host [rke.onap.cloud] is healthy
INFO[0053] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0053] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0054] [controlplane] Successfully started [kube-scheduler] container on host [rke.onap.cloud]
INFO[0054] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [rke.onap.cloud]
INFO[0059] [healthcheck] service [kube-scheduler] on host [rke.onap.cloud] is healthy
INFO[0060] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0060] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0060] [controlplane] Successfully started Controller Plane..
INFO[0060] [authz] Creating rke-job-deployer ServiceAccount
INFO[0060] [authz] rke-job-deployer ServiceAccount created successfully
INFO[0060] [authz] Creating system:node ClusterRoleBinding
INFO[0060] [authz] system:node ClusterRoleBinding created successfully
INFO[0060] [certificates] Save kubernetes certificates as secrets
INFO[0060] [certificates] Successfully saved certificates as kubernetes secret [k8s-certs]
INFO[0060] [state] Saving cluster state to Kubernetes
INFO[0061] [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state
INFO[0061] [state] Saving cluster state to cluster nodes
INFO[0061] [state] Successfully started [cluster-state-deployer] container on host [rke.onap.cloud]
INFO[0062] [remove/cluster-state-deployer] Successfully removed container on host [rke.onap.cloud]
INFO[0062] [worker] Building up Worker Plane..
INFO[0062] [remove/service-sidekick] Successfully removed container on host [rke.onap.cloud]
INFO[0063] [worker] Successfully started [kubelet] container on host [rke.onap.cloud]
INFO[0063] [healthcheck] Start Healthcheck on service [kubelet] on host [rke.onap.cloud]
INFO[0068] [healthcheck] service [kubelet] on host [rke.onap.cloud] is healthy
INFO[0069] [worker] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0070] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0070] [worker] Successfully started [kube-proxy] container on host [rke.onap.cloud]
INFO[0070] [healthcheck] Start Healthcheck on service [kube-proxy] on host [rke.onap.cloud]
INFO[0076] [healthcheck] service [kube-proxy] on host [rke.onap.cloud] is healthy
INFO[0076] [worker] Successfully started [rke-log-linker] container on host [rke.onap.cloud]
INFO[0077] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud]
INFO[0077] [worker] Successfully started Worker Plane..
INFO[0077] [sync] Syncing nodes Labels and Taints
INFO[0077] [sync] Successfully synced nodes Labels and Taints
INFO[0077] [network] Setting up network plugin: canal
INFO[0077] [addons] Saving addon ConfigMap to Kubernetes
INFO[0077] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin
INFO[0077] [addons] Executing deploy job..
INFO[0082] [addons] Setting up KubeDNS
INFO[0082] [addons] Saving addon ConfigMap to Kubernetes
INFO[0082] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon
INFO[0082] [addons] Executing deploy job..
INFO[0087] [addons] KubeDNS deployed successfully..
INFO[0087] [addons] Setting up Metrics Server
INFO[0087] [addons] Saving addon ConfigMap to Kubernetes
INFO[0087] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-metrics-addon
INFO[0087] [addons] Executing deploy job..
INFO[0092] [addons] KubeDNS deployed successfully..
INFO[0092] [ingress] Setting up nginx ingress controller
INFO[0092] [addons] Saving addon ConfigMap to Kubernetes
INFO[0092] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-ingress-controller
INFO[0092] [addons] Executing deploy job..
INFO[0097] [ingress] ingress controller nginx is successfully deployed
INFO[0097] [addons] Setting up user addons
INFO[0097] [addons] Checking for included user addons
WARN[0097] [addons] Unable to determine if  is a file path or url, skipping
INFO[0097] [addons] Deploying rke-user-includes-addons
INFO[0097] [addons] Saving addon ConfigMap to Kubernetes
INFO[0097] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-user-includes-addons
INFO[0097] [addons] Executing deploy job..
WARN[0128] Failed to deploy addon execute job [rke-user-includes-addons]: Failed to get job complete status: <nil>
INFO[0128] Finished building Kubernetes cluster successfully

ubuntu@a-rke:~$ sudo docker ps
CONTAINER ID  IMAGE                               COMMAND                 CREATED         STATUS         PORTS  NAMES
ec26c4bd24b5  846921f0fe0e                        "/server"               10 minutes ago  Up 10 minutes         k8s_default-http-backend_default-http-backend-797c5bc547-45msr_ingress-nginx_0eddfe19-394e-11e9-b708-000d3a0e23f3_0
f8d5db205e14  8a7739f672b4                        "/sidecar --v=2 --lo…"  10 minutes ago  Up 10 minutes         k8s_sidecar_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
490461545ae4  rancher/metrics-server-amd64        "/metrics-server --s…"  10 minutes ago  Up 10 minutes         k8s_metrics-server_metrics-server-97bc649d5-q84tz_kube-system_0c566ec8-394e-11e9-b708-000d3a0e23f3_0
aaf03b62bd41  6816817d9dce                        "/dnsmasq-nanny -v=2…"  10 minutes ago  Up 10 minutes         k8s_dnsmasq_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
58ec007db72f  55ffe31ac578                        "/kube-dns --domain=…"  10 minutes ago  Up 10 minutes         k8s_kubedns_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
0a95c06f6aa6  e183460c484d                        "/cluster-proportion…"  10 minutes ago  Up 10 minutes         k8s_autoscaler_kube-dns-autoscaler-5db9bbb766-6slz7_kube-system_08b5495c-394e-11e9-b708-000d3a0e23f3_0
968a7c99b210  rancher/pause-amd64:3.1             "/pause"                10 minutes ago  Up 10 minutes         k8s_POD_default-http-backend-797c5bc547-45msr_ingress-nginx_0eddfe19-394e-11e9-b708-000d3a0e23f3_0
69969b331e49  rancher/pause-amd64:3.1             "/pause"                10 minutes ago  Up 10 minutes         k8s_POD_metrics-server-97bc649d5-q84tz_kube-system_0c566ec8-394e-11e9-b708-000d3a0e23f3_0
baa5f03c16ff  rancher/pause-amd64:3.1             "/pause"                10 minutes ago  Up 10 minutes         k8s_POD_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
82b2a9f640cb  rancher/pause-amd64:3.1             "/pause"                10 minutes ago  Up 10 minutes         k8s_POD_kube-dns-autoscaler-5db9bbb766-6slz7_kube-system_08b5495c-394e-11e9-b708-000d3a0e23f3_0
953a4d4be0c1  df4469c42185                        "/usr/bin/dumb-init …"  10 minutes ago  Up 10 minutes         k8s_nginx-ingress-controller_nginx-ingress-controller-dfhp8_ingress-nginx_0ed3bdbf-394e-11e9-b708-000d3a0e23f3_0
cce552840749  rancher/pause-amd64:3.1             "/pause"                10 minutes ago  Up 10 minutes         k8s_POD_nginx-ingress-controller-dfhp8_ingress-nginx_0ed3bdbf-394e-11e9-b708-000d3a0e23f3_0
baa65f9c6f97  f0fad859c909                        "/opt/bin/flanneld -…"  10 minutes ago  Up 10 minutes         k8s_kube-flannel_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
1736ce68f41a  9f355e076ea7                        "/install-cni.sh"       10 minutes ago  Up 10 minutes         k8s_install-cni_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
615d3f702ee7  7eca10056c8e                        "start_runit"           10 minutes ago  Up 10 minutes         k8s_calico-node_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
1c4a702f0f18  rancher/pause-amd64:3.1             "/pause"                10 minutes ago  Up 10 minutes         k8s_POD_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
0da1cada08e1  rancher/hyperkube:v1.11.6-rancher1  "/opt/rke-tools/entr…"  10 minutes ago  Up 10 minutes         kube-proxy
57f44998f34a  rancher/hyperkube:v1.11.6-rancher1  "/opt/rke-tools/entr…"  11 minutes ago  Up 11 minutes         kubelet
50f424c4daec  rancher/hyperkube:v1.11.6-rancher1  "/opt/rke-tools/entr…"  11 minutes ago  Up 11 minutes         kube-scheduler
502d327912d9  rancher/hyperkube:v1.11.6-rancher1  "/opt/rke-tools/entr…"  11 minutes ago  Up 11 minutes         kube-controller-manager
9fc706bbf3a5  rancher/hyperkube:v1.11.6-rancher1  "/opt/rke-tools/entr…"  11 minutes ago  Up 11 minutes         kube-apiserver
2e7630c2047c  rancher/coreos-etcd:v3.2.18         "/usr/local/bin/etcd…"  11 minutes ago  Up 11 minutes         etcd
fef566337eb6  rancher/rke-tools:v0.1.15           "/opt/rke-tools/rke-…"  26 minutes ago  Up 26 minutes         etcd-rolling-snapshots
|
DI 20190226-1: RKE up segmentation fault on 0.1.16 - use correct user
Code Block | ||
---|---|---|
| ||
amdocs@obriensystemsu0:~$ sudo rke up
Segmentation fault (core dumped)
# issue: the cluster.yml user was set to ubuntu instead of amdocs for this particular VM
|
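A preflight check that the exact user/key pair named in cluster.yml can actually reach Docker over ssh can save a confusing failure (the host, user and key below are this page's examples):

```shell
# show what cluster.yml actually says - this user must be able to run docker on the node
grep -E '^\s*(user|ssh_key_path):' cluster.yml

# non-interactive ssh as that user; a failure here means rke up will fail too
ssh -i ~/.ssh/onap_rsa -o BatchMode=yes ubuntu@rke.onap.cloud docker version \
  || echo "fix the user:/ssh_key_path: fields in cluster.yml before running rke up"
```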
DI 20190507: ARM support - use the RKE 0.2.1 ARM-friendly install
Jira Legacy | ||||||
---|---|---|---|---|---|---|
|
a1.4xlarge ($0.408/hr)
ami-0b9bd0b532ebcf4c9
Notes
Pre-RKE installation details in Cloud Native Deployment
...