Introduction
In the Casablanca release, the MSB project is integrating Istio Service Mesh with ONAP to manage ONAP microservices. Istio Service Mesh is a dedicated infrastructure layer that connects, manages, and secures microservices, bringing the following benefits:
- Stability and Reliability: Reliable communication with retries and circuit breaking
- Security: Secured communication with TLS
- Performance: Latency-aware load balancing with warm cache
- Observability: Metrics measurement and distributed tracing without instrumenting the application
- Manageability: Routing rule and rate limiting enforcement
- Testability: Fault injection to test the resilience of the services
Installation
Currently, the installation scripts are hosted on GitHub; they will be moved to ONAP Gerrit once the requested repo is created.
Download the installation scripts with git clone:
git clone https://github.com/zhaohuabing/istio-install-scripts.git
Kubernetes Master
We need Kubernetes 1.9 or newer to enable automatic sidecar injection; otherwise we would have to modify every individual ONAP Kubernetes YAML deployment file to add the sidecar container, which would be inconvenient.
Istio leverages the webhook feature of Kubernetes to automatically inject an Envoy sidecar into each Pod. When the Kubernetes API server receives a request to create a Pod resource, it calls the Istio sidecar injection webhook; the webhook adds an Envoy sidecar container to the Pod, and the modified Pod resource is then stored in etcd.
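Conceptually, this hook-up is described by a MutatingWebhookConfiguration resource. The fragment below is a simplified sketch, not the exact resource the install scripts create (field names and the selector may differ between Istio versions); it shows how the API server is told to call Istio's injection service for Pod creation requests in labeled namespaces:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: istio-sidecar-injector
webhooks:
- name: sidecar-injector.istio.io
  clientConfig:
    # The in-cluster service that performs the injection
    service:
      name: istio-sidecar-injector
      namespace: istio-system
      path: "/inject"
  rules:
  # Invoke the webhook whenever a Pod is created
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  # Only namespaces labeled istio-injection=enabled are considered
  namespaceSelector:
    matchLabels:
      istio-injection: enabled
```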
The webhook and other required features are already configured by the install scripts.
Create the Kubernetes master by running this script:
cd istio-install-scripts
./1_install_k8s_master.sh
This script creates a Kubernetes master node with kubeadm and installs the Calico network plugin. Other needed tools, such as Docker, kubectl, and Helm, are also installed.
The output of the script includes a command for joining a node to the newly created Kubernetes cluster. Note that the command below is an example; the token and cert hash of your installation will be different. Copy the command somewhere safe, as we will need it later.
You can now join any number of machines by running the following on each node as root:
kubeadm join 10.12.5.104:6443 --token 1x62yf.60ys5p2iw13tx2t8 --discovery-token-ca-cert-hash sha256:f06628c7cee002b262e69f3f9efadf47bdec125e19606ebff743a3e514a8383b
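Since the join command is needed again on every worker node, one way to keep it around is to save it to a small script. This is a sketch using the example values from this document; substitute the address, token, and hash from your own "kubeadm init" output:

```shell
# Save the join command printed by "kubeadm init" so it can be run on
# each worker later. These are the example values from this document;
# yours will differ.
JOIN_CMD='kubeadm join 10.12.5.104:6443 --token 1x62yf.60ys5p2iw13tx2t8 --discovery-token-ca-cert-hash sha256:f06628c7cee002b262e69f3f9efadf47bdec125e19606ebff743a3e514a8383b'
echo "$JOIN_CMD" > /tmp/kubeadm-join.sh
chmod +x /tmp/kubeadm-join.sh
```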
Kubernetes Worker Node
Log in to the worker node machine and run this script to create a Kubernetes worker node:
./2_install_k8s_minion.sh
You can now join this machine by running the "kubeadm join" command as root:
sudo kubeadm join 10.12.5.104:6443 --token 1x62yf.60ys5p2iw13tx2t8 --discovery-token-ca-cert-hash sha256:f06628c7cee002b262e69f3f9efadf47bdec125e19606ebff743a3e514a8383b
Please note that this is just an example; refer to the output of "kubeadm init" when creating the k8s master for the exact command to use in your k8s cluster.
If you would like kubectl on your workstation to talk to your k8s master, copy the administrator kubeconfig file from the master to your workstation like this:
scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
Alternatively, you can manually copy the content of this file to ~/.kube/config if scp can't be used for security reasons.
Istio Control Plane
Install Istio by running this script:
./3_install_istio.sh
This script installs the following Istio components:
- The istioctl command-line tool, in the /usr/bin directory
- The Istio control plane components, including Pilot, Citadel, and Mixer
- Add-ons including Servicegraph, Prometheus, Grafana, and Jaeger
Confirm Istio was installed:
kubectl get svc -n istio-system
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                               AGE
grafana                    NodePort       10.109.190.71    <none>        3000:30300/TCP                                                        20m
istio-citadel              ClusterIP      10.106.185.181   <none>        8060/TCP,9093/TCP                                                     20m
istio-egressgateway        ClusterIP      10.102.224.133   <none>        80/TCP,443/TCP                                                        20m
istio-ingressgateway       LoadBalancer   10.100.168.32    <pending>     80:31380/TCP,443:31390/TCP,31400:31400/TCP                            20m
istio-pilot                ClusterIP      10.101.64.153    <none>        15003/TCP,15005/TCP,15007/TCP,15010/TCP,15011/TCP,8080/TCP,9093/TCP   20m
istio-policy               ClusterIP      10.104.11.162    <none>        9091/TCP,15004/TCP,9093/TCP                                           20m
istio-sidecar-injector     ClusterIP      10.100.229.40    <none>        443/TCP                                                               20m
istio-statsd-prom-bridge   ClusterIP      10.107.27.91     <none>        9102/TCP,9125/UDP                                                     20m
istio-telemetry            ClusterIP      10.101.153.114   <none>        9091/TCP,15004/TCP,9093/TCP,42422/TCP                                 20m
prometheus                 ClusterIP      10.103.0.205     <none>        9090/TCP                                                              20m
servicegraph               NodePort       10.106.49.168    <none>        8088:30088/TCP                                                        20m
tracing                    LoadBalancer   10.100.158.236   <pending>     80:30188/TCP                                                          20m
zipkin                     NodePort       10.96.164.255    <none>        9411:30411/TCP                                                        20m
Sidecar Injection
In the transition phase, the Istio sidecar injector policy is configured as "disabled" when installing Istio, so the sidecar injector will not inject the sidecar into pods by default. Add the `sidecar.istio.io/inject` annotation with value `"true"` to the pod template spec to enable injection.
Note: when all ONAP projects are ready for Istio integration, the Istio sidecar injector policy can be configured as "enabled", and the per-pod annotation will no longer be necessary.
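A pod template that opts in to injection might look roughly like this. This is a minimal sketch; the Deployment name and image are placeholders, not taken from an actual ONAP chart:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
      annotations:
        # Opt this pod in to Envoy sidecar injection during the
        # transition phase, while the injector policy is "disabled"
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: example-service
        image: example/service:latest   # placeholder image
```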
Enable the Istio sidecar injection webhook for the onap namespace:
kubectl create namespace onap
kubectl label namespace onap istio-injection=enabled
Confirm that auto sidecar injection has been enabled on the onap namespace:
kubectl get namespace -L istio-injection
NAME           STATUS   AGE   ISTIO-INJECTION
default        Active   20m
istio-system   Active   10m
kube-public    Active   20m
kube-system    Active   20m
onap           Active   8s    enabled
Start a local helm repository server and add it to helm repository list:
helm serve &
helm repo add local http://127.0.0.1:8879
Clone the OOM Gerrit repository and build the helm charts:
git clone -b beijing http://gerrit.onap.org/r/oom
cd oom/kubernetes
make all
Confirm that ONAP charts have been successfully created.
helm search onap
NAME                   CHART VERSION   APP VERSION   DESCRIPTION
local/onap             2.0.0           beijing       Open Network Automation Platform (ONAP)
local/aaf              2.0.0                         ONAP Application Authorization Framework
local/aai              2.0.0                         ONAP Active and Available Inventory
local/clamp            2.0.0                         ONAP Clamp
local/cli              2.0.0                         ONAP Command Line Interface
local/consul           2.0.0                         ONAP Consul Agent
local/dcaegen2         2.0.0                         ONAP DCAE Gen2
local/dmaap            2.0.0                         ONAP DMaaP components
local/esr              2.0.0                         ONAP External System Register
local/log              2.0.0                         ONAP Logging ElasticStack
local/msb              2.0.0                         ONAP MicroServices Bus
local/multicloud       2.0.0                         ONAP multicloud broker
local/nbi              2.0.0                         ONAP Northbound Interface
local/oof              2.0.0                         ONAP Optimization Framework
local/policy           2.0.0                         ONAP Policy Administration Point
local/portal           2.0.0                         ONAP Web Portal
local/postgres         2.0.0                         ONAP Postgres Server
local/robot            2.0.0                         A helm Chart for kubernetes-ONAP Robot
local/sdnc-prom        2.0.0                         ONAP SDNC Policy Driven Ownership Management
local/sniro-emulator   2.0.0                         ONAP Mock Sniro Emulator
local/so               2.0.0                         ONAP Service Orchestrator
local/uui              2.0.0                         ONAP uui
local/vfc              2.0.0                         ONAP Virtual Function Controller (VF-C)
local/vid              2.0.0                         ONAP Virtual Infrastructure Deployment
local/vnfsdk           2.0.0                         ONAP VNF SDK
Install the local/onap chart. It performs some initialization needed by the ONAP components, such as creating service accounts.
cd oom/kubernetes
helm install local/onap -n common --namespace onap -f onap/resources/environments/disable-allcharts.yaml
In Casablanca, the MSB project is working with VF-C and MultiCloud to verify the Istio integration, so we are focusing on these three projects right now. More projects will join later.
helm install local/msb -n msb --namespace onap
helm install local/vfc -n vfc --namespace onap
helm install local/multicloud -n multicloud --namespace onap