Introduction
In the Casablanca release, the MSB project is integrating the Istio service mesh with ONAP to manage ONAP microservices. Istio is a dedicated infrastructure layer for connecting, managing, and securing microservices, which brings the following benefits:
- Stability and Reliability: Reliable communication with retries and circuit breaking
- Security: Secured communication with TLS
- Performance: Latency-aware load balancing with warm caches
- Observability: Metrics measurement and distributed tracing without instrumenting applications
- Manageability: Enforcement of routing rules and rate limits
- Testability: Fault injection to test the resilience of services
Installation
Download installation scripts from ONAP Gerrit:
```bash
git clone https://gerrit.onap.org/r/msb/service-mesh
```
Kubernetes Master
We need Kubernetes 1.9 or newer to enable automatic sidecar injection, so we don't have to modify every individual ONAP Kubernetes YAML deployment file to add the sidecar container, which would be inconvenient.
Istio leverages the Kubernetes webhook feature to automatically inject an Envoy sidecar into each Pod. When the Kubernetes API server receives a request to create a Pod resource, it calls the Istio sidecar injection webhook; the webhook adds an Envoy sidecar container to the Pod, and the modified Pod resource is then stored in etcd.
The webhook and the other required features have already been configured in the install scripts to enable Istio sidecar injection.
Create the Kubernetes master by running this script:
```bash
cd service-mesh/install/
./1_install_k8s_master.sh
```
This script creates a Kubernetes master node with kubeadm and installs the Calico network plugin. Other required tools such as Docker, kubectl, and Helm are installed as well.
The output of the script includes a command for joining a node to the newly created Kubernetes cluster. Note that the output below is only an example; the token and cert hash of your installation will be different. Copy the command somewhere, as we will need it later.
```
You can now join any number of machines by running the following on each node
as root:

kubeadm join 10.12.5.104:6443 --token 1x62yf.60ys5p2iw13tx2t8 --discovery-token-ca-cert-hash sha256:f06628c7cee002b262e69f3f9efadf47bdec125e19606ebff743a3e514a8383b
```
Kubernetes Worker Node
Log in to the worker node machine and run this script to create a Kubernetes worker node:
```bash
./2_install_k8s_minion.sh
```
You can now join this machine to the cluster by running the "kubeadm join" command as root:
```bash
sudo kubeadm join 10.12.5.104:6443 --token 1x62yf.60ys5p2iw13tx2t8 --discovery-token-ca-cert-hash sha256:f06628c7cee002b262e69f3f9efadf47bdec125e19606ebff743a3e514a8383b
```
Please note that this is just an example; refer to the output of "kubeadm init" from when you created the k8s master for the exact command to use in your cluster.
If you would like kubectl to talk to your k8s master, you need to copy the administrator kubeconfig file from your master to your workstation like this:
```bash
scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
```
Alternatively, you can manually copy the content of this file to ~/.kube/config if scp can't be used for security reasons.
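Once kubectl is configured, you can confirm that the worker has joined the cluster; both nodes should eventually report a Ready status:

```bash
# Quick check that the master and the worker node are registered and Ready
kubectl get nodes
```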
Istio Control Plane
Install Istio by running this script:
```bash
./3_install_istio.sh
```
This script installs the following Istio components:
- The istioctl command-line tool, in the /usr/bin directory
- The Istio control plane components, including Pilot, Citadel, and Mixer
- Addons including Servicegraph, Prometheus, Grafana, and Jaeger
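Since istioctl is installed into /usr/bin, a quick sanity check is to print its version:

```bash
istioctl version
```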
Confirm Istio was installed:
```
kubectl get svc -n istio-system
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                               AGE
grafana                    NodePort       10.109.190.71    <none>        3000:30300/TCP                                                        20m
istio-citadel              ClusterIP      10.106.185.181   <none>        8060/TCP,9093/TCP                                                     20m
istio-egressgateway        ClusterIP      10.102.224.133   <none>        80/TCP,443/TCP                                                        20m
istio-ingressgateway       LoadBalancer   10.100.168.32    <pending>     80:31380/TCP,443:31390/TCP,31400:31400/TCP                            20m
istio-pilot                ClusterIP      10.101.64.153    <none>        15003/TCP,15005/TCP,15007/TCP,15010/TCP,15011/TCP,8080/TCP,9093/TCP   20m
istio-policy               ClusterIP      10.104.11.162    <none>        9091/TCP,15004/TCP,9093/TCP                                           20m
istio-sidecar-injector     ClusterIP      10.100.229.40    <none>        443/TCP                                                               20m
istio-statsd-prom-bridge   ClusterIP      10.107.27.91     <none>        9102/TCP,9125/UDP                                                     20m
istio-telemetry            ClusterIP      10.101.153.114   <none>        9091/TCP,15004/TCP,9093/TCP,42422/TCP                                 20m
prometheus                 ClusterIP      10.103.0.205     <none>        9090/TCP                                                              20m
servicegraph               NodePort       10.106.49.168    <none>        8088:30088/TCP                                                        20m
tracing                    LoadBalancer   10.100.158.236   <pending>     80:30188/TCP                                                          20m
zipkin                     NodePort       10.96.164.255    <none>        9411:30411/TCP                                                        20m
```
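You can also check that the Istio control plane pods have come up; all of them should eventually reach the Running (or Completed) status:

```bash
kubectl get pods -n istio-system
```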
Sidecar Injection
In the transition phase, the Istio sidecar injector policy is configured as "disabled" when installing Istio, so the sidecar injector will not inject the sidecar into pods by default. Add the `sidecar.istio.io/inject` annotation with the value `true` to the pod template spec to enable injection.
Example:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ include "common.fullname" . }}
  namespace: {{ include "common.namespace" . }}
  labels:
    app: {{ include "common.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: multicloud-vio
  template:
    metadata:
      labels:
        app: {{ include "common.name" . }}
        release: {{ .Release.Name }}
      name: {{ include "common.name" . }}
      annotations:
        sidecar.istio.io/inject: "{{.Values.istioSidecar}}"
```
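The template above reads the annotation value from `.Values.istioSidecar`. A minimal sketch of the matching values entry, assuming the usual Helm chart layout (the flag name comes from the template; the file location is an assumption):

```bash
# Hypothetical example: add the injection flag to the chart's values.yaml,
# which the pod template annotation above consumes via .Values.istioSidecar
cat >> values.yaml <<'EOF'
istioSidecar: true
EOF
```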
Note: when all ONAP projects are ready for Istio integration, the Istio sidecar injector policy can be configured as "enabled"; the annotation in the pod will then no longer be necessary.
Create the onap namespace and enable Istio sidecar injection on it:
```bash
kubectl create namespace onap
kubectl label namespace onap istio-injection=enabled
```
Confirm that auto sidecar injection has been enabled on the onap namespace.
```
kubectl get namespace -L istio-injection
NAME           STATUS    AGE       ISTIO-INJECTION
default        Active    20m
istio-system   Active    10m
kube-public    Active    20m
kube-system    Active    20m
onap           Active    8s        enabled
```
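If injection does not seem to take effect, you can also verify that the injection webhook itself is registered; the webhook name below is Istio's default and may differ in your installation:

```bash
# The mutating webhook that performs sidecar injection (default Istio name)
kubectl get mutatingwebhookconfiguration istio-sidecar-injector
```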
Start a local Helm repository server and add it to the Helm repository list:
```bash
helm serve &
helm repo add local http://127.0.0.1:8879
```
Download the OOM Gerrit repository and build the Helm charts:
```bash
git clone -b beijing http://gerrit.onap.org/r/oom
cd oom/kubernetes
make all
```
Confirm that ONAP charts have been successfully created.
```
helm search onap
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
local/onap              2.0.0           beijing         Open Network Automation Platform (ONAP)
local/aaf               2.0.0                           ONAP Application Authorization Framework
local/aai               2.0.0                           ONAP Active and Available Inventory
local/clamp             2.0.0                           ONAP Clamp
local/cli               2.0.0                           ONAP Command Line Interface
local/consul            2.0.0                           ONAP Consul Agent
local/dcaegen2          2.0.0                           ONAP DCAE Gen2
local/dmaap             2.0.0                           ONAP DMaaP components
local/esr               2.0.0                           ONAP External System Register
local/log               2.0.0                           ONAP Logging ElasticStack
local/msb               2.0.0                           ONAP MicroServices Bus
local/multicloud        2.0.0                           ONAP multicloud broker
local/nbi               2.0.0                           ONAP Northbound Interface
local/oof               2.0.0                           ONAP Optimization Framework
local/policy            2.0.0                           ONAP Policy Administration Point
local/portal            2.0.0                           ONAP Web Portal
local/postgres          2.0.0                           ONAP Postgres Server
local/robot             2.0.0                           A helm Chart for kubernetes-ONAP Robot
local/sdnc-prom         2.0.0                           ONAP SDNC Policy Driven Ownership Management
local/sniro-emulator    2.0.0                           ONAP Mock Sniro Emulator
local/so                2.0.0                           ONAP Service Orchestrator
local/uui               2.0.0                           ONAP uui
local/vfc               2.0.0                           ONAP Virtual Function Controller (VF-C)
local/vid               2.0.0                           ONAP Virtual Infrastructure Deployment
local/vnfsdk            2.0.0                           ONAP VNF SDK
```
Install the local/onap chart. This chart performs some initialization setup needed by the ONAP components, such as creating service accounts.
```bash
cd oom/kubernetes
helm install local/onap -n common --namespace onap -f onap/resources/environments/disable-allcharts.yaml
```
In Casablanca, the MSB project is working with VF-C and MultiCloud as pilot projects; we would like to roll the integration out to the other ONAP projects after verifying the integration and the Istio features.
```bash
helm install local/msb -n msb --namespace onap
helm install local/vfc -n vfc --namespace onap
helm install local/multicloud -n multicloud --namespace onap
```
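A quick way to see what has been deployed so far is to list the Helm releases; with the commands above you should see the common, msb, vfc, and multicloud releases:

```bash
helm ls
```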
Note that you can also install other ONAP projects with helm install if they are needed, but the Istio sidecar will not be injected into their Pods by default.
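For example, to bring up the SO project as well (the chart name comes from the helm search output above; the release name is just an example):

```bash
helm install local/so -n so --namespace onap
```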
Confirm that the ONAP microservices have been started:
```
kubectl get all -n onap
NAME                                                   READY     STATUS    RESTARTS   AGE
pod/msb-kube2msb-77ccb675dd-rhfn7                      1/1       Running   0          3h
pod/msb-msb-consul-646987f5cf-qms5v                    2/2       Running   0          3h
pod/msb-msb-discovery-7647f6476f-cl6xw                 3/3       Running   0          3h
pod/msb-msb-eag-d678c65d6-fmfn6                        3/3       Running   0          3h
pod/msb-msb-iag-647d5f998c-dc766                       3/3       Running   0          3h
pod/multicloud-multicloud-5679bd9876-tzxzw             2/2       Running   0          1h
pod/multicloud-multicloud-ocata-774579596-f7smf        3/3       Running   0          1h
pod/multicloud-multicloud-vio-8c7dbc8d5-lfcw6          3/3       Running   0          1h
pod/multicloud-multicloud-windriver-85b595675d-5vx45   3/3       Running   0          1h
pod/vfc-vfc-catalog-79764dfd8f-rkx6f                   2/2       Running   1          2d
pod/vfc-vfc-ems-driver-75bc68b946-6r6r6                1/1       Running   1          2d
pod/vfc-vfc-generic-vnfm-driver-69bf778bfd-pscjn       2/2       Running   0          2d
pod/vfc-vfc-huawei-vnfm-driver-8574569f4c-8jwc4        2/2       Running   1          2d
pod/vfc-vfc-juju-vnfm-driver-6dfd876bb8-bh7dq          2/2       Running   0          2d
pod/vfc-vfc-multivim-proxy-58c7bd47dc-7qdtd            1/1       Running   0          2d
pod/vfc-vfc-nokia-v2vnfm-driver-7b77c469bd-krfrw       1/1       Running   0          2d
pod/vfc-vfc-nokia-vnfm-driver-98fbdb5b5-p9zqw          2/2       Running   0          2d
pod/vfc-vfc-nslcm-74956bb876-v9kbt                     2/2       Running   0          2d
pod/vfc-vfc-resmgr-57dc4c98b5-dzp7f                    2/2       Running   0          2d
pod/vfc-vfc-vnflcm-6f9dc7df44-hncf4                    2/2       Running   1          2d
pod/vfc-vfc-vnfmgr-5585c688c6-7qrnp                    2/2       Running   0          2d
pod/vfc-vfc-vnfres-54bc985599-9zkqn                    2/2       Running   0          2d
pod/vfc-vfc-workflow-6db56f95b9-np8tg                  1/1       Running   1          2d
pod/vfc-vfc-workflow-engine-7fb49fd974-kcb8q           1/1       Running   1          2d
pod/vfc-vfc-zte-sdnc-driver-585d449797-87nhp           1/1       Running   0          2d
pod/vfc-vfc-zte-vnfm-driver-59d4756fbc-rpn9v           2/2       Running   0          2d

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/msb-consul                NodePort    10.96.255.198    <none>        8500:30285/TCP               3h
service/msb-discovery             NodePort    10.105.163.81    <none>        10081:30281/TCP              3h
service/msb-eag                   NodePort    10.100.221.66    <none>        80:30282/TCP,443:30284/TCP   3h
service/msb-iag                   NodePort    10.96.179.117    <none>        80:30280/TCP,443:30283/TCP   3h
service/multicloud                NodePort    10.102.72.237    <none>        9001:30291/TCP               1h
service/multicloud-ocata          NodePort    10.99.131.129    <none>        9006:30293/TCP               1h
service/multicloud-vio            NodePort    10.111.175.58    <none>        9004:30292/TCP               1h
service/multicloud-windriver      NodePort    10.110.92.61     <none>        9005:30294/TCP               1h
service/vfc-catalog               ClusterIP   10.99.98.115     <none>        8806/TCP                     2d
service/vfc-ems-driver            ClusterIP   10.96.189.14     <none>        8206/TCP                     2d
service/vfc-generic-vnfm-driver   ClusterIP   10.109.48.184    <none>        8484/TCP                     2d
service/vfc-huawei-vnfm-driver    ClusterIP   10.104.208.38    <none>        8482/TCP,8483/TCP            2d
service/vfc-juju-vnfm-driver      ClusterIP   10.96.182.14     <none>        8483/TCP                     2d
service/vfc-multivim-proxy        ClusterIP   10.107.106.216   <none>        8481/TCP                     2d
service/vfc-nokia-v2vnfm-driver   ClusterIP   10.107.12.32     <none>        8089/TCP                     2d
service/vfc-nokia-vnfm-driver     ClusterIP   10.102.179.150   <none>        8486/TCP                     2d
service/vfc-nslcm                 ClusterIP   10.106.43.164    <none>        8403/TCP                     2d
service/vfc-resmgr                ClusterIP   10.98.174.184    <none>        8480/TCP                     2d
service/vfc-vnflcm                ClusterIP   10.108.132.123   <none>        8801/TCP                     2d
service/vfc-vnfmgr                ClusterIP   10.108.59.102    <none>        8803/TCP                     2d
service/vfc-vnfres                ClusterIP   10.111.85.161    <none>        8802/TCP                     2d
service/vfc-workflow              ClusterIP   10.97.184.206    <none>        10550/TCP                    2d
service/vfc-workflow-engine       ClusterIP   10.109.175.61    <none>        8080/TCP                     2d
service/vfc-zte-sdnc-driver       ClusterIP   10.103.94.142    <none>        8411/TCP                     2d
service/vfc-zte-vnfm-driver       ClusterIP   10.108.146.237   <none>        8410/TCP                     2d
```
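The READY counts such as 2/2 and 3/3 reflect the injected Envoy sidecar running next to the application containers. To confirm the sidecar in a specific pod, list its container names; the pod name below is an example taken from the output above:

```bash
# "istio-proxy" should appear among the container names
kubectl get pod vfc-vfc-nslcm-74956bb876-v9kbt -n onap -o jsonpath='{.spec.containers[*].name}'
```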
You can open the MSB portal at http://Node_IP:30280/iui/microservices/default.html in a browser to see all the registered services.
Explore Istio Features
Distributed Tracing
First, let's generate some traffic in the application: access the following URLs with curl or open them in a browser (see the example commands after the list).
http://node_ip:30280/api/multicloud/v0/swagger.json
http://node_ip:30280/api/multicloud-vio/v0/swagger.json
http://node_ip:30280/api/multicloud-ocata/v0/swagger.json
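For example, with curl (replace node_ip with the IP of one of your cluster nodes; 30280 is the msb-iag NodePort shown earlier):

```bash
curl http://<node_ip>:30280/api/multicloud/v0/swagger.json
curl http://<node_ip>:30280/api/multicloud-vio/v0/swagger.json
curl http://<node_ip>:30280/api/multicloud-ocata/v0/swagger.json
```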
Then open your browser at http://tracing_node_ip:tracing_node_port/; you should see something similar to the following:
Note
- The tracing_node_port can be found with 'kubectl get svc -n istio-system'.
- ONAP microservices need to propagate the appropriate HTTP headers so that when the proxies send span information, the spans can be correlated correctly into a single trace.
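For instance, a one-liner to look up the NodePort of the tracing service (the service name matches the istio-system service listing above):

```bash
kubectl get svc tracing -n istio-system -o jsonpath='{.spec.ports[0].nodePort}'
```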
Service Graph
Istio provides a Servicegraph service which generates and visualizes graph representations of the services in the mesh.
Open your browser at http://node_ip:30088/dotviz or http://node_ip:30088/force/forcegraph.html, and you should see the service graph:
Metrics Visualization
Istio automatically gathers telemetry for services in the mesh. A Prometheus adapter is plugged into Mixer to serve the generated metric data, and a Grafana addon is pre-configured with a Prometheus data source and an Istio dashboard for metrics visualization.
Open your browser at http://node_ip:30300, and you should see the Grafana Istio dashboard: