Background
The L7 Proxy Service Mesh Controller is intended to provide connectivity, traffic shaping, policies, RBAC and
mutual TLS for applications/microservices running across clusters (with a service mesh), within a cluster,
and with external applications. The available functionality depends on the underlying service mesh technology.
Design Overview
Traffic Controller Design Internals
Internal Implementation Details
NOTE - The current implementation supports Istio as the service mesh technology, an SD-WAN load balancer, and ExternalDNS as the DNS provider. The plugin architecture of the controller makes it extensible to work with any service mesh technology and any external load balancer. It is also designed to configure and communicate with external DNS servers.
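As a rough illustration of that plugin architecture, the sketch below shows what a provider interface could look like in Go; the interface, type and method names are assumptions, not the actual controller code.

```go
package plugin

import "context"

// Intent is a placeholder for a parsed traffic intent (illustrative only).
type Intent struct {
	Name    string
	Cluster string
	Spec    map[string]interface{}
}

// MeshProvider is a hypothetical plugin interface: each service mesh
// technology (e.g. Istio) or external load balancer would implement it, so
// the controller core stays independent of the underlying technology.
type MeshProvider interface {
	// Name returns the provider identifier, e.g. "istio".
	Name() string
	// Apply renders and stores the resources required by the intent.
	Apply(ctx context.Context, intent Intent) error
	// Delete removes the resources previously created for the intent.
	Delete(ctx context.Context, intent Intent) error
}
```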
Elements of Traffic Controller with ISTIO as the service mesh
- Gateways - the inbound/outbound access point for the service mesh; implemented as an Envoy service
- VirtualServices - expose a service outside the service mesh
- DestinationRules - apply rules to the traffic flow
- AuthorizationPolicies - authorize service access
- ServiceEntries - add an external service into the mesh
- Authentication policies - authenticate external communication
These Kubernetes resources are generated per cluster; multiple instances of each may be created depending on the intents.
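To illustrate how an intent could be turned into one of these resources, here is a minimal sketch that renders a VirtualService manifest with Go's text/template; the template, field names and example values are assumptions (the actual conversion may use Admiral, as noted in the Development section).

```go
package main

import (
	"os"
	"text/template"
)

// virtualServiceTmpl is an illustrative manifest template; the fields filled
// in below would come from a traffic intent in the real controller.
const virtualServiceTmpl = `apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ .ServiceName }}
spec:
  hosts:
  - {{ .Host }}
  http:
  - route:
    - destination:
        host: {{ .ServiceName }}
        port:
          number: {{ .Port }}
`

type vsInput struct {
	ServiceName string
	Host        string
	Port        int
}

func main() {
	t := template.Must(template.New("vs").Parse(virtualServiceTmpl))
	// Example input; in the controller these values would come from an intent.
	_ = t.Execute(os.Stdout, vsInput{ServiceName: "httpbin", Host: "httpbin.example.com", Port: 8000})
}
```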
API
RESTful North API (with examples)
| Types | Intent APIs | Functionality |
|---|---|---|
| 1. micro-service to micro-service communication | /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/us-to-us-intents/ | communication between microservices deployed across two clusters |
| 2. external inbound service communication | /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/inbound-intent/ | communication from an external service to an internal microservice |
| 3. external outbound service communication | /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/outbound-intent/ | communication from an internal service to an external service |
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set
POST BODY:
{
  "name": "john",
  "description": "Traffic intent groups",
  "set": [
    { "inbound": "abc" },
    { "outbound": "abc" }
  ]
}
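For example, a client could create the intent set as follows; the controller address, project name, composite-app name and version are placeholder assumptions.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Request body taken from the example above.
	body := []byte(`{
	  "name": "john",
	  "description": "Traffic intent groups",
	  "set": [ { "inbound": "abc" }, { "outbound": "abc" } ]
	}`)

	// Placeholder host, project, composite app and version.
	url := "http://localhost:9051/v2/projects/proj1/composite-apps/app1/v1/traffic-intent-set"
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode) // expect 201 Created
}
```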
1. Micro-service communication intents (inter/intra) - edit the intent to define inbound services for a target service rather than outbound services - check the API-level access! - implement for all APIs!
POST
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/us-to-us-intents/
POST BODY:
{
  "metadata": {
    "name": "<name>",            // unique name for each intent
    "description": "connectivity intent for stateless micro-service to stateless micro-service communication",
    "userdata1": "<>",
    "userdata2": "<>"
  },
  "spec": {                      // update the memory allocation for each field as per OpenAPI standards
    "application": "<app1>",
    "servicename": "<name>",     // actual name of the client service
    "protocol": "<>",            // HTTP, HTTPS, TCP or UDP
    "headless": "false",         // default is false; option "true"
    "mutualTLS": "<>",           // supports 3 modes: SIMPLE, ISTIO_MUTUAL and MUTUAL (caCertificate required)
    "port": "<Port_Number>",     // port on which the service is exposed through the service mesh, not the port it is actually running on
    "serviceMesh": "istio",      // taken from the cluster record; currently only Istio is supported
    "istio-proxy": "<value>",    // the features (mTLS, LB, circuit breaking) are not available to services without an istio-proxy; only inbound routing is possible

    // Traffic configuration - load balancing is applied per service; the traffic to this service is distributed among the pods under it
    "loadbalancingType": "<type>",   // "simple" and "consistentHash" are the two modes
    "loadBalancerMode": "<mode>",    // modes for consistentHash: "httpHeaderName", "httpCookie", "useSourceIP", "minimumRingSize"; modes for simple: "LEAST_CONN", "ROUND_ROBIN", "RANDOM", "PASSTHROUGH"
    "httpCookie": "<CookieName>",    // name of the cookie used to maintain sticky sessions

    // Circuit breaking
    "maxConnections": "",            // connection pool for TCP and HTTP traffic
    "concurrenthttp2Requests": "",   // concurrent HTTP/2 requests which can be allowed (only for HTTP/S traffic)
    "httpRequestPerConnection": "",  // number of HTTP requests per connection; valid only for HTTP traffic
    "consecutiveErrors": "",         // default is 5; number of consecutive errors before the host is removed from the load balancing pool
    "baseEjectionTime": "",          // default is 5; time for which the host is removed from the load balancing pool when it returns errors more often than the "consecutiveErrors" limit
    "intervalSweep": "",             // time limit before removed hosts are added back to the load balancing pool
    "connectTimeout": "",            // only for TCP traffic

    // Credentials for mTLS
    "Servicecertificate": "",        // present the actual certificate here
    "ServicePrivateKey": "",         // present the actual private key here
    "caCertificate": "",             // present the trusted certificate to verify the client connection; required only when the mTLS mode is MUTUAL

    // Access control
    "namespaces": [],                // workloads from these namespaces can access the inbound service
    "serviceAccountAccess": {
      "<saName>": { "ACTION": "URI" },   // for HTTP
      "<saName>": { "PORT": "27017" }    // for TCP
    }
  }
}
RETURN STATUS: 201
RETURN BODY:
{
  "name": "<>",
  "Message": "inbound service created"
}
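A possible Go representation of this payload, derived directly from the fields listed above (the struct name and the exact field types are assumptions, not the actual controller data model):

```go
package model

// InboundIntentSpec mirrors the fields of the us-to-us intent body above.
// This is a sketch derived from the documented payload, not the actual
// controller data model.
type InboundIntentSpec struct {
	Application              string                       `json:"application"`
	ServiceName              string                       `json:"servicename"`
	Protocol                 string                       `json:"protocol"`          // HTTP, HTTPS, TCP or UDP
	Headless                 string                       `json:"headless"`          // "true" or "false"
	MutualTLS                string                       `json:"mutualTLS"`         // SIMPLE, ISTIO_MUTUAL or MUTUAL
	Port                     string                       `json:"port"`
	ServiceMesh              string                       `json:"serviceMesh"`
	IstioProxy               string                       `json:"istio-proxy"`
	LoadBalancingType        string                       `json:"loadbalancingType"` // "simple" or "consistentHash"
	LoadBalancerMode         string                       `json:"loadBalancerMode"`
	HTTPCookie               string                       `json:"httpCookie"`
	MaxConnections           string                       `json:"maxConnections"`
	ConcurrentHTTP2Requests  string                       `json:"concurrenthttp2Requests"`
	HTTPRequestPerConnection string                       `json:"httpRequestPerConnection"`
	ConsecutiveErrors        string                       `json:"consecutiveErrors"`
	BaseEjectionTime         string                       `json:"baseEjectionTime"`
	IntervalSweep            string                       `json:"intervalSweep"`
	ConnectTimeout           string                       `json:"connectTimeout"`
	ServiceCertificate       string                       `json:"Servicecertificate"`
	ServicePrivateKey        string                       `json:"ServicePrivateKey"`
	CACertificate            string                       `json:"caCertificate"`
	Namespaces               []string                     `json:"namespaces"`
	ServiceAccountAccess     map[string]map[string]string `json:"serviceAccountAccess"` // approximation of the access-list structure shown above
}
```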
GET
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/us-to-us-intents/<name>
RETURN STATUS: 200
RETURN BODY:
{
  "metadata": {
    "name": "<>",                 // unique name for each intent
    "description": "connectivity intent for micro-service to micro-service communication"
  },
  "spec": {
    "inboundservicename": "<>",   // actual name of the client service
    "protocol": "<>",
    "headless": "<>",             // default is false; option "true" makes sure all instances of the headless service have access to the client service
    "mutualTLS": "<>",            // supports 2 modes: SIMPLE, MUTUAL with an external client; for inter- and intra-cluster traffic, mTLS is enabled by default
    "port": "<>",                 // port on which the service is exposed through the service mesh, not the port it is actually running on
    "serviceMesh": "<>",          // taken from the cluster record

    // Traffic configuration
    "loadbalancingType": "<>",    // "simple" and "consistentHash" are the two modes
    "loadBalancerMode": "<>",     // modes for consistentHash: "httpHeaderName", "httpCookie", "useSourceIP", "minimumRingSize"; modes for simple: "LEAST_CONN", "ROUND_ROBIN", "RANDOM", "PASSTHROUGH"
    "httpHeader": "<>",           // input for the hash when the LB type is "consistentHash" and the mode is "httpHeader"
    "httpCookie": "<>",           // input for the hash when the LB type is "consistentHash" and the mode is "httpCookie"; name of the cookie used to maintain sticky sessions
    "maxConnections": "<>",       // connection pool for TCP and HTTP traffic
    "timeOut": "<>",              // in seconds; connection timeout for TCP and idle timeout for HTTP

    // Credentials for mTLS
    "Servicecertificate": {serverCertificate.pem},   // present the actual certificate here; optional, default "", required only if mTLS is set to "MUTUAL"
    "ServicePrivateKey": {serverPrivateKey.pem},     // present the actual private key here; required only if mTLS is "MUTUAL"
    "caCertificate": {caCertificate.pem}             // file should contain the public certificates for all root CAs trusted to authenticate your clients; not required for cluster-level communication
  }
}
DELETE
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/us-to-us-intents/servicehttpbin
RETURN STATUS: 204
POST - with the client details
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/us-to-us-intents/{intent-name}/clients
POST BODY:
{
  "clientServiceName": "<name>"   // actual name of the client service
}
RETURN STATUS: 201
RETURN BODY:
{
  "name": "<name>",
  "Message": "Client created"
}
GET - The Client resource
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/us-to-us-intents/{intent-name}/clients/sleep01
RETURN STATUS: 200
RETURN BODY:
{
  "clientService": {
    "clientServiceName": "<>",   // if set to "any", allow all the external applications to connect; check for service-account-level access
    "protocol": "<>"             // same as that of the inbound service
  }
}
DELETE
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/us-to-us-intents/{intent-name}/clients/sleep01
RETURN STATUS: 204
Security Resource
POST
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/us-to-us-intents/{intent-name}/clients/sleep01/security/security-intent
POST BODY:
{
  ??
}
RETURN STATUS: 204
Traffic Resource??
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/us-to-us-intents/{intent-name}/clients/sleep01/traffic/traffic-intent
POST BODY:
{
}
RETURN STATUS: 204
NOTE - The default authorization policy must be "deny-all" under spec, as all communication between microservices needs to be disabled during Istio installation.
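For reference, a deny-all policy in Istio is an AuthorizationPolicy with an empty spec; a minimal sketch, with an assumed name and namespace, is shown below.

```go
package policy

// denyAllPolicy is an illustrative default AuthorizationPolicy manifest. An
// AuthorizationPolicy with an empty spec matches no requests, so Istio denies
// all traffic to workloads in the selected namespace. The name and namespace
// below are assumptions for illustration only.
const denyAllPolicy = `apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: default
spec: {}
`
```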
2. External service to access Inbound service - Inbound access
NOTE - These are services whose nature is not known. They are assumed to have an FQDN as the point of connectivity.
POST
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/inbound-intent/
POST BODY:
{
  "name": "<name>",               // unique name for each intent
  "description": "bookinfo app",
  "inboundservicename": "mysql",  // actual name of the client service
  "protocol": "HTTP",
  "externalName": "",   // optional, default ""; not required for outbound access since the communication is initiated by the inbound service
  "localDomain": "",    // optional, default ""; update the local network (cluster scope) DNS with records for '<externalName>.<localDomain>'
  "publicDomain": "",   // optional, default ""; update the public network (logical cloud scope) DNS with records for '<externalName>.<publicDomain>'
  "headless": "",       // default is false; option "true" makes sure all instances of the headless service have access to the client service
  "mutualTLS": "",      // setting this to true creates a dedicated egress gateway for the service (e.g. "httpbin01") on whichever cluster it is running on
  "port": "",           // port on which the service is exposed through the service mesh, not the port it is actually running on
  "serviceMesh": "",    // taken from the cluster record
  "loadbalancing": ""   // optional
}
RETURN STATUS: 201
RETURN BODY:
{
  "Message": "inbound connectivity intent creation success",
  "description": "Connectivity intent for an external service to connect to the inbound service"
}
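As a small sketch of the DNS side of this intent, the helper below composes the '<externalName>.<localDomain>' and '<externalName>.<publicDomain>' record names that a DNS provider such as ExternalDNS would be asked to create; the function and field names are assumptions.

```go
package dns

import "fmt"

// InboundIntent holds the DNS-related fields of the inbound intent above.
// Field names are assumptions mirroring the documented payload.
type InboundIntent struct {
	ExternalName string
	LocalDomain  string // cluster-scope DNS zone
	PublicDomain string // logical-cloud-scope DNS zone
}

// RecordNames returns the FQDNs that the DNS provider would be asked to
// create for this intent; empty domains are skipped.
func RecordNames(in InboundIntent) []string {
	var names []string
	if in.ExternalName == "" {
		return names
	}
	if in.LocalDomain != "" {
		names = append(names, fmt.Sprintf("%s.%s", in.ExternalName, in.LocalDomain))
	}
	if in.PublicDomain != "" {
		names = append(names, fmt.Sprintf("%s.%s", in.ExternalName, in.PublicDomain))
	}
	return names
}
```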
POST - External service to access inbound service
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/inbound-intent/{intent-name}/clients
POST BODY:
{
  "name": "<name>",                            // unique name for each intent
  "description": "<description>",
  "externalServiceName": {cnn.edition.com},    // only the FQDN of the service is required
  "externalCaCertificate": {clientCaCert.pem}  // present the actual client certificate
}
RETURN STATUS: 201
RETURN BODY:
{
  "Message": "Success",
  "description": "External service given access to inbound service"
}
Security
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/inbound-intent/{intent-name}/clients/client01/security
POST BODY:
{
  "name": "<name>",               // unique name for each intent
  "description": "<description>",
  "externalAuthenticationissuer": "<>",
  "externalAuthenticationjwksURI": "<>",
  "userAccess": [
    { "userName": "<>", "accessList": { "<URI>": "Action", "<URI>": "Action" } }   // these are the external users and their allowed actions
  ]
}
RETURN STATUS: 204
3. Outbound access
POST -
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/outbound-intent/{intent-name}/clients/
POST BODY:
{
  "name": "<name>",             // unique name for each intent
  "description": "<description>",
  "inboundservicename": "<>",   // actual name of the client service
  "protocol": "<>",
  "headless": "",               // default is false; option "true" makes sure all instances of the headless service have access to the client service
  "mutualTLS": "",              // supports 2 modes: SIMPLE, MUTUAL with an external client; for inter- and intra-cluster traffic, mTLS is enabled by default
  "port": "",                   // port on which the service is exposed through the service mesh, not the port it is actually running on
  "serviceMesh": "",            // taken from the cluster record

  // Traffic configuration
  "loadbalancingType": "",      // "simple" and "consistentHash" are the two modes
  "loadBalancerMode": "",       // modes for consistentHash: "httpHeaderName", "httpCookie", "useSourceIP", "minimumRingSize"; modes for simple: "LEAST_CONN", "ROUND_ROBIN", "RANDOM", "PASSTHROUGH"
  "httpHeader": "",             // input for the hash when the LB type is "consistentHash" and the mode is "httpHeader"
  "httpCookie": "",             // input for the hash when the LB type is "consistentHash" and the mode is "httpCookie"; name of the cookie used to maintain sticky sessions
  "maxConnections": "",         // connection pool for TCP and HTTP traffic
  "timeOut": "",                // in seconds; connection timeout for TCP and idle timeout for HTTP

  // Credentials for mTLS
  "Servicecertificate": {serverCertificate.pem},   // present the actual certificate here; optional, default "", required only if mTLS is set to "MUTUAL"
  "ServicePrivateKey": {serverPrivateKey.pem},     // present the actual private key here; required only if mTLS is "MUTUAL"
  "caCertificate": {caCertificate.pem}             // file should contain the public certificates for all root CAs trusted to authenticate your clients; not required for cluster-level communication
}
RETURN STATUS: 201
RETURN BODY:
{
  "name": "<name>",
  "Message": "Inbound service created"
}
POST - Provide access to an external service from inbound service
URL: /v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set/{set-name}/inbound-intent/
POST BODY:
{
  "externalServiceName": "<name>"   // only the FQDN of the service is required
}
RETURN STATUS: 201
RETURN BODY:
{
  "Message": "Success",
  "description": "External service given access to inbound service"
}
Development
- go API library - https://github.com/gorilla/mux (see the routing sketch below)
- backend - MongoDB - https://github.com/onap/multicloud-k8s/tree/master/src/k8splugin/internal/db - reference
- intent-to-config conversion - use Go templates and Admiral? https://github.com/istio-ecosystem/admiral
- writing the config to etcd - WIP
- unit tests and integration tests - Go tests
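The routing sketch referenced above: a minimal gorilla/mux router registering a few of the north API paths from the intent table; the handler functions and the listen port are placeholder assumptions.

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

// notImplemented is a placeholder handler; the real handlers would read and
// store the intent payloads documented above.
func notImplemented(w http.ResponseWriter, r *http.Request) {
	http.Error(w, "not implemented", http.StatusNotImplemented)
}

func main() {
	r := mux.NewRouter()
	base := "/v2/projects/{project-name}/composite-apps/{composite-app-name}/{version}/traffic-intent-set"

	// Traffic intent set
	r.HandleFunc(base, notImplemented).Methods("POST")

	// Micro-service to micro-service intents
	r.HandleFunc(base+"/{set-name}/us-to-us-intents/", notImplemented).Methods("POST")
	r.HandleFunc(base+"/{set-name}/us-to-us-intents/{intent-name}", notImplemented).Methods("GET", "DELETE")
	r.HandleFunc(base+"/{set-name}/us-to-us-intents/{intent-name}/clients", notImplemented).Methods("POST")

	// Inbound / outbound intents
	r.HandleFunc(base+"/{set-name}/inbound-intent/", notImplemented).Methods("POST")
	r.HandleFunc(base+"/{set-name}/outbound-intent/{intent-name}/clients/", notImplemented).Methods("POST")

	log.Fatal(http.ListenAndServe(":9051", r)) // listen port is an assumption
}
```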
External DNS - Design and intent API
See here: External DNS provider update design and intent API
External application communication intents
Considering DNS resolution, No DNS resolution (IP addresses), Egress proxies of the Service Mesh, Third-party egress proxy
User facing communication intents
Considering Multiple DNS Servers
Considering multiple user-facing entities
Considering RBAC/ABAC
Internal Design details
Guidelines to keep in mind
- Support for metrics that can be retrieved by Prometheus (see the sketch after this list)
- Support for Jaeger distributed tracing by including OpenTracing libraries around HTTP calls
- Support for logging that is understood by Fluentd
- Mutual exclusion of database operations (internal modules may access database records simultaneously, as may replicated instances of the scheduler micro-service)
- Resilience - ensure that the information returned by controllers is not lost; synchronization of resources to remote edge clouds can take hours or even days when the edge is not up and running, and the scheduler micro-service may restart in the meantime
- Concurrency - support multiple operations at a time, including synchronizing resources in various edge clouds in parallel
- Performance - avoid file system operations as much as possible
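The metrics sketch referenced in the first guideline: a minimal example of exposing a Prometheus counter and a /metrics endpoint from the controller; the metric name, label set and port are assumptions.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// intentRequests counts intent API requests by method and handler name.
var intentRequests = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "traffic_controller_intent_requests_total",
		Help: "Number of intent API requests handled, by method and handler.",
	},
	[]string{"method", "handler"},
)

// instrument wraps an HTTP handler and increments the request counter.
func instrument(name string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		intentRequests.WithLabelValues(r.Method, name).Inc()
		next.ServeHTTP(w, r)
	})
}

func main() {
	http.Handle("/metrics", promhttp.Handler())                       // scraped by Prometheus
	http.Handle("/v2/", instrument("intents", http.NotFoundHandler())) // placeholder handler
	http.ListenAndServe(":2112", nil)                                  // port is an assumption
}
```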
Modules (Description, internal structures etc..)
....