Table of Contents
...
There are currently three Ubuntu 18.04 servers: node1-1, node2-1 and node2-2, which are managed by OpenStack.
Node1-1 is the controller node, and node2-1 and node2-2 are compute nodes.
We have installed ONAP into five VMs using the OOM Rancher/Kubernetes instructions.
Development
- There is a transition from HTTP ports to HTTPS ports, so that communications are protected by TLS encryption.
- However, the transition is piecemeal and spread over multiple ONAP releases, so individual projects still have vulnerabilities due to intra-ONAP dependencies, e.g. OJSI-97, one of the open OJSI issues returned by the Jira query text ~ "plain text http".
- A node-to-node VPN (working at the level of the VMs or physical servers that host the Kubernetes pods/Docker containers of ONAP) would provide blanket coverage of all communications with encryption.
- A node-to-node VPN is both
- an immediate stopgap solution in the short-term to cover the exposed plain text HTTP ports
- an extra layer of security in the long-term to thwart unforeseen gaps in the use of HTTPS ports
Discussion
- There has already been discussion of, and a recommendation for, using Istio (https://istio.io/)
- Istio Envoy is deployed within each pod using sidecar injection, and remains in the configuration when the pods are restarted (a sketch of enabling this follows this list)
- Istio Envoy probably appears within each pod as a network bridge, such as the Kubernetes cluster networking bridge cbr0, thereby controlling all network traffic within the pod
- Istio Envoy provides full mesh routing, but can also provide control of routing with traffic management and policies
- Istio Envoy also provides telemetry in addition to the security of mutual TLS authentication
- Istio Citadel is run in the environment as the certificate authority / PKI supporting the mutual TLS authentication
- Istio appears to have only a single overall security domain (i.e. the environment that includes Mixer, Pilot, Citadel and Galley), though it does contain many options to distinguish different services, users, roles and authorities
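As a rough sketch of how the Istio pieces above would be switched on (not taken from an existing ONAP deployment): automatic sidecar injection is a per-namespace label, and mutual TLS can then be required declaratively. The namespace name onap is an assumption here.

```
# Enable automatic Envoy sidecar injection for the (assumed) onap namespace;
# newly created pods get the Envoy proxy container injected, and because the
# label lives on the namespace, the behaviour persists across pod restarts.
kubectl label namespace onap istio-injection=enabled

# Require mutual TLS for all workloads in that namespace. This uses the
# PeerAuthentication API of recent Istio releases; older releases that still
# ship Mixer/Galley as separate components use the authentication.istio.io
# MeshPolicy/Policy resources instead.
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: onap
spec:
  mtls:
    mode: STRICT
EOF
```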
...
- WireGuard aims to be as easy to configure and deploy as SSH. A VPN connection is made simply by exchanging very simple public keys – exactly like exchanging SSH keys – and all the rest is transparently handled by WireGuard. It is even capable of roaming between IP addresses, just like Mosh. There is no need to manage connections, be concerned about state, manage daemons, or worry about what's under the hood
- WireGuard securely encapsulates IP packets over UDP. You add a WireGuard interface, configure it with your private key and your peers' public keys, and then you send packets across it. All issues of key distribution and pushed configurations are out of scope of WireGuard. In contrast, it more mimics the model of SSH and Mosh; both parties have each other's public keys, and then they're simply able to begin exchanging packets through the interface
- WireGuard works by adding a network interface (or multiple), like eth0 or wlan0, called wg0 (or wg1, wg2, wg3, etc). This network interface can then be configured normally using ifconfig(8) or ip-address(8), with routes for it added and removed using route(8) or ip-route(8), and so on with all the ordinary networking utilities. The specific WireGuard aspects of the interface are configured using the wg(8) tool. This interface acts as a tunnel interface (see the configuration sketch after this section).
- At the heart of WireGuard is a concept called Cryptokey Routing, which works by associating public keys with a list of tunnel IP addresses that are allowed inside the tunnel. Each network interface has a private key and a list of peers. Each peer has a public key. Public keys are short and simple, and are used by peers to authenticate each other. They can be passed around for use in configuration files by any out-of-band method, similar to how one might send their SSH public key to a friend for access to a shell server
- The client configuration contains an initial endpoint of its single peer (the server), so that it knows where to send encrypted data before it has received encrypted data. The server configuration doesn't have any initial endpoints of its peers (the clients). This is because the server discovers the endpoint of its peers by examining from where correctly authenticated data originates. If the server itself changes its own endpoint, and sends data to the clients, the clients will discover the new server endpoint and update the configuration just the same. Both client and server send encrypted data to the most recent IP endpoint for which they authentically decrypted data. Thus, there is full IP roaming on both ends
- WireGuard sends and receives encrypted packets using the network namespace in which the WireGuard interface was originally created. This means that you can create the WireGuard interface in your main network namespace, which has access to the Internet, and then move it into a network namespace belonging to a Docker container as that container's only interface. This ensures that the only possible way that container is able to access the network is through a secure encrypted WireGuard tunnel
The most obvious usage of this is to give containers (like Docker containers, for example) a WireGuard interface as its sole interface.
A less obvious usage, but extremely powerful nonetheless, is to use this characteristic of WireGuard for redirecting all of your ordinary Internet traffic over WireGuard.
It turns out that we can route all Internet traffic via WireGuard using network namespaces, rather than the classic routing table hacks.
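To make the interface and cryptokey-routing descriptions above concrete, here is a minimal sketch of bringing up a WireGuard tunnel between two of the nodes. The interface name wg0, the 10.200.0.0/24 tunnel subnet, the port and the peer placeholders are assumptions for illustration, not values from this deployment.

```
# On each node, generate a key pair; the private key never leaves the node.
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

# Create and address the tunnel interface (run on node1-1 in this sketch).
ip link add dev wg0 type wireguard
ip address add 10.200.0.1/24 dev wg0

# Cryptokey routing: bind each peer's public key to the tunnel IPs it is
# allowed to use inside the tunnel. Key, port and endpoint are placeholders.
wg set wg0 listen-port 51820 private-key /etc/wireguard/privatekey
wg set wg0 peer <node2-1-public-key> allowed-ips 10.200.0.2/32 \
    endpoint <node2-1-address>:51820

ip link set wg0 up

# Confirm that handshakes occur and traffic counters increase.
wg show wg0
```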
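And as a sketch of the container/namespace pattern described just above: the commands below create the WireGuard interface in the host (init) namespace and then move it into a network namespace, so that everything in that namespace can only reach the network through the encrypted tunnel. The namespace name "container" and all addresses are illustrative assumptions; for an actual Docker container one would target the namespace of the container's PID instead.

```
# Create a network namespace standing in for a container's namespace.
ip netns add container

# Create wg0 in the host namespace: its encrypted UDP packets keep using the
# host's connectivity even after the interface itself is moved.
ip link add dev wg0 type wireguard

# Move the interface; it becomes the namespace's only path to the network.
ip link set wg0 netns container

# Configure and use it from inside the namespace.
ip -n container address add 10.200.0.3/24 dev wg0
ip netns exec container wg set wg0 private-key /etc/wireguard/privatekey \
    peer <server-public-key> allowed-ips 0.0.0.0/0 endpoint <server-address>:51820
ip -n container link set wg0 up
ip -n container route add default dev wg0
```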
...