This page is mostly wishful thinking. It does not reflect the current state of ONAP security; rather, it describes where we would like to be.
ONAP introduction
Abstract ONAP Architecture
Abstractly, ONAP is an independent software system that exposes Northbound interfaces to User, Admin and OSS/BSS systems and Southbound interfaces to xNFs (VNFs, CNFs, PNFs). ONAP consumes interfaces provided by the NFVI and the xNFs.
ONAP deployed on kubernetes
In the early releases, ONAP was deployed on VMs. ONAP is now virtualized using containers orchestrated by Kubernetes (K8S). ONAP uses interfaces exposed by K8S.
ONAP deployed on kubernetes with external databases
Most ONAP components require a data persistence layer, implemented using a database. In early releases, most ONAP components had their own databases. As the platform has matured, components have moved to shared databases. A logical progression to make the platform simpler to deploy in an operator environment is to create interfaces that allow an operator to configure ONAP to use external DB engines in the operator's environment.
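Such an interface could take the shape of a Helm override file supplied at deploy time. A hypothetical sketch (the key names below are illustrative, not actual OOM chart values):

```yaml
# Hypothetical Helm override enabling an external database.
# Key names are illustrative; consult the actual OOM charts.
global:
  mariadbGalera:
    # Disable the bundled in-cluster database.
    localCluster: false
externalDatabase:
  enabled: true
  host: mariadb.operator.example.com
  port: 3306
  # Credentials are referenced via a Kubernetes secret,
  # never stored in the values file itself.
  existingSecret: onap-external-db-credentials
  tls:
    enabled: true
```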
ONAP deployed on K8S with external databases and external identity and access management (IAM)
ONAP includes AAF, an identity management system that supports authentication, authorization, identity lifecycle management (ILM), and certificate management, including a certificate authority (CA) designed to support a lab deployment. It is likely that an operator will want to integrate ONAP with their IAM system, thus ONAP needs to support standard IAM protocols.
- TO DO: specify the protocols
- LDAP
- ...
ONAP deployed on kubernetes with external databases and external IAM and external CA
Most operators probably already have a Certificate Authority server running in their network, along with a requirement that all services present a valid certificate signed by this CA. This means that ONAP should provide the ability to integrate with an external CA instead of shipping its own.
Defining system boundaries
Provided interfaces
- Admin/User/OSS/BSS interfaces are REST.
- xNF southbound interfaces are VES events (protocol depends on the collector used)
Used interfaces
- The Kubernetes interface is REST. The exact supported Kubernetes version has to be specified by every ONAP release
- The database interface depends on the DB type, but only encrypted communication should be used
- The xNF interface depends on the particular xNF, but all xNFs should support secure communication protocols
- The NFVI interface is REST (usually OpenStack or Kubernetes)
- The IAM interface is OpenID Connect
  - if the operator already has an OIDC-compatible solution, ONAP should just use it
  - if the operator has an Identity Provider (LDAP/Kerberos/etc.), an external OIDC solution (e.g. Keycloak) should be deployed with the operator's IdP configured as a backend
  - In a testing environment, an external OIDC solution should be deployed and bootstrapped with test users
- The CA interface can be one of:
  - Manual interaction by the deployer, who retrieves certificates and then bootstraps the ONAP instance with them
  - One of the automated certificate retrieval protocols (ACME, CMPv2, etc.)
  - In a testing environment, an external CA should be deployed and ONAP should use the automated certificate retrieval described above
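For the automated case, certificate retrieval could be delegated to a component such as cert-manager. A sketch of an ACME issuer, assuming cert-manager is installed in the cluster (the server URL and email are placeholders for the operator's ACME-capable CA):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: operator-acme-ca
spec:
  acme:
    # Placeholder: the operator's internal ACME-capable CA.
    server: https://acme-ca.operator.example.com/directory
    email: pki-admin@operator.example.com
    # Account key generated and stored by cert-manager.
    privateKeySecretRef:
      name: operator-acme-account-key
    solvers:
      - http01:
          ingress:
            class: istio
```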
Requirements on interfaces
Kubernetes
- Cluster should be configured according to CIS Kubernetes Benchmark
- Encryption at rest should be properly configured to ensure that secrets are never stored in plain text
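The encryption-at-rest requirement corresponds to an API server EncryptionConfiguration along the following lines (the key material shown is a placeholder):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc encrypts secrets before they reach etcd;
      # identity is kept last as a read fallback for
      # resources written before encryption was enabled.
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}
```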
Databases
- Each DB should be configured according to corresponding CIS guideline
- All DBs should already be created, or ONAP should be provided with a user capable of creating them
- If ONAP creates a DB, a dedicated user account with privileges limited to that DB should be created. The password for this user cannot be hardcoded in the ONAP source.
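As an illustration, for a PostgreSQL backend the dedicated account could be created roughly as follows (all names are placeholders; the password is a psql variable injected at deploy time from a secret, not hardcoded):

```sql
-- Hypothetical names; :'db_password' is a psql variable
-- supplied at deploy time from a Kubernetes secret.
CREATE DATABASE onap_component_db;
CREATE USER onap_component WITH PASSWORD :'db_password';
-- Limit the account to this database only.
GRANT ALL PRIVILEGES ON DATABASE onap_component_db TO onap_component;
REVOKE ALL ON DATABASE onap_component_db FROM PUBLIC;
```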
xNF
- Defined by the ONAP VNF security requirements
NFVI
- Defined by the CNTT Reference Architecture 1 & 2
IAM
- IAM must support OpenID Connect standard
CA
- If automated certificate retrieval is used, the CA has to support one of CMPv2, ACME, or SCEP
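The manual alternative boils down to standard PKI tooling. A minimal sketch with openssl, using a throwaway self-signed certificate standing in for one issued by the operator's real CA:

```shell
# Generate a key pair and a self-signed certificate standing in
# for one issued by the operator's CA (1-day validity, no passphrase).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout onap-component.key -out onap-component.crt \
  -days 1 -subj "/CN=component.onap.example.com"

# Verify the certificate before bootstrapping ONAP with it; here the
# certificate is its own trust anchor, so verification prints "OK".
openssl verify -CAfile onap-component.crt onap-component.crt
```

In a real deployment the certificate would be signed by the operator's CA and the key material loaded into Kubernetes secrets rather than left on disk.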
Requirements on exposed interfaces
- Northbound and southbound interfaces should be separated (i.e., different instances of the ingress controller) to give the operator deployment flexibility
- All Northbound interfaces must be protected using TLS
- All Northbound interfaces must support SSO
- All Northbound interfaces must support RBAC
- All roles used in ONAP have to be documented
- All forms should validate and sanitize user-provided input
- Southbound interfaces must satisfy VNF security requirements
- ...
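With istio-ingress, the north/south separation and TLS requirements could be expressed as two gateways bound to dedicated ingress deployments. A sketch, reusing the simpledemo.onap.org entrypoint names from elsewhere on this page (the selectors are illustrative):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: onap-north-gateway
spec:
  # Illustrative selector for a dedicated northbound ingress deployment.
  selector:
    istio: ingress-north
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: onap-north-tls
      hosts:
        - api.simpledemo.onap.org
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: onap-south-gateway
spec:
  selector:
    istio: ingress-south
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: onap-south-tls
      hosts:
        - south.simpledemo.onap.org
```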
Internal ONAP security requirements
- ONAP should not include any user database
- ONAP should not implement RBAC on its own but depend on an external component to provide it
- ONAP should not implement CA functionality but depend on external component to provide it
- ONAP components should use mTLS instead of username/password for authentication between each other
- ONAP should configure network policies so that only desired components can communicate with each other
- ONAP must store all sensitive material (keys, passwords) in Kubernetes secrets
- ONAP docker images have to be hardened (see CIS Docker Benchmark)
- ONAP must use only approved docker base images
- ONAP should log all important events to a centralized location
- ONAP should log security audit logs to a secure location
- ONAP logs cannot include any secret material (e.g., passwords and keys)
- All ONAP components must support OIDC
- ...
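Two of the requirements above, mTLS between components and restricted component-to-component communication, map directly onto Istio and Kubernetes primitives. A sketch, with illustrative namespace and label names:

```yaml
# Enforce mTLS for all workloads in an ONAP component namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: onap-component
spec:
  mtls:
    mode: STRICT
---
# Allow ingress only from an authorized peer component.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-authorized-peer
  namespace: onap-component
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              app.kubernetes.io/part-of: onap
          podSelector:
            matchLabels:
              app: authorized-peer
```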
Current ONAP security model
Cloud-Native ONAP security model
- Every component in its own namespace
- All "common" components in separate namespaces
- No implicit dependencies between common components and ONAP
- No nodeports unless really required
- istio-ingress used as ingress controller
- Up to 4 entrypoints for deployment. For example
- simpledemo.onap.org (UI)
- south.simpledemo.onap.org (southbound interfaces)
- iam.simpledemo.onap.org (keycloak)
- api.simpledemo.onap.org (API for OSS/BSS)
- Every entrypoint exposed as a separate ingress instance
- Every ingress gateway terminates TLS and re-encrypts the traffic using mTLS before forwarding it to the destination component
- ISTIO network policy must be configured so that only authorized services can communicate with each other
- Auth between services done using certs via mTLS
- OpenID Connect used to authenticate user
- In a testing deployment Keycloak is used, but it can be replaced with any other OIDC-compatible solution
- Cert-manager and Istio Citadel are used to retrieve certificates
- Kubernetes is configured to use encryption at rest plugin
- ISTIO automated sidecar injection is configured in underlying Kubernetes
- No root pods
- All DB considered external
- Documented roles
- Ability to integrate with LDAP, Kerberos, AAF as IdP
- Ability to retrieve the certificate from external CA
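The OIDC piece of this model can be sketched with Istio's JWT primitives, assuming the iam.simpledemo.onap.org Keycloak instance above (the realm name and paths are illustrative):

```yaml
# Accept only tokens issued by the deployment's Keycloak instance.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: onap-jwt
  namespace: istio-ingress
spec:
  jwtRules:
    - issuer: "https://iam.simpledemo.onap.org/realms/onap"
      jwksUri: "https://iam.simpledemo.onap.org/realms/onap/protocol/openid-connect/certs"
---
# Reject any request that carries no valid token.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: istio-ingress
spec:
  action: DENY
  rules:
    - from:
        - source:
            notRequestPrincipals: ["*"]
```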