SO Scalability Requirements
SO scalability will be supported by managing multiple SO instances through OOM.
In OOM, the number of instances of each SO component will be configured to control the number of active SO instances.
Per-component target scalability will be supported; for example, the number of MariaDB instances may differ from that of the other SO components.
Each BPMN execution engine instance will be configured against a shared database, so that engines can be scaled out promptly and be ready to handle assignments.
SO endpoints will be registered with MSB for communication load-balancing.
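The sketch below illustrates how an SO instance could register its endpoint with MSB so that MSB can load-balance across instances. The MSB address, registration path, and payload fields are assumptions based on MSB's service-registration REST API, not SO's actual registration code.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Minimal sketch: registering an SO endpoint with MSB so that MSB can
 * load-balance requests across SO instances. The MSB address and the
 * payload fields are assumptions, not SO's actual registration code.
 */
public class MsbRegistrationSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical MSB address; in OOM this would typically be the msb-iag service.
        String msbUrl = "http://msb-iag.onap:80/api/microservices/v1/services";

        // Simplified registration payload for one SO API Handler instance.
        String payload = """
            {
              "serviceName": "so",
              "version": "v1",
              "url": "/onap/so/infra",
              "protocol": "REST",
              "nodes": [ { "ip": "10.42.0.15", "port": "8080" } ]
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(msbUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("MSB registration status: " + response.statusCode());
    }
}
```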
SO Run-Time Scalability Handling
SO will have multiple Camunda execution engine instances that share a centralized data store.
The centralized data store will be replicated, and the replication will be transparent to other SO components.
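The following is a minimal sketch of pointing each Camunda engine instance at the shared, replicated MariaDB data store; the JDBC URL, credentials, and schema settings are placeholders rather than SO's actual configuration.

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineConfiguration;

/**
 * Minimal sketch: every BPMN execution engine instance is configured against
 * the same (replicated) MariaDB data store, so any instance can resume a
 * passivated process instance. Connection details are placeholders.
 */
public class SharedEngineConfigSketch {

    public static ProcessEngine buildEngine() {
        return ProcessEngineConfiguration
                .createStandaloneProcessEngineConfiguration()
                // All engine instances point at the same shared schema.
                .setJdbcDriver("org.mariadb.jdbc.Driver")
                .setJdbcUrl("jdbc:mariadb://so-mariadb.onap:3306/camundabpmn")
                .setJdbcUsername("camunda")
                .setJdbcPassword("camunda")
                // The schema is managed centrally; engine instances do not alter it.
                .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_FALSE)
                // Enable the job executor so this node can pick up asynchronous work.
                .setJobExecutorActivate(true)
                .buildProcessEngine();
    }
}
```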
The individual execution engine instances do not maintain session state across transactions.
The complete state is flushed out to the shared database when a process instance is complete or waiting for events (e.g., asynchronous event, message, human task, etc.).
Alternatively, asynchronous continuations can be used during workflow design when it is necessary to actively control save points (by design) and flush process instance state to the database.
Once a process instance is passivated, another engine instance can pick up and execute the remaining process instance flows.
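As a hedged illustration, the sketch below uses Camunda's fluent BPMN model API to mark a service task with an asynchronous continuation, creating a save point at which state is flushed to the shared database; the process and delegate names are illustrative only.

```java
import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;

/**
 * Minimal sketch: marking a service task with an asynchronous continuation so
 * the engine commits (flushes) process state to the shared database before the
 * task runs. Process, task, and delegate names are illustrative only.
 */
public class AsyncContinuationSketch {

    public static BpmnModelInstance buildProcess() {
        return Bpmn.createExecutableProcess("CreateServiceInstance")
                .startEvent()
                .serviceTask("AssignResources")
                    // Save point: state is flushed to the DB before execution,
                    // so any engine instance can pick up the job afterwards.
                    .camundaAsyncBefore()
                    .camundaClass("org.onap.so.example.AssignResourcesDelegate")
                .endEvent()
                .done();
    }
}
```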
Multiple SDC distribution client instances will be instantiated.
An SDC notification will be routed to (or picked up by) one of the SDC notification client instances. Then, the assigned client instance will:
- query for templates/models from SDC.
- parse the templates/models and store them in the Catalog DB.
Because template/model changes and SDC notification client activity are relatively infrequent, a small number (e.g., two) of SDC distribution client instances can be configured.
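A minimal sketch of the work an assigned notification client instance would perform is shown below; SdcClient, CatalogDb, and the model types are hypothetical placeholders standing in for the SDC distribution client and Catalog DB access code.

```java
/**
 * Minimal sketch of the work an assigned SDC notification client instance
 * performs. SdcClient, CatalogDb, and the model types are hypothetical
 * placeholders, not the actual SO or sdc-distribution-client classes.
 */
public class SdcNotificationHandlerSketch {

    private final SdcClient sdcClient;   // hypothetical SDC query client
    private final CatalogDb catalogDb;   // hypothetical Catalog DB access layer

    public SdcNotificationHandlerSketch(SdcClient sdcClient, CatalogDb catalogDb) {
        this.sdcClient = sdcClient;
        this.catalogDb = catalogDb;
    }

    /** Invoked on the one instance to which the SDC notification was routed. */
    public void onNotification(String serviceUuid) {
        // 1. Query SDC for the distributed templates/models (e.g., a TOSCA CSAR).
        byte[] csar = sdcClient.downloadCsar(serviceUuid);

        // 2. Parse the templates/models into catalog entities.
        ServiceModel model = sdcClient.parse(csar);

        // 3. Persist the parsed service/resource models in the Catalog DB.
        catalogDb.storeServiceModel(model);
    }

    // Hypothetical collaborator interfaces, shown only to make the sketch self-contained.
    interface SdcClient {
        byte[] downloadCsar(String serviceUuid);
        ServiceModel parse(byte[] csar);
    }

    interface CatalogDb {
        void storeServiceModel(ServiceModel model);
    }

    record ServiceModel(String uuid, String name) { }
}
```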
Multiple API handler instances will be instantiated, and all of the instances are active (active-active).
Requests from VID, External API and UUI towards the API handler instances will be distributed/routed via load-balancing; MSB is expected to handle this load-balancing.
An assigned API handler instance will communicate with the orchestration execution engine and Data store in a scalable manner.
Communications with the orchestration execution engine (invoking BPMN execution) will be done through MSB; there will be no direct connections to hard-coded endpoints.
For storing requests and selecting recipes, the API Handler will communicate with the Data store (Request DB, Service Catalog), which is replicated.
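The sketch below shows, under stated assumptions, an API handler instance invoking the BPMN execution engine through an MSB-routed URL after a recipe has been selected from the catalog; the MSB route, recipe URI, and header usage are assumptions rather than SO's actual API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Minimal sketch of an API handler instance invoking the BPMN execution
 * engine through MSB rather than a hard-coded engine address. The MSB route,
 * recipe URI, and request payload are assumptions.
 */
public class ApiHandlerSketch {

    private final HttpClient httpClient = HttpClient.newHttpClient();

    // Hypothetical MSB-routed base URL for the BPMN execution engine.
    private static final String BPMN_VIA_MSB = "http://msb-iag.onap:80/api/so-bpmn-infra/v1";

    public int startOrchestrationFlow(String requestId, String recipeUri, String payload)
            throws Exception {
        // The recipe URI (which BPMN flow to run) is assumed to come from the
        // replicated Service Catalog; the request itself is assumed to have been
        // stored in the replicated Request DB before the flow is started.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BPMN_VIA_MSB + recipeUri))
                .header("Content-Type", "application/json")
                .header("X-ONAP-RequestID", requestId)
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response =
                httpClient.send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode();
    }
}
```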
Multiple Resource/Controller Adapters will be instantiated for active-active operations.
The communications between the BPMN/TOSCA resource recipes and the adapter instances will be load-balanced through MSB.
External communications with other ONAP components such as DCAE, OOF, A&AI, SDNC, etc. will be done through MSB/DMaaP in a scalable manner, consistent with the communication requirements above.
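As a hedged example, an adapter could publish events to other components via the DMaaP Message Router rather than holding a direct connection; the topic name and message body below are illustrative.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Minimal sketch: an adapter publishes to other ONAP components over the
 * DMaaP Message Router instead of a direct connection. The topic name and
 * message body are illustrative; the /events/{topic} path follows the
 * Message Router REST API.
 */
public class DmaapPublishSketch {

    public static void main(String[] args) throws Exception {
        String topic = "SO-EXAMPLE-TOPIC";   // hypothetical topic
        String message = "{\"requestId\":\"1234\",\"status\":\"COMPLETE\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://message-router.onap:3904/events/" + topic))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(message))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("DMaaP publish status: " + response.statusCode());
    }
}
```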
Note
ARIA Orchestrator
ARIA Orchestrator is not addressed here because its location within the SO architecture is not yet certain.
TBD
SO components are being refactored; additional modules may be identified.