Purpose:
The main purpose is to ensure that metric/log collection, correlation & analysis, and closed-loop actions are performed closer to the data.
Analytics can be infrastructure analytics, VNF analytics, or application analytics, but in Dublin, analytics-as-a-service is proven using infrastructure analytics.
- Big data analytics to ensure that the analysis is accurate.
- Big data frameworks to allow the use of machine learning and deep learning.
- Avoid sending large amounts of data to ONAP-Central for training, by letting training happen near the data source (cloud regions).
- Improve ONAP scale-out performance by distributing some functions, such as analytics, out of ONAP-Central.
- Let inferencing happen closer to the edges/cloud regions for future closed-loop operations, thereby reducing closed-loop latency.
- Opportunity to standardize infra analytics events/alerts/alarms through output normalization across ONAP-based and 3rd-party analytics applications.
Owner: Dileep Ranganathan (Intel), TBD (from VMware)
Participating Companies: Intel, VMware
Operator Support: China Mobile, Vodafone
Parent page: Edge Automation Functional Requirements for Dublin
Link to presentation documents: Distributed Analytics-as-a-Service presentations
Use Case Name : Distributed Analytics
Why are we terming this a use case instead of a functional requirement?
Initially, distributed analytics was projected as a 'functional requirement', as it was felt that all existing use cases could leverage distributed analytics. For various reasons, such as not being able to leverage DCAE/CLAMP due to resource issues and the significant groundwork needed to deploy the analytics framework and analytics applications (deployment & configuration), it was decided to focus on this groundwork in R4. As part of this groundwork, existing use cases will not be integrated. The groundwork consists only of generic deployment and configuration, which normally does not require enhancements to existing source code, but it will require the creation of new micro-services to be deployed in cloud regions.
Showcase | Test Environment | Integration Team Liaison |
---|---|---|
Deploy PNDA-based analytics framework using ONAP as any other workload | Intel/Windriver Lab, VMware Lab (TBD) | TBD |
Deploy generic services related to infrastructure analytics as any other workload | Intel/Windriver Lab, VMware Lab (TBD) | TBD |
Deploy test analytics application as any other workload | Intel/Windriver Lab, VMware Lab (TBD) | TBD |
Dublin focus
- Creation of Helm charts for the analytics framework. Two packages:
  - Standard package (with all software) and inferencing package (minimal).
- Deployment of the analytics framework in cloud regions that are based on K8S. Identify any gaps and work with the "K8S based Cloud region support" team to fix them.
- Cloud infra Event/Alert/Alarm/Fault Normalization & Dispatching microservice deployment on K8S.
- Spark application management with the PNDA deployment manager (to dispatch application images to various cloud regions)
- ML/DL Model management & Dispatcher (Stretch goal)
- Analytics application (consisting of multiple components) configuration profile support using the Multi-Cloud/K8S configuration service. Develop a config-sync plugin.
- Development of the Collection and Distribution Service - CollectD to Kafka (CollectD-kafka/avro); a sketch of such a bridge appears after this list.
- Collection and Distribution Service - Node-exporter & cAdvisor to Kafka (Stretch Goal)
- ONAP alarm event dispatcher micro-service (ONAP-event-dispatcher)
- Make the TCA application generic, or create a simple TCA application (since it needs to run in cloud regions that do not have ONAP-specific components) to run on any Spark-based framework (input via Kafka, configuration updates via Consul directly, output via Kafka): the TCA-spark application (for testing); see the second sketch after this list.
- 3rd-party infra analytics applications aligning with the output of the generic TCA application.
- Creation of a set of Helm charts, 'infra analytics base', consisting of the following:
  - Daemon set consisting of 'CollectD & collectd-config-agent'.
  - CollectD-to-Kafka/Avro
  - Node-exporter-to-Kafka/Avro
  - cAdvisor-to-Kafka/Avro
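As an illustration of the CollectD-to-Kafka collection service above, one possible realization is a collectd Python write plugin that republishes each metric to a Kafka topic. This is a minimal sketch, not the Dublin implementation: it assumes the kafka-python client, uses JSON instead of the Avro encoding named above, and the broker address and topic name are hypothetical placeholders.

```python
# Minimal sketch of a collectd -> Kafka bridge as a collectd Python write
# plugin. Assumes the kafka-python client; JSON is used here for brevity
# where the actual service targets Avro. Broker address and topic name
# are illustrative placeholders, not the Dublin configuration.
import json

import collectd
from kafka import KafkaProducer

producer = None

def init():
    """Create the Kafka producer once collectd loads the plugin."""
    global producer
    producer = KafkaProducer(
        bootstrap_servers="kafka:9092",  # hypothetical broker address
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

def write(vl, data=None):
    """Republish every collectd value list as a JSON message."""
    producer.send("collectd-metrics", {  # hypothetical topic name
        "host": vl.host,
        "plugin": vl.plugin,
        "plugin_instance": vl.plugin_instance,
        "type": vl.type,
        "type_instance": vl.type_instance,
        "time": vl.time,
        "values": list(vl.values),
    })

collectd.register_init(init)
collectd.register_write(write)
```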
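For the TCA-spark test application, a minimal PySpark Structured Streaming sketch could look like the following: read metric events from one Kafka topic, apply a CPU threshold, and write crossings to an output topic. The topic names, field layout, and hard-coded threshold are assumptions for illustration; in Dublin the threshold would be fetched from Consul rather than hard-coded.

```python
# Minimal sketch of a Spark-based TCA (threshold crossing analysis) app:
# Kafka in, threshold check, Kafka out. Topic names, field layout, and
# the hard-coded threshold are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, struct, to_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("tca-spark").getOrCreate()

# Assumed shape of incoming metric events.
schema = StructType([
    StructField("host", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
])

CPU_THRESHOLD = 90.0  # placeholder; the real app would pull this from Consul

metrics = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")  # hypothetical broker
    .option("subscribe", "collectd-metrics")          # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("m"))
    .select("m.*")
)

# Keep only CPU samples that cross the threshold.
alerts = metrics.filter((col("metric") == "cpu") & (col("value") > CPU_THRESHOLD))

query = (
    alerts.select(to_json(struct("host", "metric", "value")).alias("value"))
    .writeStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")
    .option("topic", "tca-alerts")                    # hypothetical topic
    .option("checkpointLocation", "/tmp/tca-checkpoint")
    .start()
)
query.awaitTermination()
```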
Dublin Assumptions:
- Kubernetes support in cloud regions (support for others in the future; what is to be supported is TBD)
- PNDA as a base (alignment with DCAE; DCAE has already decided to use the PNDA framework)
- Spark framework for both training and inference (future: make inference a micro-service for easier deployment, and make inference a set of executables that can be deployed even within the application/NF workload or on the compute node)
- Full framework instantiation (future: work with partial deployments that already exist; for example, support an existing HDFS deployment by instantiating only the other components)
- Instantiated in a new namespace (not in an existing namespace) in remote cloud regions
- Dynamic configuration updates to analytics applications will use Consul in Dublin; other mechanisms are for further study. A sketch of a Consul-based configuration watch follows this list.
- Closed-loop actions are performed at ONAP-Central.
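To illustrate the Consul assumption above, an analytics application could long-poll a Consul KV key for its configuration profile and apply updates as they arrive. A minimal sketch, assuming the python-consul client; the key name and the on_change callback are hypothetical placeholders.

```python
# Minimal sketch of dynamic configuration via Consul: long-poll a KV key
# and apply each update. Key name and callback are hypothetical.
import json

import consul

def watch_config(key="analytics/tca/config", on_change=print):
    c = consul.Consul()  # defaults to localhost:8500
    index = None
    while True:
        # Blocking query: returns when the key changes or the wait expires.
        index, data = c.kv.get(key, index=index, wait="30s")
        if data is not None:
            on_change(json.loads(data["Value"].decode("utf-8")))

if __name__ == "__main__":
    watch_config()
```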
DCAE/CLAMP integration
It was felt that DCAE integration cannot happen in R4 due to lack of understanding and resources. Hence, the intention is to develop common items in R4 and integrate with DCAE/CLAMP in future releases. During the Dublin time frame, however, the intention is to do the following:
- Understand how DCAE/CLAMP can play a role in analytics-as-a-service.
- Track work happening elsewhere in R4 that is a dependency for DCAE/CLAMP integration with analytics-as-a-service:
  - PNDA integration in DCAE
  - Understanding how the Cloudify plugin works in SO
  - Helm-chart-based analytics app description support in Cloudify
  - Cloudify HA support
  - Dynamic configuration support
  - Dedicated analytics apps for VNFs
- Identify work items.
- Create E2E sequence flows.
Why we have chosen the SDC/SO/OOF/MC approach to deploy the analytics framework and analytics applications
Analytics applications in cloud regions are treated like any other workload for the following reasons:
- Bringing up analytics applications along with the VNF - At times, analytics applications are dedicated to a VNF. Such applications are expected to be brought up when the VNF is brought up and terminated when the VNF is terminated. Hence, it was felt that the analytics application should be described in the same service as the VNF.
- Need for bringing up the analytics application in the same place as the VNF - This can be achieved using VNF/analytics-app affinity rules, and hence the application needs to be part of the service.
- Need for bringing up analytics applications on compute nodes having accelerators (e.g., for ML/DL) - ONAP/OOF can provide this functionality.
- Need for bringing up analytics applications in the right cloud regions based on cost and distance from the edge locations - ONAP/OOF can provide this functionality.
- Consistent configuration orchestration across components of analytics applications - Leverage the MC-provided configuration service even for analytics applications.
- Configuration of dependent services or VNFs/NFVI/existing services - At times, configuration of dependent services is required as part of bringing up an analytics application; when the analytics app is terminated, the added configuration needs to be removed. Since this requirement is the same for VNFs, the same facilities can be leveraged here too.
Impacted Projects
Project | PTL | JIRA Epic / User Story* | Requirements |
---|---|---|---|
Demo repository | | | |
Demo repository | | | |
OOM | | | |
Multi-VIM/Cloud | | | |
Multi-VIM/Cloud | Bin Yang | | Cloud infra Event/Alert/Alarm/Fault Normalization & Dispatching microservice development (see analytics intent example) |
*Each Requirement should be tracked by its own User Story in JIRA
Analytics Intent Example
- “Infrastructure Analytics as service for Alerts at Cluster Level and Host Level for a Cloud Region”
Capabilities (corresponding to Intent) Example:
- Cluster (OpenStack Host Aggregate) level & host level alerts for compute resources
  - Cluster has unexpectedly high CPU workload
  - Cluster has memory contention caused by less than half of the virtual machines
  - Cluster has memory contention caused by more than half of the virtual machines
  - Note: The cluster CPU threshold and memory threshold are defined separately; a sketch of this evaluation appears after the reference link below.
- Ref: https://docs.vmware.com/en/vRealize-Operations-Manager/6.6/vrealize-operations-manager-66-reference-guide.pdf
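To make the capabilities above concrete, here is a hedged sketch of how a generic TCA application might evaluate them: aggregate per-VM metrics to cluster level, apply separate CPU and memory thresholds, and distinguish contention caused by less versus more than half of the VMs. All field names, threshold values, and the alert format are assumptions for illustration, not the normalized schema Dublin would standardize on.

```python
# Illustrative evaluation of the intent capabilities above. Thresholds,
# field names, and the alert format are assumptions, not the normalized
# output format the Dublin work would define.
CPU_THRESHOLD_PCT = 90.0  # cluster-level CPU threshold (defined separately)
MEM_THRESHOLD_PCT = 85.0  # cluster-level memory threshold

def evaluate_cluster(cluster_id, vms):
    """vms: list of {"cpu_pct": float, "mem_pct": float}, one per VM."""
    alerts = []
    # Capability: cluster has unexpectedly high CPU workload.
    avg_cpu = sum(vm["cpu_pct"] for vm in vms) / len(vms)
    if avg_cpu > CPU_THRESHOLD_PCT:
        alerts.append({"cluster": cluster_id,
                       "alert": "cluster-high-cpu-workload"})
    # Capability: memory contention, caused by less vs. more than half of VMs.
    contended = [vm for vm in vms if vm["mem_pct"] > MEM_THRESHOLD_PCT]
    if contended:
        cause = ("more-than-half-of-vms"
                 if len(contended) / len(vms) > 0.5
                 else "less-than-half-of-vms")
        alerts.append({"cluster": cluster_id,
                       "alert": "cluster-memory-contention",
                       "caused_by": cause})
    return alerts

# Example: two of three VMs over the memory threshold.
print(evaluate_cluster("cluster-1", [
    {"cpu_pct": 40.0, "mem_pct": 95.0},
    {"cpu_pct": 50.0, "mem_pct": 90.0},
    {"cpu_pct": 30.0, "mem_pct": 20.0},
]))
```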
Testing
Current Status
Testing Blockers
- High visibility bugs
- Other issues for testing that should be seen at a summary level
- Where possible, always include JIRA links
End to End flow to be Tested
Same as vFW (TBD)
Test Cases and Status
# | Test Case | Status |
---|---|---|
1 | There should be a test case for each item in the sequence diagram | NOT YET TESTED |
2 | Create additional requirements as needed for each discrete step | COMPLETE |
3 | Test cases should cover the entire use case | PARTIALLY COMPLETE |
4 | Test cases should include enough detail for the testing team to implement the test | FAILED |