

Purpose:

The main purpose is to ensure that metric/log collection, correlation & analysis, and closed-loop actions are performed closer to the data.

Analytics can be infrastructure analytics, VNF analytics, or application analytics. In Dublin, however, analytics-as-a-service is proven using infrastructure analytics.

  • Big Data analytics to ensure that the analysis is accurate.
  • Big Data frameworks to allow usage of machine learning and deep learning.
  • Avoid sending large amounts of data to ONAP-Central for training, by letting training happen near the data source (cloud regions).
  • ONAP scale-out performance, by distributing some functions, such as analytics, out of ONAP-Central.
  • Letting inferencing happen closer to the edges/cloud regions for future closed-loop operations, thereby reducing the latency of the closed loop.
  • Opportunity to standardize infra analytics events/alerts/alarms through output normalization across ONAP-based and 3rd-party analytics applications.

Owner: Dileep Ranganathan (Intel), TBD (from VMware)

Participating Companies: Intel, VMware

Operator Support: China Mobile, Vodafone

Parent page: Edge Automation Functional Requirements for Dublin

Link to presentation documents: Distributed Analytics-as-a-Service presentations

Use Case Name : Distributed Analytics

Why are we terming this a use case instead of a functional requirement?

Distributed analytics was initially projected as a 'functional requirement', since it was felt that all existing use cases could leverage distributed analytics. For various reasons - the inability to leverage DCAE/CLAMP due to resource issues, and the significant amount of basic work needed to deploy the analytics framework and analytics applications (such as deployment & configuration) - it was decided to focus on this basic work in R4. As part of this basic work, existing use cases will not be integrated. The basic work consists only of generic deployment and configuration, which normally does not require enhancements to existing source code, but it will require the creation of new microservices to be deployed in cloud regions.

Showcase | Test Environment | Integration Team Liaison
Deploy PNDA-based analytics framework using ONAP as any other workload | Intel/Windriver Lab, VMware Lab (TBD) | TBD
Deploy generic services related to infrastructure analytics as any other workload | Intel/Windriver Lab, VMware Lab (TBD) | TBD
Deploy test analytics application as any other workload | Intel/Windriver Lab, VMware Lab (TBD) | TBD

Dublin focus

  1. Creation of Helm charts for the analytics framework. Two packages:
    • Standard package (with all SW) and inferencing package (minimal).
  2. Deployment of Analytics framework in the cloud-regions that are based on K8S. Identify any gaps and work with "K8S based Cloud region support" team to fix them.
  3. Cloud infra Event/Alert/Alarm/Fault Normalization & Dispatching microservice deployment on K8S.
  4. Spark Application management with PNDA deployment manager (to dispatch application image to various cloud regions)
  5. ML/DL Model management & Dispatcher (Stretch goal)
  6. Analytics Application (consisting of multiple components) configuration profile support using Multi-Cloud/K8S configuration service. Develop config-sync plugin.
  7. Development of Collection and Distribution Service - CollectD to Kafka (CollectD-kafka/avro)
  8. Collection and  Distribution Service - Node-export & cAdvisor to Kafka (Stretch Goal)
  9. ONAP alarm Event dispatcher micro-service (ONAP--event-dispatcher)
  10. Make the TCA application generic, or create a simple TCA application (since it needs to run in cloud regions that do not have ONAP-specific components) to run on any Spark-based framework (get input via Kafka, get configuration updates via Consul directly, output via Kafka): TCA-Spark application (for testing)
  11. 3rd-party infra analytics applications aligning with the output of the generic TCA application.
  12. Creation of a set of Helm charts, 'infra analytics base', consisting of the following:
    1. Daemon set consisting of 'CollectD & CollectD-config-agent'.
    2. CollectD-Kafka/Avro
    3. Node-exporter-to-Kafka/Avro
    4. cAdvisor-to-Kafka/Avro
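To make item 10 concrete, below is a minimal pure-Python sketch of the threshold-crossing logic such a simple TCA application would apply per record; the configuration keys, metric names, and field names are illustrative assumptions, not the ONAP TCA schema, and in the real application this function would run inside a Spark streaming job fed from Kafka with its config pulled from Consul.

```python
import json

# Hypothetical Consul-style configuration: metric thresholds for the
# simple TCA application (names are illustrative, not the ONAP schema).
TCA_CONFIG = {
    "cpu.percent": {"direction": "GREATER", "threshold": 90.0, "severity": "CRITICAL"},
    "mem.free_mb": {"direction": "LESS", "threshold": 512.0, "severity": "MAJOR"},
}

def evaluate(record: dict, config: dict = TCA_CONFIG):
    """Return an alert dict if the metric crosses its threshold, else None."""
    rule = config.get(record.get("metric"))
    if rule is None:
        return None
    value = float(record["value"])
    crossed = (value > rule["threshold"] if rule["direction"] == "GREATER"
               else value < rule["threshold"])
    if not crossed:
        return None
    return {
        "metric": record["metric"],
        "value": value,
        "severity": rule["severity"],
        "source": record.get("source", "unknown"),
    }

# In the real application, records would arrive from a Kafka input topic and
# any resulting alerts would be published back out via Kafka.
sample = {"metric": "cpu.percent", "value": 97.5, "source": "node-1"}
print(json.dumps(evaluate(sample)))
```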

Dublin Assumptions:

  • Kubernetes support in cloud regions (support others in the future; what is to be supported is TBD).
  • PNDA as a base (alignment with DCAE; DCAE has already decided to use the PNDA framework).
  • Spark framework for both training and inference (future: make inference a microservice for easier deployment, and make inference a set of executables that can be deployed even within the application/NF workload or on the compute node).
  • Full framework instantiation (future: work with partial deployments that already exist; for example, support an existing HDFS deployment by instantiating only the other components).
  • Instantiated in a new namespace (not an existing namespace) in remote cloud regions.
  • Dynamic configuration updates to analytics applications will use Consul in Dublin; other mechanisms are for further study.
  • Closed-loop actions are performed at ONAP-Central.
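Since dynamic configuration updates in Dublin go through Consul, a minimal sketch of decoding Consul's KV REST response may be useful: Consul's KV HTTP API (GET /v1/kv/&lt;prefix&gt;?recurse) returns a JSON array in which each Value is base64-encoded. The key names below are illustrative assumptions, not an agreed configuration schema.

```python
import base64
import json

def decode_consul_kv(response_body: str) -> dict:
    """Decode a Consul KV REST response (GET /v1/kv/<prefix>?recurse)
    into a plain {key: value} dict. Consul base64-encodes each Value."""
    entries = json.loads(response_body)
    return {
        e["Key"]: base64.b64decode(e["Value"]).decode("utf-8")
        for e in entries
        if e.get("Value") is not None
    }

# Example payload shaped like Consul's KV API response (keys are illustrative).
payload = json.dumps([
    {"Key": "tca/cpu.threshold", "Value": base64.b64encode(b"90").decode()},
    {"Key": "tca/mem.threshold", "Value": base64.b64encode(b"512").decode()},
])
print(decode_consul_kv(payload))
```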

DCAE/CLAMP integration

It was felt that DCAE integration cannot happen in R4 due to lack of understanding and resources. Hence the intention is to develop common items in R4 and integrate with DCAE/CLAMP in future releases. During the Dublin time frame, however, we would like to do the following:

  • Understand how DCAE/CLAMP can play a role in analytics-as-a-service.
  • Track work happening elsewhere in R4; these are dependencies for DCAE/CLAMP integration with analytics-as-a-service:
    • PNDA integration in DCAE
    • Understand how cloudify plugin works in SO
    • Helm charts based analytics app description support in Cloudify
    • Cloudify HA support
    • Dynamic configuration support
    • Dedicated analytics app for VNFs.
  • Identify work items 
  • Create E2E sequence flows.


Why we have chosen the SDC/SO/OOF/MC approach to deploy the analytics framework and analytics applications

Analytics applications in cloud regions are treated like any other workload for the following reasons:

  • Bring up analytics applications along with the VNF - At times, analytics applications are dedicated to a VNF. They are expected to be brought up when the VNF is brought up and terminated when the VNF is terminated. Hence, it is felt that the analytics application should be described in the same service as the VNF.
  • Need for bringing up the analytics application in the same place as the VNF - This can be achieved using VNF and analytics-app affinity rules, and hence it needs to be part of the service.
  • Need for bringing up analytics applications on compute nodes having accelerators (e.g. for ML/DL) - ONAP/OOF can provide this functionality.
  • Need for bringing up analytics applications in the right cloud regions based on cost and distance from the edge locations - ONAP/OOF can provide this functionality.
  • Consistent configuration orchestration across components of analytics applications - Leverage the MC-provided configuration service even for analytics applications.
  • Configuration of dependent services or VNFs/NFVI/existing services - At times, configuration of dependent services is required as part of bringing up an analytics application. Also, when the analytics app is terminated, the added configuration needs to be removed. Since this requirement is the same for VNFs, the same facilities can be leveraged here too.
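As a toy illustration of the kind of placement decision OOF makes for the cost/distance criterion above, the sketch below scores candidate cloud regions by a weighted sum of cost and distance and picks the best one; the weights, field names, and region data are entirely hypothetical and not an OOF policy format.

```python
def score_region(region: dict, w_cost: float = 0.5, w_distance: float = 0.5) -> float:
    """Lower is better: weighted sum of cost and distance (km, scaled to ~0-1)."""
    return w_cost * region["cost"] + w_distance * region["distance_km"] / 1000.0

def pick_region(regions):
    """Choose the candidate region with the lowest combined score."""
    return min(regions, key=score_region)

# Hypothetical candidate regions: a nearby edge site vs. a cheaper central DC.
regions = [
    {"name": "edge-west", "cost": 1.0, "distance_km": 50},
    {"name": "core-dc", "cost": 0.4, "distance_km": 900},
]
print(pick_region(regions)["name"])  # edge-west wins: 0.525 vs. 0.65
```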


EPIC stories:

ONAPARC-280

Impacted Projects 

Project | PTL | JIRA Epic / User Story | Requirements*
Demo repository

  1. A repository to keep reference ML/DL Model Management and spark application image management service
  2. Reference collection Services to create e2e demo: Collectd-to-Kafka/Avro, Node-exporter-to-Kafka/Avro and cAdvisor-to-Kafka/Avro
  3. Reference ONAP event dispatcher services for e2e demo
  4. Demo spark analytics app
  5. Reference configuration synchronization container
Demo repository

  1. Helm Charts for PNDA based analytics framework packages
  2. Helm Charts for 'infra analytics base'
  3. Helm Charts for various analytics applications
  4. Helm Chart for Cloud infra Event/Alert/Alarm/Fault Normalization & Dispatching microservice
Multi-VIM/Cloud


  1. Add new config-service plugin to work with Edge side Consul/etcd for configuration (Leaning towards consul)
Multi-VIM/Cloud | Bin Yang

Cloud infra Event/Alert/Alarm/Fault Normalization & Dispatching microservice development

  1. Integrate DMaaP (Kafka) client for communication to ONAP Central 
  2. Receive Event/Alert/Alarm/Fault from infra analytics application
  3. Normalize from cloud specific Event/Alert/Alarm/Fault format to cloud agnostic (ONAP internal) Event/Alert/Alarm/Fault format
  4. Dispatch Event/Alert/Alarm/Fault to ONAP central using DMaaP (Kafka) client
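The normalization step (cloud-specific format to cloud-agnostic ONAP-internal format) can be sketched as a simple field mapping; the field names and severity values below are illustrative assumptions, not the actual ONAP/VES schema, and in the real microservice the resulting envelope would be dispatched to ONAP-Central via the DMaaP (Kafka) client.

```python
import time

# Illustrative mapping from a cloud-specific severity to an ONAP-internal one.
SEVERITY_MAP = {"crit": "CRITICAL", "warn": "MAJOR", "info": "MINOR"}

def normalize(raw_event: dict, cloud_region: str) -> dict:
    """Translate a cloud-specific event into a cloud-agnostic envelope.
    Field names are illustrative, not the actual ONAP/VES schema."""
    return {
        "eventName": raw_event.get("alarm", "unknown"),
        "severity": SEVERITY_MAP.get(raw_event.get("level"), "MINOR"),
        "sourceRegion": cloud_region,
        "reportingEntity": raw_event.get("host", "unknown"),
        "timestamp": raw_event.get("ts", int(time.time())),
    }

# Example cloud-specific event received from an infra analytics application.
raw = {"alarm": "HighCpu", "level": "crit", "host": "node-1", "ts": 1554000000}
print(normalize(raw, "cloud-region-1"))
```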

*Each Requirement should be tracked by its own User Story in JIRA 

Testing

Current Status

  1. Testing blockers
  2. High-visibility bugs
  3. Other issues for testing that should be seen at a summary level
  4. Where possible, always include JIRA links


End to End flow to be Tested

Same as vFW (TBD)


Test Cases and Status


# | Test Case | Status
1 | There should be a test case for each item in the sequence diagram | NOT YET TESTED
2 | Create additional requirements as needed for each discrete step | COMPLETE
3 | Test cases should cover the entire Use Case | PARTIALLY COMPLETE
4 | Test cases should include enough detail for the testing team to implement the test | FAILED
