
...

Permanently persist the data that flows through ONAP, and provide ready-to-use data analytics applications built on the data.

Background

There is a large amount of data flowing among ONAP components, mostly via DMaaP and Web Services. For example, all field events collected by DCAE collectors go through DMaaP. DMaaP is backed by Kafka, a publish-subscribe system where data is not meant to be permanent and is deleted after a certain retention period. Though some components may store processed results in their local databases, most of the raw data will eventually be lost. We should store this data, which could provide insight into network operation, by way of Big Data, with the help of data analytics and machine learning technologies. In this project, we start by persisting all the raw data flowing through DMaaP.

Project Description

In this project, we will:

  1. Provide a systematic way to ingest DMaaP data in real time into a few selected Big Data storage systems, such as, but not limited to, Couchbase, a distributed document-oriented database; Druid, a data store designed for low-latency OLAP analytics; and HBase, a Hadoop database for mass batch processing. Which data goes to which databases is configurable, depending on the problems we try to solve and the results we want to achieve (see the routing sketch after this list). For example, by storing data in Druid, an OLAP store, we can integrate it with OLAP tools like Superset and time series tools like Grafana. In the future, new requirements may require supporting additional storage systems.
  2. Provide sophisticated, ready-to-use interactive analytics tools built on the data. These tools fall into two categories: integrated third-party data analytics tools, such as Superset and Grafana, and custom applications developed by us. Custom applications include ETL applications, Big Data analytics programs developed in the Spark framework, and Machine Learning models. While the integrated third-party tools are mostly for system operators (human beings) with GUI interfaces, the results of custom applications are consumed by both system operators and programs such as ONAP components and external systems (e.g. OSS/BSS).
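To make the configurable routing concrete, here is a minimal sketch in Python of a topic-to-store mapping. The topic names and store labels are hypothetical examples, not a fixed configuration format.

    # Minimal sketch of topic-to-store routing (topic names and store
    # labels are hypothetical examples, not a fixed format).
    TOPIC_ROUTES = {
        "unauthenticated.SEC_FAULT_OUTPUT": ["couchbase", "druid"],
        "unauthenticated.VES_MEASUREMENT_OUTPUT": ["druid"],
    }

    def stores_for(topic):
        """Return the target stores for a topic; unrouted topics are skipped."""
        return TOPIC_ROUTES.get(topic, [])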

Architecture

[Architecture diagram]

The data storage and associated tools are infrastructures external to ONAP, to be installed only once initially, or reused from existing infrastructures. Since custom settings and applications will be deployed to and run on them, they are really integrated parts of DataLake.

...

  • Provide an admin REST API for configuration and topic management. A topic can be configured to specify which data stores it is exported to, with Couchbase and Druid supported initially, and its TTL (Time To Live) in the stores (see the request sketch after this list). We will support more distributed databases in the future if needed.

  • Provide an Admin GUI to manage the dispatcher, making use of the above admin REST API. It also manages the analytics tools and applications.
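As an illustration of the admin REST API, the sketch below configures a topic's target stores and TTL. The endpoint path, port, and payload fields are assumptions for illustration; the actual API is to be defined by this project.

    # Hypothetical call to the admin REST API (endpoint, port, and payload
    # fields are assumptions, not a published spec).
    import requests

    payload = {
        "name": "unauthenticated.SEC_FAULT_OUTPUT",
        "sinks": ["couchbase", "druid"],   # stores the topic is exported to
        "ttl": 30 * 24 * 3600,             # Time To Live in the stores, in seconds
    }
    resp = requests.post("http://datalake-admin:8080/topics", json=payload, timeout=10)
    resp.raise_for_status()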

...

  • Monitor selected topics, pull the data in real time, and insert it into Couchbase, one table for each topic, with the table name matching the topic name (a feeder sketch follows this list).

  • Data in JSON, XML, or YAML format is automatically converted into the native store schema; we may support additional formats. Data not in these formats is stored as a single string.

  • Provide a REST API for data query; applications can also access the data through the native API.

  • Couchbase supports running Spark directly on it, which allows complex analytics tools to be built. We will develop Spark analytics applications if needed.

  • Other ONAP components can take advantage of this to store their operational data. If we need to run heavy analytics jobs on historical data, we should separate the operational data from the historical data. Otherwise, the two can coexist, thanks to Couchbase's scalability.
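The sketch below shows the core of such a feeder: it pulls records from a DMaaP (Kafka) topic and upserts them into a Couchbase bucket named after the topic, keeping JSON as structured documents and anything else as a single string. Host names, credentials, the bucket setup, and the document key scheme are assumptions; it uses the kafka-python and Couchbase Python SDKs.

    # Illustrative feeder loop: DMaaP (Kafka) topic -> Couchbase bucket.
    # Hosts, credentials, and the key scheme are assumptions.
    import json
    import uuid

    from kafka import KafkaConsumer
    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions

    TOPIC = "unauthenticated.SEC_FAULT_OUTPUT"  # example topic name

    cluster = Cluster.connect(
        "couchbase://couchbase-host",
        ClusterOptions(PasswordAuthenticator("user", "password")))
    collection = cluster.bucket(TOPIC).default_collection()  # one bucket per topic

    consumer = KafkaConsumer(TOPIC, bootstrap_servers="dmaap-kafka:9092")
    for record in consumer:
        raw = record.value.decode("utf-8")
        try:
            doc = json.loads(raw)    # JSON keeps its native structure
        except ValueError:
            doc = {"rawdata": raw}   # non-JSON payloads stored as a single string
        collection.upsert(str(uuid.uuid4()), doc)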

...

  • Monitor selected topics, pull the data in real time, and insert it into Druid, one datasource for each topic, with the datasource name matching the topic name.

  • Extract the dimensions and metrics from JSON data, and pre-configure the Druid settings for each datasource, which are customizable through a web interface (a sketch follows this list).

  • Integrate Apache Superset for data exploration and visualization, and provide pre-built interactive dashboards.

  • Integrate Grafana for time series analytics.
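To illustrate the dimension/metric extraction, the sketch below derives them from a flat sample JSON event: numeric fields become metrics, everything else becomes dimensions. The sample fields and the classification rule are assumptions for illustration; real events may be nested and would need flattening first.

    # Hedged sketch: classify fields of a flat JSON event into Druid
    # dimensions (non-numeric) and metrics (numeric). Sample data is made up.
    import json

    sample = json.loads(
        '{"eventName": "Fault_Example", "sequence": 7, '
        '"severity": "CRITICAL", "delay": 2.5}')

    def is_numeric(v):
        return isinstance(v, (int, float)) and not isinstance(v, bool)

    dimensions = [k for k, v in sample.items() if not is_numeric(v)]
    metrics = [k for k, v in sample.items() if is_numeric(v)]

    print("dimensions:", dimensions)  # ['eventName', 'severity']
    print("metrics:", metrics)        # ['sequence', 'delay']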

...