...
Attachments (including meeting recording)
Chat Server
Private group aai-dev on the Rocket.Chat server: http://onap-integration.eastus.cloudapp.azure.com:3000/group/aai-dev
Agenda Items
START RECORDING
aai-cassandra performance issues (Keong Lim)
Michael O'Brien has documented performance issues in aai-cassandra:
hector has discovered that the stress test jar (liveness probe?) in aai-cassandra is hammering the CPU/RAM/HD on the VM that AAI is on. This breaks the etcd cluster (not the latency/network issues we suspected might be causing pod rescheduling). Is there something that should be tweaked in the AAI config? Or is there documentation on the recommended setup to run the VM? I'll come to the next AAI meeting (it conflicts with the POMBA meeting).
Schema Service
Discuss the Schema Microservice
11th Oct: Suggested Use Case Proposals for Dynamic AAI Schema Changes based on CCVPN usecase experience
1st Nov: William Reehil, Robby Maharajh and Venkata Harish Kajur will review the requirement updates and research the open questions so that a final draft can be prepared for implementation
8th Nov: Added AAI Schema Service Use Case Proposals for discussion and planning
15th Nov: Reviewed the Requirements section in AAI Schema Service again.
William Reehil wrote an introduction to (proposal for?) A&AI GraphGraph
What must the solution provide?
- a generic interface to interact with the schema and edge information, so it can be accessed by end users and microservices
- ability to be configured with any schema given in the set formats (JSON for edge rules, OXM for schema)
- ability to easily communicate to an end user a node type’s attributes and edge rules when provided with a node type as input
Looks like an API to reflect on the schema from an instance in the database.
Is there some overlap with the AAI Schema Services?
Is this leveraging AAI Schema Services e.g. as client? proxy? facade? implementation detail?
Is it an alternative to AAI Schema Services?
Update for Keong Lim: I do see an overlap with the schema service, mainly for the retrieval of the data, but GraphGraph will offer more on top of that (UI, NLP); at the time of the GraphGraph POC code there was no schema service. Implementation details can be discussed on our call: how exactly to leverage the schema service and the POC code, and where the core logic for this functionality resides.
15th Nov: Reviewed A&AI GraphGraph and agreed that there is now overlap with AAI Schema Service that should be addressed in updating the POC version.
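For discussion, here is a purely hypothetical sketch of the kind of read-only interface such a tool might expose; all names below are invented for illustration and do not come from the POC code. Whether this sits on top of the AAI Schema Service as a client, proxy or facade is exactly the open question above.

```java
// Hypothetical sketch only: what a schema-reflection API such as GraphGraph
// might expose. None of these type or method names come from the actual POC.
import java.util.List;
import java.util.Map;

interface SchemaExplorer {

    /** Attributes (name -> type) defined for a node type, e.g. "pserver". */
    Map<String, String> attributesOf(String nodeType);

    /** Edge rules in which the node type participates (labels, directions, cardinality). */
    List<String> edgeRulesOf(String nodeType);

    /** Node types reachable from the given node type in one hop. */
    List<String> neighboursOf(String nodeType);
}
```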
In the copyright header:
/**
* ============LICENSE_START=======================================================
* org.onap.aai
* ================================================================================
* Copyright © 2017-2018 AT&T Intellectual Property. All rights reserved.
If there is a company other than AT&T, the build fails, saying the license header is wrong.
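For reference, other ONAP repos typically list additional copyright holders as extra lines in the same block, roughly as in the hedged example below. The second company line and the Apache text shown here are illustrative; whether the aai build's header check accepts this layout depends on the license template configured in the build, which is the problem reported above.

```java
/**
 * ============LICENSE_START=======================================================
 * org.onap.aai
 * ================================================================================
 * Copyright © 2017-2018 AT&T Intellectual Property. All rights reserved.
 * Copyright © 2018 Example Company. All rights reserved.  (hypothetical second holder)
 * ================================================================================
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * ...
 * ============LICENSE_END=========================================================
 */
```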
Looking at AAI usage in OOF - HPA guide for integration testing by Dileep Ranganathan, wondering whether there is a better way to bootstrap AAI test data?
Generating AAI data
Note: Required only if the Multicloud has no real cloud-regions and HPA discovery cannot happen.
If the Multicloud team has data for creating the cloud-region but does not have the HPA data, then update the existing data with flavors that include HPA.
- Import the postman collection CASABLANCA_AAI_postman.json
- To add/remove HPA Capabilities edit the flavors section in the body of PUT Cloud-Region{x}
- Once all the necessary edits are done, use postman to add the complex and cloud-regions in the order specified below
(snip screenshot of specific sequence)
- Use the GET requests to verify the data.
(snip screenshot of specific sequence)
Similarly, Scott Seabolt and J / Joss Armstrong wrote for APPC Sample A&AI Data Setup for vLB/vDNS for APPC Consumption and Script to load vLB into AAI:
The below put_vLB.sh script can be used to submit the vLB data to A&AI in order to run ConfigScaleOut use case. This script and referenced JSON files are used on an AAI instance where the cloud-region and tenant are already defined.
Similarly, there are a couple of related Jira tickets.
One for VIM: How-To: Register a VIM/Cloud Instance to ONAP
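To show what a scripted bootstrap could look like, as an alternative to the manual postman steps above, here is a minimal sketch. The host, port, schema version, demo credentials and payload content are assumptions based on a typical OOM deployment and the standard AAI REST API, and need to be checked against the target installation.

```java
// Minimal sketch of scripted AAI bootstrap: PUT one cloud-region.
// Host, port, credentials, schema version and payload are assumptions for
// illustration only. A default HttpClient will also reject AAI's self-signed
// certificate; TLS trust setup is omitted here.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class AaiBootstrap {
    public static void main(String[] args) throws Exception {
        String base = "https://aai.onap:8443/aai/v14";   // assumed service host and schema version
        String path = "/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne";
        String body = """
            {
              "cloud-owner": "CloudOwner",
              "cloud-region-id": "RegionOne",
              "cloud-type": "openstack",
              "cloud-region-version": "ocata"
            }
            """;
        String auth = Base64.getEncoder().encodeToString("AAI:AAI".getBytes()); // assumed demo credentials

        HttpRequest request = HttpRequest.newBuilder(URI.create(base + path))
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                .header("X-FromAppId", "bootstrap-script")
                .header("X-TransactionId", "bootstrap-1")
                .header("Authorization", "Basic " + auth)
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

A script like this could live alongside the schema in a CI/CD job, which would address the maintenance and version-upgrade concerns listed below.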
Potential issues:
- fragility of static import data file w.r.t. schema changes and version upgrades for each ONAP release?
- how "common" is this knowledge, i.e. what to load, where to get it, who else should be using it, etc?
- should it be automated/scripted, rather than manual steps to bootstrap?
- should it be a simulator program or test harness, rather than a static data file?
- should it reside within AAI CI/CD jobs for maintenance and upgrade of schema versions?
- who maintains the data itself? Is there a "data repository" which can be delegated to other teams, e.g. like documentation repository links in git?
- how many other teams have similar private stashes of AAI bootstrap data?
Under OOF Homing and Allocation Service (HAS) section, Dileep Ranganathan wrote about Project Specific enhancements:
Optimize - AAI cache
- Use MUSIC or an alternative in-memory cache such as Redis?
- Optimize flavor retrieval from A&AI and cache the information if necessary
Similar to the "AAI too slow for Holmes" item below, this introduction of extra caching of AAI data is a worrisome development and a sad indictment of the performance of the system architecture.
What can we do about this?
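For context, the kind of time-bounded caching being proposed usually reduces to something like the sketch below (hypothetical names, no MUSIC or Redis specifics). The TTL is exactly where the staleness worry comes from: every answer may be up to one TTL out of date.

```java
// Hedged sketch of the time-bounded caching idea. Entirely hypothetical; the
// staleness window (TTL_MILLIS) is the architectural concern noted above.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class TtlCache<K, V> {
    private record Entry<V>(V value, long loadedAtMillis) {}

    private static final long TTL_MILLIS = 60_000;        // arbitrary example TTL
    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final Function<K, V> loader;                  // e.g. a call to the A&AI flavors API

    TtlCache(Function<K, V> loader) { this.loader = loader; }

    V get(K key) {
        Entry<V> e = entries.get(key);
        if (e == null || System.currentTimeMillis() - e.loadedAtMillis() > TTL_MILLIS) {
            e = new Entry<>(loader.apply(key), System.currentTimeMillis());  // refresh from A&AI
            entries.put(key, e);
        }
        return e.value();                                  // may be up to TTL_MILLIS stale
    }
}
```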
Under the POMBA Common Model, Geora Barsky and Sharon Chisholm discuss objects that seem to overlap with the AAI schema:
- Network has direct relationships with PNFs and physical links
- PNF has direct relationship with p-interfaces and l-interfaces
Curious to know what is the relationship between the POMBA Common Model and the AAI schema? There seems to be an overlap in these object definitions and relationships.
Is POMBA a potential client for AAI schema services?
15th Nov: As per Geora's and Sharon's comments:
POMBA is a client of the AAI APIs. It retrieves certain objects from AAI and transforms them into the POMBA common model, which aims to represent a flat structure of the service instance.
and
The POMBA model enables us to normalize data from different data sources to facilitate auditing. To save time and work, we often use A&AI as our starting point, but conceptually, this model can have more than is currently or makes sense to have in A&AI.
So, POMBA needs to evolve independently of AAI, even though it has common ancestry.
As we are having discussions about the AAI Schema Services with a view to future dynamic schema updates, e.g. via SDC modelling, we will need to be aware of the downstream impacts of such a change.
I think this needs to be added to the use cases for AAI Schema Services (and GraphGraph?) behaviours.
While helping the UUI team to debug a family of related Jira issues, the following two variations of the request payload came up:
{
  ext-aai-network: {
    "aai-id": "VDF",
    "schema-version": "version-1",
    "resource-version": "1542082337501"
  }
}
vs
{
  "aai-id": "VDF",
  "schema-version": "version-1",
  "resource-version": "1542082337501"
}
Initially, I thought this could be due to XML-to-JSON translation error, but it could also have been a copy-paste from the output of a GET, e.g.
> GET /aai/v14/network/ext-aai-networks HTTP/1.1
{
  "ext-aai-network": [
    {
      "aai-id": "createAndDelete",
      "schema-version": "version-1",
      "resource-version": "1542247826990"
    },
    {
      "aai-id": "aaiId-2",
      "schema-version": "version-2",
      "resource-version": "1542029867153"
    }
  ]
}
In the spirit of Postel's Law ("be liberal in what you accept"), could/should AAI accept both variations of the input data above?
There could be additional validation that the element name inside the request body matches the element name in the URL of the API (similar validation is already performed for the key ID value).
It would also allow for simple methods of data transfer/migration, where AAI output is directly accepted as AAI input.
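As a rough illustration (not existing AAI code), a lenient request handler could accept both forms and still perform the element-name check suggested above. Gson's tree model is used here purely for brevity; the helper and its names are hypothetical.

```java
// Hypothetical server-side helper: accept either the wrapped form
// { "ext-aai-network": { ... } } or the bare object { ... }, and fall back to
// the bare form when the single top-level key does not match the URL resource.
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

final class LenientBody {

    /** @param resourceName element name taken from the request URL, e.g. "ext-aai-network" */
    static JsonObject unwrap(String requestBody, String resourceName) {
        JsonObject root = JsonParser.parseString(requestBody).getAsJsonObject();
        if (root.entrySet().size() == 1 && root.has(resourceName)
                && root.get(resourceName).isJsonObject()) {
            return root.getAsJsonObject(resourceName);   // wrapped form: unwrap it
        }
        return root;                                     // bare form: use as-is
    }
}
```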
Bin Yang and Lianhao Lu wrote in a Jira ticket:
HPA telemetry data collection and make it persistent in A&AI, from which OOF can leverage during its decision making process.
and
1. Multi-cloud to collect the data from time-series data services like Prometheus (http://prometheus.io) or openstack Gnocchi, and push them to A&AI based on the data recording & aggregation rules.
and
The reason why we propose here is that VES mechanism doesn't store the telemetry data into A&AI. And OOF now can only get those kind of data from A&AI.
Some concerns:
- how much additional load will this place on AAI?
- will AAI cope with this load?
- is AAI suitable for "time-series data"?
- is "telemetry data" considered to be "active & available inventory"?
- should OOF access the telemetry/time-series data via other means (not AAI)?
Dénes Németh wrote in a Jira ticket:
I think it would be good to answer what the meaning of the field is (a collection of PEMs of the CA xor a URL).
Questions:
1. Is AAI intended to strictly prescribe how the fields are used and what contents are in the values?
2. Or does AAI simply reflect the wishes of all the client projects that use it to store and retrieve data?
Even if (1) is true, AAI is not really in any position to enforce how clients use the data, so really (2) is always true and we need to consult the original producers of the data and the ultimate consumers of the data to document their intended meanings.
How do we push to have documentation on the purpose and meaning of the fields in AAI?
Where does all this documentation go?
Should the documentation be backed up by validation code?
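On the last question, a validator backing the documented meaning could be as small as the sketch below. It is purely illustrative of what "documentation backed by validation code" might mean for this particular field; it is not part of any existing AAI schema or repo.

```java
// Hypothetical validator for a field documented as "a collection of CA PEMs XOR a URL".
// Purely illustrative; the heuristics here are examples, not a specification.
import java.net.URI;

final class CaInfoFieldValidator {

    static boolean isValid(String value) {
        if (value == null || value.isBlank()) {
            return false;
        }
        boolean looksLikePem = value.contains("-----BEGIN CERTIFICATE-----");
        boolean looksLikeUrl = isUrl(value);
        return looksLikePem ^ looksLikeUrl;   // exactly one interpretation must apply
    }

    private static boolean isUrl(String value) {
        try {
            return URI.create(value.trim()).isAbsolute();
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}
```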
1st Nov 2018
Why is the pod for HAProxy not named? (It is hard to figure out that there is a proxy.) It is also unclear how and where it is logging.
James Forsyth will create Jira tasks to 1. have the pod named and 2. add logging to the proxy.
The modelling team is having Service Instance thoughts (Chesla Wechsler), which will affect the AAI schema.
Also referred from comments on ONAP R4+ Service Modeling Discussion Calls
9) "vhn-portal-url"? "Bandwidth", "QoS", "SLA", etc.: attributes that not all services need but that still need to be stored on certain service instances: stored as a schemaless field on the service-instance vertex (Chesla will follow up). (My concerns: according to the call, is it OK if we set a "global type of service" and a "customized type of service", then map them to an internal descriptor, so that A&AI's model only stores the global type in the service instance's schema, but stores the customer-facing attributes of the service in a schemaless way? Chesla Wechsler, Kevin Scaggs, Andy Mayer)
See also Modeling 2018-11-13
The service-instance already uses a "metadata" relationship, which can store an arbitrary list of key-value pairs; perhaps AAI should also extend the use of the "properties" element or the "extra-properties" element, which are likewise arbitrary lists of name-value pairs (a sketch of the metadata structure is shown below).
15th Nov: Having seen Chesla's presentation, it should be called "Model-driven schema" rather than "schemaless" behaviour, since the idea is that the changes are controlled by SDC modelling. Seems aligned to the eventual goal in AAI Schema Service Use Case Proposals and AAI Schema Service.
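For reference, a minimal sketch of how ad-hoc attributes could ride on a service-instance via the existing metadata list. The metaname/metaval field names are from memory of the AAI schema and should be checked against the current OXM; the values are invented examples.

```java
// Hedged sketch: a service-instance payload carrying ad-hoc attributes in its
// metadata list. Field names "metaname"/"metaval" are from memory of the AAI
// schema and must be verified against the current OXM before use.
String serviceInstanceJson = """
    {
      "service-instance-id": "example-si-001",
      "service-instance-name": "example",
      "metadata": {
        "metadatum": [
          { "metaname": "vhn-portal-url", "metaval": "https://example.invalid/portal" },
          { "metaname": "bandwidth",      "metaval": "100Mbps" }
        ]
      }
    }
    """;
```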
The security subcommittee has recommended that teams move away from Jackson, and will be presenting alternatives and asking for an assessment from each project. Our team will need to do an analysis; this would not be trivial, especially given how many of our repos are impacted. As of now this would be a very high LOE for the team; we need to understand the recommendation from SECCOM before we can provide better details on what the LOE would be.
Updated: Using Google gson vs FasterXML Jackson
10th Oct: Present to Seccom meeting
15th Oct: Present to PTL meeting
31st Oct: Debatable whether the cost of swapping Cassandra and changing code is worth the benefit of removing Jackson from the vulnerabilities list.
On-Hold until James Forsyth consults with other PTLs: PTL 2018-11-05
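To give a feel for the per-call-site effort, the basic read/write calls map across fairly directly; the real cost sits in annotations, custom (de)serializers, streaming APIs and transitive dependencies. A minimal sketch with a made-up POJO:

```java
// Minimal sketch of the mechanical part of a Jackson -> Gson swap, using a
// made-up POJO. The real LOE is in annotations, custom (de)serializers,
// streaming usage and transitive dependencies, not in these two calls.
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.gson.Gson;

public class JsonSwapExample {
    public static class Pserver {           // made-up example type
        public String hostname;
        public boolean inMaint;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"hostname\":\"pserver-1\",\"inMaint\":false}";

        // Jackson (current)
        ObjectMapper mapper = new ObjectMapper();
        Pserver fromJackson = mapper.readValue(json, Pserver.class);
        String outJackson = mapper.writeValueAsString(fromJackson);

        // Gson (candidate replacement)
        Gson gson = new Gson();
        Pserver fromGson = gson.fromJson(json, Pserver.class);
        String outGson = gson.toJson(fromGson);

        System.out.println(outJackson);
        System.out.println(outGson);
    }
}
```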
Guangrong Fu mentioned AAI in Baseline Measurements based on Testing Results:
- Cache the AAI data and refresh them periodically so that Holmes won't have to make an HTTP call to AAI every time it tries to correlate one alarm to another.
The problem for caching is how to know when to update the cached data. Even though the access time may be fast for Holmes, the risk is using out-of-date data, so the correlations will be wrong anyway. Also, duplicating the AAI data outside of AAI is probably a bad architectural decision. Making AAI faster for these use cases would be better.
Has there been a performance analysis of where the time is spent? Could it help to use ElasticSearch (e.g. as in sparky)? Should Holmes have a batch interface to get more AAI data in fewer calls? Or a better correlation API that results in fewer calls?
31st Oct: https://lists.onap.org/g/onap-discuss/topic/27805753
1st Nov:
- Guangrong Fu will try custom queries for the queries that took too long to return (see the sketch after this list)
- The hardware (mainly storage) influences the query speed; we need to find out what hardware the speed test was conducted on (Guangrong Fu will provide HW specs)
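A hedged sketch of the custom-query idea, replacing many per-alarm GETs with one traversal call. The verb, path, format parameter, stored-query name and body shape below are from memory of the AAI traversal API and must be verified against the installed release's documentation.

```java
// Hedged sketch: one AAI custom query instead of many per-alarm GETs.
// Verb (PUT), path, format, stored-query name and body shape are assumptions
// to be checked against the installed AAI version. Authentication and TLS
// trust setup (self-signed certificate) are omitted for brevity.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AaiCustomQueryExample {
    public static void main(String[] args) throws Exception {
        String body = """
            {
              "start": ["cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/tenant/example-tenant/vservers/vserver/example-vserver"],
              "query": "query/pserver-from-vserver"
            }
            """;

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://aai.onap:8443/aai/v14/query?format=simple"))
                .header("Content-Type", "application/json")
                .header("X-FromAppId", "holmes-test")
                .header("X-TransactionId", "query-1")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```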
There are two types of logging in the services:
- one reads from EELFManager
- the other uses Logger log = Logger.getLogger( ...
Is that correct? Shouldn't there be just one type? (See the sketch after this item.)
1st Nov:
After Casablanca release investigate logging guidelines and figure out what library to use in order to unify logging within A&AI
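For reference, the two patterns look roughly like this. The class is illustrative, not taken from a specific AAI repo, and the second pattern may be log4j rather than JDK logging in some repos; JDK logging is shown here.

```java
// The two logging styles currently mixed in the code base (illustrative class,
// not from a specific AAI repo). Unifying on one of them is the follow-up task.
import com.att.eelf.configuration.EELFLogger;
import com.att.eelf.configuration.EELFManager;

public class LoggingStyles {

    // Style 1: EELF, the ONAP common logging framework
    private static final EELFLogger eelfLogger =
            EELFManager.getInstance().getLogger(LoggingStyles.class);

    // Style 2: a plain logger obtained via Logger.getLogger(...) (JDK logging shown)
    private static final java.util.logging.Logger plainLogger =
            java.util.logging.Logger.getLogger(LoggingStyles.class.getName());

    void doWork() {
        eelfLogger.info("doing work (EELF style)");
        plainLogger.info("doing work (plain logger style)");
    }
}
```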
Could we disable unused (i.e. not integrated) A&AI web services, so that the deployment is faster and the resource footprint is smaller? e.g. Champ (any other ws?)
Motivation: Decrease the resource footprint for A&AI (ONAP) deployments
Idea: we could support two different deployments: 1. a full (normal) deployment and 2. a barebones deployment. The point of the barebones deployment would be to deploy only the essential services necessary for the proper functioning of A&AI (leaving out services like cacher, sparky and graphadmin, having 1 Cassandra node instead of 3 or 5, etc.).
In order to reduce hardware/cloud costs (mainly the memory footprint) it could be beneficial to support a minimalistic A&AI deployment.
1st Nov:
Venkata Harish Kajur and a former team member (account deleted) will investigate how to disable/enable charts in A&AI so we can create a core group of pods that handles the use cases and then an extended group with all the services. Consider a group of unused/unintegrated services (like Champ). Consider other possible groups (like the GUI?).
- Who is responsible for the project?
- What is the roadmap for the project?
- Who will do the integration?
Dublin AAI changes in support of 5G use cases.
Link for presentation: 5G - PNF Plug and Play (Casablanca carry-over items)