The cloud-region schema element is unusual in that it has a two-part key: "cloud-owner" and "cloud-region-id". Few other schema elements use this pattern ("ctag-pool", "service-capability" and "route-target" are three others, out of more than 100 schema elements).
Is it possible to enhance the error message to indicate that part of the key value is missing from the relationship-data?
Is it time to deprecate the relationship-data and switch over to using the related-link only?
Is there any modeling guidance that would steer new designs away from using multi-part key for schema elements?
Are there other caveats to using the multi-part key design for schema elements?
Can we get feedback from Chandra Cinthala on the key design for multi-part keys and whether this will be more common going forward?
From: CINTHALA, CHANDRA [mailto:cc1196@att.com]
Sent: Tuesday, April 30, 2019 12:16 AM
To: Keong Lim <Keong.Lim@huawei.com>; FORSYTH, JAMES <jf2512@att.com>
Cc: AGGARWAL, MANISHA <amanisha@att.com>
Subject: Re: [confluence] Keong Lim has assigned tasks to you in "2019-05-02 AAI Developers Meeting"
Keong,
I think we have no plans to deprecate the relationship-data option in the A&AI relationship payload.
It's another option for the client to specify the relationship.
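To make the two-part-key discussion concrete, here is a sketch (values are hypothetical) of a relationship payload targeting cloud-region, plus a small check of the kind the enhanced error message could perform — reporting exactly which part of a multi-part key is missing from the relationship-data:

```python
# Hypothetical relationship payload targeting cloud-region, whose key has two
# parts; omitting either relationship-data entry is what triggers the
# unhelpful error discussed above. All values are made up for illustration.
relationship = {
    "related-to": "cloud-region",
    "relationship-data": [
        {"relationship-key": "cloud-region.cloud-owner",
         "relationship-value": "CloudOwner"},
        {"relationship-key": "cloud-region.cloud-region-id",
         "relationship-value": "RegionOne"},
    ],
}

def missing_key_parts(rel, required):
    """Return the parts of a multi-part key absent from relationship-data."""
    present = {d["relationship-key"] for d in rel.get("relationship-data", [])}
    return sorted(required - present)

required = {"cloud-region.cloud-owner", "cloud-region.cloud-region-id"}
print(missing_key_parts(relationship, required))  # [] — both key parts present
```

An error message built from `missing_key_parts` could then name the absent part (e.g. "cloud-region.cloud-region-id") instead of failing opaquely.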
The below put_vLB.sh script can be used to submit the vLB data to A&AI in order to run ConfigScaleOut use case. This script and referenced JSON files are used on an AAI instance where the cloud-region and tenant are already defined.
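As a rough illustration of what such a script does, the sketch below builds the resource URL for a tenant PUT under an existing cloud-region. The host, API version and identifiers are placeholders, not the actual values from put_vLB.sh:

```python
# Hedged sketch of the resource path a script like put_vLB.sh PUTs to.
# Endpoint, version and identifiers are assumptions for illustration only;
# the real request would also carry X-FromAppId/X-TransactionId headers.
AAI_BASE = "https://aai.onap:8443/aai/v14"  # hypothetical host and version

def tenant_url(cloud_owner, cloud_region_id, tenant_id):
    return (f"{AAI_BASE}/cloud-infrastructure/cloud-regions/cloud-region/"
            f"{cloud_owner}/{cloud_region_id}/tenants/tenant/{tenant_id}")

print(tenant_url("CloudOwner", "RegionOne", "tenant-123"))
```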
fragility of static import data file w.r.t. schema changes and version upgrades for each ONAP release?
how "common" is this knowledge, i.e. what to load, where to get it, who else should be using it, etc?
should it be automated/scripted, rather than manual steps to bootstrap?
should it be a simulator program or test harness, rather than a static data file?
should it reside within AAI CI/CD jobs for maintenance and upgrade of schema versions?
who maintains the data itself? Is there a "data repository" which can be delegated to other teams, e.g. like documentation repository links in git?
how many other teams have similar private stashes of AAI bootstrap data?
does it need to be published at a stable URL to avoid linkrot?
Possible solution/action:
Look at the examples API and possibly enhance it to get the desired behavior
Collect all the known data samples, commit to test-config repo, update the teams/wiki to point to test-config repo instead of keeping private stash of AAI data
I think it would be good to answer what the meaning of the field is (a collection of PEMs of the CA xor a URL)
Questions:
1. Is AAI intended to strictly prescribe how the fields are used and what contents are in the values?
2. Or does AAI simply reflect the wishes of all the client projects that use it to store and retrieve data?
Even if (1) is true, AAI is not really in any position to enforce how clients use the data, so really (2) is always true and we need to consult the original producers of the data and the ultimate consumers of the data to document their intended meanings.
How do we push to have documentation on the purpose and meaning of the fields in AAI?
Where does all this documentation go?
Should the documentation be backed up by validation code?
if I had some AAI data with attributes that are strings but nominally contain date/timestamps, is there a way to query for a particular range of values?
is there a way to do partial match? regex? PUT /aai/v13/query?format=raw
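Since the attributes are plain strings, one workaround (if no server-side range query is available) is to fetch candidate records and filter the timestamps client-side. A minimal sketch, assuming an ISO-8601-like format for the string values:

```python
from datetime import datetime

# Client-side range filter for string attributes that nominally hold
# timestamps. The format string is an assumption; real AAI data may differ.
def in_range(ts, start, end, fmt="%Y-%m-%dT%H:%M:%SZ"):
    return start <= datetime.strptime(ts, fmt) <= end

records = ["2019-04-01T00:00:00Z", "2019-05-15T12:00:00Z"]
start, end = datetime(2019, 5, 1), datetime(2019, 6, 1)
print([r for r in records if in_range(r, start, end)])  # ['2019-05-15T12:00:00Z']
```

This is obviously wasteful for large result sets, which is part of why the question of native range/regex query support matters.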
Could we disable unused (i.e. not integrated) A&AI web services, so that the deployment is faster and the resource footprint is smaller? e.g. Champ (any other ws?)
Motivation: Decrease the resource footprint for A&AI (ONAP) deployments
Idea: we could support 2 different deployments: 1. a full (normal) deployment and 2. a barebones deployment. The point of the barebones deployment would be to deploy only the essential services necessary for the proper functioning of A&AI (leaving out services like cacher, sparky and graphadmin, having 1 Cassandra node instead of 3 or 5, etc.).
In order to reduce hardware/cloud costs (mainly the memory footprint) it could be beneficial to support a minimalistic A&AI deployment.
1st Nov:
Venkata Harish Kajur - investigate how to disable/enable charts in A&AI so we can create a core group of pods which handles the use cases, and then an extended group with all the services. Consider a group of unused/unintegrated services (like Champ). Consider other possible groups (like GUI?)
James Forsyth creates a JIRA ticket to define the list of AAI subprojects and create the categories (essential, full "experience") for the OOM deployment
The schema-service is ready. Currently it provides file-sharing capabilities in terms of schema/edgerule files.
In order for GraphGraph to take advantage of the schema parsing/processing in schema-service additional abstractions have to be implemented on top of the crude file2string functionality currently in schema-service.
Venkata Harish Kajur will ask Manisha Aggarwal if the current functionality of the schema-service is the final version for Dublin and if there will be further enhancements in next releases.
list of all schema nodes/items (like vserver, tenant, p-interfaces..) for example on a REST path /schemas/{schema}/nodes
all relevant attributes of a given node/item for example on REST path /schemas/{schema}/nodes/{node}
edges/relationships with their attributes between schema nodes/items (for example on REST path /schemas/{schema}/edges where you specify a "from" "to" schema items as query params)
subgraph of the schema, where you specify 1. initial (root) items/node (like tenant or vserver) 2. schema version and 3. number of parent/cousin/child hops from the initial item/node
all paths in a given schema graph between 2 items/nodes (like vserver and tenant) for a given schema version
edges in the schema graph should be composed of edges in the schema file + edges created from the edgerules file
edges should contain basic attributes when delivered via the subgraph call (like parent/child relationship and important properties from edgerules) and have additional (or all) attributes when queried via the /schemas/{schema}/edges REST endpoint.
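The proposed endpoints above can be sketched as a small route table. These paths are the proposals from this list, not an implemented schema-service API:

```python
# Sketch of the GraphGraph-facing endpoints proposed above (not implemented;
# paths and parameter names are taken from the proposal, filled-in values
# are hypothetical examples).
PROPOSED_ROUTES = {
    "nodes": "/schemas/{schema}/nodes",
    "node":  "/schemas/{schema}/nodes/{node}",
    "edges": "/schemas/{schema}/edges?from={from}&to={to}",
}

def render(route, params):
    """Fill a proposed route template with concrete path/query values."""
    path = PROPOSED_ROUTES[route]
    for key, value in params.items():
        path = path.replace("{" + key + "}", value)
    return path

print(render("node", {"schema": "v14", "node": "vserver"}))
print(render("edges", {"schema": "v14", "from": "vserver", "to": "tenant"}))
```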
20. Mar 2019:
Open questions for schema-service:
what is the current implemented functionality?
what are the business use-cases in ONAP for schema-service? Description of functionality in relation to other services/projects is needed. In other words who needs it and why?
if no business use-cases can be formulated we should consider removing schema-service from A&AI and replacing it with standard file-sharing mechanisms.
AAF will generate certificates to the be used by the containers at startup; AAI services should use the run-time generated certs instead of the ones that are in the repos or oom charts.
In dublin the services will mount a volume with certificates. This is on the roadmap for Dublin as a feature.
is this for all services and/or HAProxy?
Where are the certificates coming from (OOM/gerrit/generated by AAF)?
James Forsyth will ask Jonathan Gatham when the certificate init image is going to be available in ONAP and whether it is documented
how to minimise impact of the transition from pnf-name as unique to pnf-id as unique key?
would the v14 URL be different from the v15 URL? would both paths be equally supported for GET/PUT/etc?
what forwards-compatibility or backwards-compatibility will be supported?
how to migrate forwards or backwards database versions, ONAP versions, etc, across this transition?
who is going to implement it? Test it?
what is the impact of this not going ahead?
William LaMont will check for an existing migration utility that handles this use case (changing the key from one existing attribute to another). Changes to the pnf object in all OXM versions would be needed, plus a migration similar to what was done in UrlMigration but limited to the pnf node-type to update the aai-uri, and a schema change to add an index on pnf-id.
James Forsyth will socialize the breaking change on the PNF in the next PTL call so clients can prepare to do a search for ?pnf-name=${pnf-name} instead of /pnfs/pnf/${pnf-name}. They also need to handle doing the PUT operation differently - Added to the PTL agenda for 2019-02-19
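The client-side change can be sketched as the before/after URL shapes below. The version numbers (v14 old, v15 new) match the question above but are otherwise assumptions:

```python
# Before/after the pnf key change discussed above. The v14/v15 split is an
# assumption based on the question raised; pnf-demo is a made-up name.
def pnf_url_by_name(name):
    """Old style: pnf-name is the key, so it appears in the resource path."""
    return f"/aai/v14/network/pnfs/pnf/{name}"

def pnf_url_query_by_name(name):
    """New style: pnf-id is the key; look up by pnf-name via query param."""
    return f"/aai/v15/network/pnfs?pnf-name={name}"

print(pnf_url_by_name("pnf-demo"))
print(pnf_url_query_by_name("pnf-demo"))
```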
AAI team wanted to get notified of AAI Cassandra issues automatically. Can we set up Nagios or an equivalent to monitor both rancher/k8 and the applications for rancher/k8 issues?
Keep an eye out for new issues!
This should be part of a larger A&AI monitoring and failure prevention initiative!
Use MUSIC or any other alternative in memory caching like Redis etc?
Optimize flavor retrieval from A&AI and Cache the information if necessary
See also: OPTFRA-268 / OPTFRA-291
Similarly to the "AAI too slow for Holmes" item below, this introduction of extra caching of AAI data is a worrisome development and sad indictment of the performance of the system architecture.
For holmes, we could possibly create a custom query to address it.
What can we do about this?
Would the AAI Cacher (AAI-1337) help to improve performance?
2021
MultiCloud usage of AAI for HPA telemetry/time-series data to OOF
Collect HPA telemetry data and make it persistent in A&AI, from which OOF can draw during its decision-making process.
and
1. Multi-cloud to collect the data from time-series data services like Prometheus (http://prometheus.io) or openstack Gnocchi, and push them to A&AI based on the data recording & aggregation rules.
and
The reason why we propose this here is that the VES mechanism doesn't store the telemetry data into A&AI, and OOF can currently only get that kind of data from A&AI.
Some concerns:
how much additional load will this place on AAI?
will AAI cope with this load?
is AAI suitable for "time-series data"?
is "telemetry data" considered to be "active & available inventory"?
should OOF access the telemetry/time-series data via other means (not AAI)?
AAI API latency (4~6 seconds per request as benchmarked in the CMCC lab) could be a problem
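A back-of-envelope load estimate helps frame the first two concerns. All numbers below are hypothetical assumptions, not measurements:

```python
# Rough write-load estimate if MultiCloud pushes telemetry into AAI.
# Every number here is a made-up assumption for illustration.
vms = 1000             # monitored VMs across cloud regions
metrics_per_vm = 10    # HPA metrics collected per VM
interval_s = 60        # push/aggregation interval in seconds

writes_per_s = vms * metrics_per_vm / interval_s
print(f"~{writes_per_s:.1f} sustained writes/sec into AAI")
```

Even with these modest assumptions the sustained write rate is far beyond what a 4~6 second per-request latency could absorb, which is why batching, a different store, or a non-AAI path for time-series data are worth considering.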
what would be the approach to back up an entire ONAP instance, particularly SDC, AAI, SDNC data? would it be a script with all the references to the helm deploy releases, or something that does a helm list and then for each entry does the ark backup?
What is the AAI strategy for backup and restore?
What is the overall ONAP strategy for backup and restore?
Should it be unified with the data migration strategy as per "Hbase to Cassandra migration" on 2018-11-14 AAI Meeting Notes?
James Forsyth will raise the topic of having backup and restore functionality in ONAP - whether it is feasible, on the roadmap, and what other PTLs think
Jimmy didn't directly raise the topic but there was movement - Keong Lim asked "if istio service mesh is a no-go, is there a replacement for secure onap communications? is backup/restore/upgradability included in s3p?"