
The GraphAdmin microservice is based on Spring Boot and deployed using Docker. It is primarily responsible for admin-level tasks such as taking snapshots of the graph database and running Gremlin queries.

The repository is found here - https://gerrit.onap.org/r/#/admin/projects/aai/graphadmin

The Jenkins job is found here - https://jenkins.onap.org/view/aai-graphadmin/

The DB tools from the resources and aai-core projects are moved to the new GraphAdmin MS; these include:

  • DataGrooming
  • DataSnapshot
  • DupeTool
  • ForceDeleteTool
  • GraphMLTokens
  • SchemaGenerator
  • SchemaMod
  • SchemaModInternal
  • UpdateEdgeTags


Migration code from the resources API is moved to the new GraphAdmin MS.

Gremlin queries previously served via the "query" endpoint are moved to the GraphAdmin MS under a new endpoint, "dbquery".
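As a rough illustration of the new endpoint, a Gremlin query could be sent to it with curl. The method, port, path, headers, and payload shape below are assumptions for sketch purposes only, not the confirmed contract of the endpoint - check the graphadmin API definition for the real one.

# Hypothetical call to the graphadmin "dbquery" endpoint - port, path, auth and payload are assumptions
curl -k -u AAI:AAI -X PUT \
  -H "Content-Type: application/json" -H "X-FromAppId: test" -H "X-TransactionId: test-1" \
  -d '{"gremlin": "g.V().has(\"aai-node-type\",\"tenant\").count()"}' \
  https://localhost:8449/aai/v1/dbquery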

The service interfaces with HBase/Cassandra via TinkerPop APIs using JanusGraph.

CreateDBSchema

Name - createDBSchema.sh

Purpose - This tool creates the graph schema for the database

Logging config - graphAdmin logback.xml

Logs - /opt/app/aai-graphadmin/logs/createDBSchema/error.log, metrics.log, debug.log

Output - on console

Usage:

To run CreateDBSchema, first take a snapshot and check its logs, then run the script and check the logs again.

docker-compose -f /opt/test-config/docker-compose.yaml run aai-graphadmin createDBSchema.sh <params>
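A minimal sketch of that sequence, assuming the same compose file and service name as above (the log path is inside the container unless the logs directory is mounted on the host):

# 1. Take a snapshot first and check its logs (see the Data Snapshot section below)
docker-compose -f /opt/test-config/docker-compose.yaml run aai-graphadmin dataSnapshot.sh
# 2. Create the graph schema
docker-compose -f /opt/test-config/docker-compose.yaml run aai-graphadmin createDBSchema.sh
# 3. Check the createDBSchema logs again
tail /opt/app/aai-graphadmin/logs/createDBSchema/error.log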


Data Grooming

Name - dataGrooming.sh

Purpose - This tool looks for data problems that need to be cleaned up for the database to operate properly. It is run automatically from a cron job once every few hours. If run with the "-autoFix" option, it will run once and, if it finds delete candidates, sleep for 7 minutes and run again; candidates that are found again on the second run are deleted automatically. Note that the "-autoFix" option is used when running from the cron. Any time delete candidates are found, an exception is thrown and logged; the error thrown is "AAI_6123::Bad Data found by DataGrooming Tool - Investigate". Whoever is monitoring AAI logs will need to alert Tier-4 if delete candidates are found and not cleaned up by the cron. The list of candidate deletes (nodes, edges, or both) in the output file can be edited and the script can then be run with that file as input; if it re-verifies the problem candidates, it will delete them.
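A hypothetical crontab entry for the automated run; the schedule, wrapper script, and log file path are assumptions and will vary per deployment:

# Hypothetical cron entry - grooming with -autoFix every 6 hours (schedule and paths are assumptions)
0 */6 * * * /opt/app/aai-graphadmin/execTool.sh dataGrooming.sh -autoFix >> /opt/app/aai-graphadmin/logs/misc/run_dataGrooming.log.$(date +\%Y-\%m-\%d) 2>&1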

Logging config - graphAdmin logback.xml

Logs - dataGrooming/error.log and dataGrooming/debug.log

Output - (in the /opt/app/aai-graphadmin/logs/data/dataGrooming directory): dataGrooming.FULL.YYYYMMDDHHMM.out or dataGrooming.PARTIAL.YYYYMMDDHHMM.out

Usage:

docker-compose -f /opt/test-config/docker-compose-app.yaml run aai-graphadmin dataGrooming.sh <params>


An optional input is a previous output file (a delete-candidates list), passed with "-f".

For example, to generate a list of delete candidates:

docker-compose -f /opt/test-config/docker-compose-app.yaml run aai-graphadmin dataGrooming.sh

To actually delete the candidates from a previously generated list:

docker-compose -f /opt/test-config/docker-compose-app.yaml run aai-graphadmin dataGrooming.sh -f dataGrooming.20180924.out



dataGrooming.20180924.out is created by the previous command (the one that generates the list of delete candidates) inside the container, in the /opt/app/aai-graphadmin/logs/data/dataGrooming/ folder.
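Since the candidates file lives inside the container, one way to review or edit it from the host is docker cp; the container name used here is an assumption:

# Copy the candidates file out of the container for review/editing (container name is an assumption)
docker cp aai-graphadmin:/opt/app/aai-graphadmin/logs/data/dataGrooming/dataGrooming.20180924.out .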

There are quite a few command line parameters that control what this tool does. They are all optional and are documented as comments in the script itself. Here is the list; for details on how to use them, see the comments at the top of the script, or the T-space write-up of the tool: "-f oldfileName", "-autoFix", "-sleepMinutes nn", "-edgesOnly", "-dontFixOrphans", "-maxFix", "-skipHostCheck", "-singleCommits", "-dupeCheckOff", "-dupeFixOn", "-timeWindowMinutes", "-skipEdgeCheck", "-ghost2CheckOff", "-ghost2FixOn".

NOTE - if the optional timeWindowMinutes parameter is used, the output file will have the text "PARTIAL" as part of its name; if the parameter is not used, the output file will have the text "FULL" in it.

I.e., output file names could look like: dataGrooming.FULL.201801171710.out or dataGrooming.PARTIAL.201801152110.out
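For example (a sketch only, assuming -timeWindowMinutes takes the number of minutes as its value), a partial run limited to recent changes with orphan fixing disabled might look like:

# Partial grooming run over the last 120 minutes, leaving orphans alone (parameter values are illustrative)
docker-compose -f /opt/test-config/docker-compose-app.yaml run aai-graphadmin dataGrooming.sh -timeWindowMinutes 120 -dontFixOrphans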


Sample error from the dataGrooming log that tells you a data problem needs to be investigated (and which file needs to be looked into):

2015-09-10T14:54:51-04:00|c6533401-0a33-4ea7-a039-d2fd2dfd6151||||AAIDATAGROOM:AAI:dataGrooming||INFO|||mtinjvmsdn30|135.25.186.202|DataGrooming|1762077|co=aaidbgen:ss=y:ec=0: >  Look at : [oam-network] ...:emsg=See file: [/opt/aai/logs/dataGrooming.201509101825.out] and investigate delete candidates.:ERROR=ErrorObject [errorCode=6123, errorText=Bad Data found by DataGrooming Tool - Investigate, restErrorCode=3000, httpResponseCode=Internal Server Error, severity=ERROR, disposition=5, category=4]|
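Whoever monitors the logs can watch for this condition by searching for the error code; the log path below follows the pattern used elsewhere on this page but may differ per deployment:

# Check the grooming logs for the AAI_6123 "bad data" error (log location is an assumption)
grep "AAI_6123" /opt/app/aai-graphadmin/logs/dataGrooming/error.log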


A sample of part of the output file is shown below. Note - besides the candidate list, there are other sections of data which should help us understand the delete candidates:

 ============ Summary ==============

Deleted this many delete candidates from previous run =  0

Total number of nodes looked at =  2617

Ghost Nodes identified = 5

Orphan Nodes identified =  0

Missing Dependent Edge (but not orphaned) node count = 0

Duplicates count =  0

MisMatching Label/aai-node-type count =  0

 ------------- Delete Candidates ---------

DeleteCandidate: Phantom Vid = [491749424]

DeleteCandidate: Phantom Vid = [147584]

DeleteCandidate: Phantom Vid = [98672]

DeleteCandidate: Phantom Vid = [164200]

DeleteCandidate: Phantom Vid = [491798944]

Below is a sample of the part of the output file that shows the detail for a pair of duplicate nodes. Note that it gives the details of both nodes, including whatever IN/OUT edges it finds. For this example, the tool could not determine which of the two nodes (note - some groups can have more than two nodes) is the best one to delete. You can tell because of these two lines:

           For this group of duplicates, could not tell which one to keep.

           >>> This group needs to be taken care of with a manual/forced-delete.


It is up to Tier-4 support to look at the duplicates in the list and make a determination. See the notes for the "DupeTool" for pointers on how to determine which duplicate node to delete. Note also - typically, the "forceDelete" tool must be used to delete nodes like this, since they cannot be reached via the normal REST API and so the "normal" delete tool cannot be used on them (a hedged sketch of such a forced delete follows the sample below).


 ------------- Duplicates:

 --- Duplicate Group # 1 Detail -----------

    >> Duplicate Group # 1  Node # 0 ----

 AAINodeType/VtxID for this Node = [tenant/34656336]

 Property Detail:

Prop: [last-mod-source-of-truth], val = [Robot]

Prop: [aai-node-type], val = [tenant]

Prop: [aai-created-ts], val = [1509635306703]

Prop: [aai-last-mod-ts], val = [1509635306705]

Prop: [source-of-truth], val = [Robot]

Prop: [aai-uri], val = [/cloud-infrastructure/cloud-regions/cloud-region/cloudowner-AAI-vm230w/cloudregion-id-AAI-vm230w/tenants/tenant/tenant-10124-vm230w]

Prop: [tenant-id], val = [tenant-10124-vm230w]

Prop: [tenant-name], val = [tenant-namevm230w]

Prop: [resource-version], val = [1509635306705]

Found an IN edge (has) to this vertex from a [cloud-region] node with VtxId = 5984488

Found an OUT edge (owns) from this vertex to a [vserver] node with VtxId = 23040080

    >> Duplicate Group # 1  Node # 1 ----

 AAINodeType/VtxID for this Node = [tenant/122888264]

 Property Detail:

Prop: [last-mod-source-of-truth], val = [Robot]

Prop: [aai-node-type], val = [tenant]

Prop: [aai-created-ts], val = [1511892771574]

Prop: [aai-last-mod-ts], val = [1511892771579]

Prop: [source-of-truth], val = [Robot]

Prop: [aai-uri], val = [/cloud-infrastructure/cloud-regions/cloud-region/testowner1/testregion1/tenants/tenant/testtenant1]

Prop: [tenant-id], val = [testtenant1]

Prop: [tenant-name], val = [testtenantname]

Prop: [resource-version], val = [1511892771579]

Found an IN edge (org.onap.relationships.inventory.BelongsTo) to this vertex from a [vserver] node with VtxId = 122892360

Found an OUT edge (org.onap.relationships.inventory.BelongsTo) from this vertex to a [cloud-region] node with VtxId = 5984488


 For this group of duplicates, could not tell which one to keep.

 >>> This group needs to be taken care of with a manual/forced-delete.
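As a hedged sketch of the forced delete mentioned above: the ForceDeleteTool can be pointed at the vertex id of the duplicate chosen for removal. The option names shown here (-action, -userId, -vertexId) are assumptions - verify them against the comments/usage in forceDeleteTool.sh before running anything.

# Hypothetical forced delete of one duplicate vertex (122888264 from the sample above) - verify the real options first
/opt/app/aai-graphadmin/execTool.sh forceDeleteTool.sh -action DELETE_NODE -userId my-userid -vertexId 122888264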

Data Snapshot

Name - dataSnapShot.sh

Purpose - This tool takes a snapshot of the graph data and writes it out as GraphSON files.

Logging config - graphAdmin logback.xml

Logs - /opt/app/aai-graphadmin/logs/dataSnapshot/error.log and /opt/app/aai-graphadmin/logs/dataSnapshot/debug.log

Output - on console

Usage:

    This will take a data snapshot and create GraphSON files using JanusGraph APIs. Please note that a different snapshot is taken at the storage backend using backend-specific tools (see the example at the end of this section).

    If you want to run the tool while the GraphAdmin application is up, use execTool.sh to run the tool as below.

/opt/app/aai-graphadmin/execTool.sh dataSnapshot.sh > ${PROJECT_HOME}/logs/misc/run_dataSnapshot.log.$(date +\%Y-\%m-\%d) 2>&1


The generated snapshot files can be listed with:

ls -ltr /opt/app/aai-graphadmin/logs/data/dataSnapshots/
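For the separate backend-level snapshot mentioned above, the storage backend's own tooling is used. For example, on a Cassandra backend a snapshot could be taken with nodetool; the keyspace name below is an assumption:

# Backend-level snapshot on a Cassandra storage backend - keyspace name is an assumption
nodetool snapshot -t pre-maintenance aaigraph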

