Testing Model Ingestion
Typically, the best way to test the model ingestion flow is to have a fully functioning ONAP deployment and to distribute a model from SDC. However, in dev scenarios where AAI is installed in isolation, these instructions allow you to test model ingestion using a debug endpoint in place of SDC.
Enable the debug API on the model-loader service
The model-loader service is the entry point for ingesting new models. Ensure that the following option is configured in the model-loader.properties configuration file:
ml.debug.INGEST_SIMULATOR=enabled
For an isolated AAI installation it is also convenient to add the following option to the configuration, to prevent AAI from trying to connect to SDC:
ml.distribution.ASDC_CONNECTION_DISABLE=true
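With both options set, the debug-related lines of model-loader.properties read:
ml.debug.INGEST_SIMULATOR=enabled
ml.distribution.ASDC_CONNECTION_DISABLE=true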
If you're deploying ONAP via OOM, the file is located at oom/kubernetes/aai/charts/aai-modelloader/resources/config/model-loader.properties.
After changing model-loader.properties, the aai chart has to be rebuilt and redeployed in order to pick up the new configuration.
Another option is to modify the dev-aai-aai-modelloader-prop ConfigMap directly and have the aai-modelloader pods recreated, as sketched below.
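A minimal sketch of that approach (the label selector is an assumption; verify the actual labels with kubectl get pods --show-labels):
$>kubectl edit configmap dev-aai-aai-modelloader-prop
$>kubectl delete pod -l app=aai-modelloader
Once the deployment recreates the pods, they mount the updated configuration.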
Debug API
Send the following request to the HTTP port on the model-loader service:
POST http://{{host}}:{{port}}/services/model-loader/v1/model-service/ingestModel/test-csar/1.0
The body of the request should contain the base64-encoded CSAR file containing the model. You can use the following Postman collection to test with an example CSAR.
Model Ingestion.postman_collection.json - a collection to use with Postman
models.csar - CSAR data extracted from the Postman collection, for convenience when using curl.
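If you want to ingest your own model instead, the request body can be produced by base64-encoding the CSAR, for example with GNU coreutils (my-model.csar is a hypothetical input file; -w 0 disables line wrapping so the output is a single line):
$>base64 -w 0 my-model.csar > models.csar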
General steps for testing the model-loader in an ONAP environment:
1. Find aai-modelloader pod:
$>kubectl get pods | grep aai-modelloader
dev-aai-aai-modelloader-79b4f68f56-8zqrj 2/2 Running 0 19h
The debug port is not exposed outside the model-loader pod/container, so when AAI is running in an ONAP environment the Debug API has to be called from inside the pod; for a standalone AAI installation the debug port can instead be exposed explicitly.
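One convenient way to call the Debug API from a workstation without entering the pod is kubectl port-forward, which tunnels to the pod directly even when no service exposes the port (a sketch using the pod name from step 1; 9500 is the debug port used in the curl call in step 4 below):
$>kubectl port-forward dev-aai-aai-modelloader-79b4f68f56-8zqrj 9500:9500
While the port-forward is running, the step 4 curl call can be issued against http://localhost:9500 on the workstation.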
2. Get container's shell:
$>kubectl exec -it dev-aai-aai-modelloader-79b4f68f56-8zqrj -c aai-modelloader -- bash
3. Create a csar file inside the aai-modelloader container with the content copied from models.csar (an alternative using kubectl cp is shown below):
root@dev-aai-aai-modelloader-79b4f68f56-8zqrj:/# vi ~/models.csar
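Alternatively, instead of pasting the content into vi, the file can be copied from the workstation into the container with kubectl cp (a sketch assuming models.csar is in the current directory; the pod name is from step 1):
$>kubectl cp models.csar dev-aai-aai-modelloader-79b4f68f56-8zqrj:/root/models.csar -c aai-modelloader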
4. Call the AAI API using curl, providing the correct path to the models.csar file:
root@dev-aai-aai-modelloader-79b4f68f56-8zqrj:/# curl -k -X POST -H "X-FromAppId:TEST-FromAppId" -H "X-TransactionId:TEST-TransactionId" -H 'Content-Type: text/plain' -d @models.csar http://localhost:9500/services/model-loader/v1/model-service/ingestModel/service-S1-csar/1.0
5. Check the API call result:
root@dev-aai-aai-modelloader-79b4f68f56-8zqrj:/# {"statusType":"OK","entity":null,"entityType":null,"status":200,"metadata":{}}
6. Check aai-modelloader logs:
2019-06-27 08:27:53.034 INFO 14 --- [qtp949057310-25] o.o.a.m.service.ModelLoaderService : MDLSVC0003I|MDLSVC0003I Distribution event: Deployment success was true|
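The same check can also be done from outside the container (a sketch using the pod name from step 1; MDLSVC0003I is the success code shown in the log line above):
$>kubectl logs dev-aai-aai-modelloader-79b4f68f56-8zqrj -c aai-modelloader | grep MDLSVC0003I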
Some useful hints and scripts for an isolated AAI deployment can be found in the AAI test-config repo.