Page for general findings around the investigation of the optimal path for CLAMP and Tosca Control Loop Integration.
CLAMP Code Walkthrough videos can be found here
Tosca handling in CLAMP vs Tosca Control Loop - Findings
CLAMP
- CLAMP uses models defined within the repository for the definition of various object types, such as Loop and LoopElement.
- Javax persistence is used to define the entities that map to tables in the DB.
- Repository pattern is used for database access with JPARepository.
- GSON is used in some cases for serialization/deserialization.
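As an illustration of the repository pattern mentioned above, below is a minimal sketch of a Spring Data repository in the CLAMP style (the real LoopsRepository may declare additional query methods).
Code Block language java title Spring Data repository sketch
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

// Spring Data generates the implementation at runtime; standard CRUD methods
// (findById, findAll, save, deleteById, ...) are inherited from JpaRepository.
// "Loop" here is the CLAMP JPA entity for a loop, keyed by its name.
@Repository
public interface LoopsRepository extends JpaRepository<Loop, String> {
}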
CSAR from SDC Flow
- The CSAR contains the DCAE blueprint used to deploy to DCAE, but it also contains a reference to a microservice policy type. CLAMP, on startup, pulls all the policy types and continues to pull them periodically after that (it also pulls the PDP group/subgroup accepted deployment per model). CLAMP creates loop templates in its database from the blueprints. CLAMP uses the reference from the CSAR to associate the relevant microservice policy type with a loop template; thus, CLAMP knows which monitoring policy type to use for the creation of a Loop instance.
- When processing the SDC notification + CSAR, CLAMP does a GET query to DCAE to fetch the blueprint ID from the inventory. If this fails, CLAMP rolls back the transactions and aborts the SDC notification installation.
- During the installation, CLAMP also stores service information in its DB, such as the UUID of the associated VNF; this is used later by the UI to help the user easily configure the different policies. It supports multiple blueprints per VNF (defined in SDC). The VNFs are retrieved by CLAMP from the SDC notification.
- Although there is a comment in "src/main/java/org/onap/policy/clamp/loop/deploy/DcaeDeployParameters.java" saying multiple microservices are not currently possible, roughly 90% of the CLAMP code "should" support multiple microservices per blueprint. The code was initially written by Nokia to extract multiple microservices from the blueprint, BUT DCAE did not and still does not support that. Normally, the multi-microservice support "should" come with the new DCAE architecture.
- The Blueprint Artifact in the CSAR is used to populate the "loop_templates" table. These loop templates are used to create instances. All interaction with the database is done using the Spring Boot "@Repository" pattern. Standard boilerplate DB interaction is done in most cases with "JpaRepository" (find, save, etc.).
- However, it should be noted that databases and tables are not automatically created by Spring. SQL scripts are used in CLAMP to do this. A script, "start-db.sh", runs the MariaDB docker container. Part of the functionality of the MariaDB image from DockerHub is that any scripts with the extensions .sh, .sql, .sql.gz and .sql.xz found in /docker-entrypoint-initdb.d are executed on startup. So, CLAMP has several scripts for the creation of databases, users, tables and test data. The "create-tables.sql" script is generated at build time by the "hibernate52-ddl-maven-plugin" in the pom.xml.
- CLAMP also gets the policy type (based on the policy type id in the Blueprint) from the database. The types are then added to the "loop_element_models" table. The relationship between "loop_element_models" and "policy_models" is recorded in the "loopelementmodels_to_policymodels" table.
- The relationship between the loop element models and the loop templates is also recorded in the "looptemplates_to_loopelementmodels" table. In this way, the blueprint template is associated with a microservice policy.
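The join tables above can be pictured with a rough JPA sketch; the class, field and column names below are illustrative only, and the real CLAMP entities (LoopTemplate, LoopElementModel) carry more detail and ordering information.
Code Block language java title Join table sketch (illustrative)
import java.util.HashSet;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.JoinTable;
import javax.persistence.ManyToMany;
import javax.persistence.Table;

@Entity
@Table(name = "loop_element_models")
class LoopElementModelSketch {
    @Id
    private String name;
}

@Entity
@Table(name = "loop_templates")
public class LoopTemplateSketch {

    @Id
    private String name;

    // Recorded in "looptemplates_to_loopelementmodels": which loop element
    // models (and therefore which microservice policy models) belong to
    // this template.
    @ManyToMany
    @JoinTable(name = "looptemplates_to_loopelementmodels",
            joinColumns = @JoinColumn(name = "loop_template_name"),
            inverseJoinColumns = @JoinColumn(name = "loop_element_model_name"))
    private Set<LoopElementModelSketch> loopElementModels = new HashSet<>();
}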
Loop Creation Flow
When a loop is created in the UI or directly in the backend:
- The loop is saved in the "loops" table.
- The microservice policy is created (with a generated name) in the "micro_service_policies" table, along with the policy type associated with it and the JSON representation of the policy type.
- The relationship between the microservice policies and the loops is recorded in the "loops_to_microservicepolicies" table. This contains the loop names and the microservice policy names.
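A hedged sketch of this creation flow is shown below; the method and constructor signatures are indicative only (the real logic lives in LoopService and the Loop/MicroServicePolicy entities).
Code Block language java title Loop creation sketch (indicative only)
// Creates a Loop row plus one microservice policy (with a generated name) per
// loop element model of the template, and records the association that ends up
// in the "loops_to_microservicepolicies" join table.
public Loop createLoopFromTemplate(String loopName, LoopTemplate template) {
    Loop loop = new Loop(loopName);
    template.getLoopElementModels().forEach(elementModel -> {
        String generatedName = "MICROSERVICE_" + loopName + "_" + elementModel.getName();
        MicroServicePolicy policy =
                new MicroServicePolicy(generatedName, elementModel.getPolicyModel());
        loop.addMicroServicePolicy(policy);
    });
    return loopsRepository.save(loop); // persisted into the "loops" table
}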
Update Loop Flow
It is possible to update a loop that has been created. This allows configuration of the policies attached to the microservice.
- Updating a microservice policy or an operational policy can be done in the UI or by accessing the CLAMP backend directly.
- For the microservice policy, you can:
- Click on the SVG: a convenience JSON editor pops up and allows properties to be added to and removed from the JSON policy configuration template.
- Send the JSON configuration of the policy directly to the CLAMP backend endpoint at "/updateMicroservicePolicy".
- In both cases, this populates the "configurations_json" field for the microservice in the "micro_service_policies" table.
- Serialization and Deserialization of the JSON going into the database is handled by MicroServicePolicies.java, which extends Policy.java.
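One common way to achieve this kind of JSON-to-column mapping is a JPA AttributeConverter; the sketch below is an assumption for illustration only, since CLAMP's actual (de)serialization lives in Policy.java and its subclasses and may use a custom Hibernate type instead.
Code Block language java title JSON configuration persistence sketch (assumption)
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Converts the policy configuration between its JsonObject form on the entity
// and the plain string stored in the "configurations_json" column.
@Converter
public class JsonObjectConverter implements AttributeConverter<JsonObject, String> {

    @Override
    public String convertToDatabaseColumn(JsonObject attribute) {
        return attribute == null ? null : attribute.toString();
    }

    @Override
    public JsonObject convertToEntityAttribute(String dbData) {
        return dbData == null ? null : JsonParser.parseString(dbData).getAsJsonObject();
    }
}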
Policy Deployment Flow
Deployment of the policy from the loop can be triggered by:
- Using the SUBMIT option in the frontend.
- Using the backend submit endpoint.
No body has to be provided to the endpoint. The payload for the policy endpoint is generated internally in CLAMP from data already present in the database. Among other things, a call to the Camel submit endpoint retrieves the relevant microservice policies from the "micro_service_policies" table in the database. This is passed to another flow responsible for creating the policy in the policy engine. The policy payload is generated, with all of the relevant TOSCA parameters, in "src/main/java/org/onap/policy/clamp/policy/PolicyPayload.java". Another flow calls the endpoint on the policy engine and sends the payload in the correct format.
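A heavily simplified sketch of that payload assembly (using GSON) is below; the structure mirrors the TOSCA example later on this page, but the helper name and parameters are illustrative and the real field handling is in PolicyPayload.java.
Code Block language java title Policy payload sketch (simplified)
import com.google.gson.JsonArray;
import com.google.gson.JsonObject;

// Wraps a stored microservice policy configuration into a TOSCA-style payload
// for the policy engine: topology_template -> policies -> {name -> body}.
public String createPolicyPayload(String policyName, String policyType,
        String typeVersion, JsonObject configurationsJson) {
    JsonObject policyBody = new JsonObject();
    policyBody.addProperty("type", policyType);
    policyBody.addProperty("type_version", typeVersion);
    policyBody.addProperty("version", "1.0.0");
    policyBody.add("properties", configurationsJson);

    JsonObject namedPolicy = new JsonObject();
    namedPolicy.add(policyName, policyBody);

    JsonArray policies = new JsonArray();
    policies.add(namedPolicy);

    JsonObject topologyTemplate = new JsonObject();
    topologyTemplate.add("policies", policies);

    JsonObject payload = new JsonObject();
    payload.addProperty("tosca_definitions_version", "tosca_simple_yaml_1_1_0");
    payload.add("topology_template", topologyTemplate);
    return payload.toString();
}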
Drawio diagram (embedded on the original page)
Tosca Control Loop
Firstly, Tosca Control Loop (TCL) is a different idiom. To deploy policies and services, we commission and instantiate control loops in TCL. The state of these control loops is managed by the runtime, which distributes messages to the participants. The participants are then responsible for actually carrying out the actions on the relevant components - based on the instructions received from the runtime.
Throughout the Tosca Control Loop code and policy models:
- GSON is used for serialization/deserialization.
- JPA persistence is used for database interaction.
- Lombok is used to create boilerplate POJO code.
- DAO pattern is used for database access.
- Models unique to Tosca Control Loop are within the repository, others are in policy models.
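As a small example of these conventions, a Lombok-style model POJO in the TCL manner might look like the sketch below (illustrative only; the real model classes carry more fields and annotations).
Code Block language java title Lombok model sketch (illustrative)
import java.util.UUID;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.onap.policy.models.tosca.authorative.concepts.ToscaConceptIdentifier;

// @Data generates the getters, setters, equals/hashCode and toString, so the
// class body only needs to declare the fields.
@Data
@NoArgsConstructor
public class ControlLoopElementSketch {
    private UUID id;
    private ToscaConceptIdentifier definition;
    private String description;
}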
Commissioning Tosca Handling
- In the Runtime, once the commissioning API receives a new Tosca template at the "/commission" endpoint, a call is made to the DatabasePolicyModelsProviderImpl in the "models" repo to deserialize the TOSCA and save it to the "controlloop" database (a hedged code sketch of this call is shown after the TOSCA example below).
- The database credentials and persistence units specified in the runtime configuration file are used for DB access.
- The TOSCA is written to the relevant table "ToscaServiceTemplate" and also to related tables. Note that this table (along with many others) is created in the DB at startup, as specified in the persistence.xml.
- In addition to being able to add a new service template via the "/commission" endpoint, it is also possible to add a "fragment" of a template to an existing one through the same endpoint. An example of a tosca file to be sent to commissioning is below.
- The Tosca defines types that will be used during the instantiation of the control loops.
Code Block
{ "tosca_definitions_version": "tosca_simple_yaml_1_1_0", "topology_template": { "policies": [ { "MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test": { "type": "onap.policies.monitoring.dcae-pm-subscription-handler", "type_version": "1.0.0", "properties": { "pmsh_policy": { "measurementGroups": [ { "measurementGroup": { "onap.datatypes.monitoring.measurementGroup": { "measurementTypes": [ { "measurementType": { "onap.datatypes.monitoring.measurementType": { "measurementType": "countera" } } }, { "measurementType": { "onap.datatypes.monitoring.measurementType": { "measurementType": "counterb" } } } ], "managedObjectDNsBasic": [ { "managedObjectDNsBasic": { "onap.datatypes.monitoring.managedObjectDNsBasic": { "DN": "dna" } } }, { "managedObjectDNsBasic": { "onap.datatypes.monitoring.managedObjectDNsBasic": { "DN": "dnb" } } } ] } } }, { "measurementGroup": { "onap.datatypes.monitoring.measurementGroup": { "measurementTypes": [ { "measurementType": { "onap.datatypes.monitoring.measurementType": { "measurementType": "counterc" } } }, { "measurementType": { "onap.datatypes.monitoring.measurementType": { "measurementType": "counterd" } } } ], "managedObjectDNsBasic": [ { "managedObjectDNsBasic": { "onap.datatypes.monitoring.managedObjectDNsBasic": { "DN": "dnc" } } }, { "managedObjectDNsBasic": { "onap.datatypes.monitoring.managedObjectDNsBasic": { "DN": "dnd" } } } ] } } } ], "fileBasedGP": 15, "fileLocation": "/pm/pm.xml", "subscriptionName": "subscriptiona", "administrativeState": "UNLOCKED", "nfFilter": { "onap.datatypes.monitoring.nfFilter": { "modelVersionIDs": [ "e80a6ae3-cafd-4d24-850d-e14c084a5ca9" ], "modelInvariantIDs": [ "5845y423-g654-6fju-po78-8n53154532k6", "7129e420-d396-4efb-af02-6b83499b12f8" ], "modelNames": [], "nfNames": [ "\"^pnf1.*\"" ] } } } }, "name": "MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test", "version": "1.0.0", "metadata": { "policy-id": "MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test", "policy-version": "1.0.0" } } } ] }, "name": "ToscaServiceTemplateSimple", "version": "1.0.0", "metadata": {} } |
Control Loop Instantiation
- The "/instantiation" endpoint is responsible for creating the control loops. It doesn't start the loops but does create them, based on the types that are defined in the Tosca Service Template.
- Once the control loops from the control loop list are validated, they are written to the "ControlLoop" table in the "controlloop" database. The control loop elements are also written to the "controlloopelement" table.
- Control Loops can be made "PASSIVE" (or ordered to move to some other state) by making a PUT call to the "instantiation/command" endpoint.
- Supervision sends the command to DMAAP, for the participants to consume.
- The participants send a message back through DMAAP with their IDs and other details. Supervision listens for these messages and registers the participants in the database.
- Regular status messages are sent by the participants to report to the runtime the state of the policy or microservice. These statuses are updated in the control loop database.
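A hedged sketch of building such a state-change command in Java is below; the class, package and field names follow the PoC's instantiation messages but are assumptions here, and the loop name/version are purely illustrative.
Code Block language java title Instantiation command sketch (names assumed)
import java.util.List;
import org.onap.policy.clamp.controlloop.models.controlloop.concepts.ControlLoopOrderedState;
import org.onap.policy.clamp.controlloop.models.messages.rest.instantiation.InstantiationCommand;
import org.onap.policy.models.tosca.authorative.concepts.ToscaConceptIdentifier;

public class InstantiationCommandSketch {

    // Builds the body for a PUT to ".../instantiation/command" ordering a
    // commissioned control loop into the PASSIVE state.
    public InstantiationCommand buildPassiveCommand() {
        InstantiationCommand command = new InstantiationCommand();
        command.setOrderedState(ControlLoopOrderedState.PASSIVE);
        command.setControlLoopIdentifierList(
                List.of(new ToscaConceptIdentifier("ExampleControlLoop", "1.0.0")));
        return command;
    }
}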
Participant Tosca Handling
- Each Participant uses a Participant-Intermediary to listen for creation and update messages from the Runtime. These can be updates to the state of the participant or the state of control loops and control loop elements.
- If an update to a control loop is received, the participant parses the required elements from the Tosca Template using the models present in the policy/models repo and writes the relevant parts to the correct database.
- In the case of the Policy-Participant, the "controlLoopElementUpdate" of "ControlLoopElementHandler.java" is used to write policies and policy types to the Policy API database.
- Similarly, in the DCAE-Participant, the "controlLoopElementUpdate" of "ControlLoopElementHandler.java" carries out (in the current iteration from the POC demo) "deploy" and "undeploy" actions in CLAMP for the microservice instance. CLAMP, in turn, writes the new data to the CLAMP database.
Drawio diagram (embedded on the original page)
- CLAMP is built around the fact that it receives its blueprints and policy type information from SDC, in the form of a CSAR. The received components are then parsed and written to the database. This procedure could be aligned with Tosca Control Loop in the following ways:
- Alter the CSAR installer to construct the Tosca in a similar way to how the Tosca Control Loop does - so that it can be saved to the ControlLoop database.
- Having this installer as a separate, but accessible library might be better than just leaving it in CLAMP. In this way, other modules can access it.
- For example, the Runtime could access it and add the resultant Tosca to the Control Loop database. Instructions could then be distributed to the participants.
- If the DCAE participant is still using CLAMP, then the Runtime could pass the Blueprint from the CSAR to it - CLAMP could then deploy to DCAE.
- Alternatively, the DCAE participant could deploy directly to DCAE - but must have some method to adjust the configuration of the blueprint being deployed. At the moment, the CLAMP GUI provides that functionality but perhaps, in the future, Bruno's adjusted GUI could do the same.
- At the moment, the CLAMP UI is capable of making changes to loops' configuration with a JSON schema-based editor. Camel endpoints are then called to persist the changes in the DB. The service, if already deployed, must then be redeployed (also provided by the GUI). Similar behaviour should be available in Tosca Control Loop. A commissioned Control Loop should be capable of being altered and updated using REST endpoints. If we want to use the UI to make the update in Tosca Control Loop, we would need to consider the following:
- The UI would have to have some convenient mechanism for altering the template. As templates can be very large, this could become unwieldy very quickly and not a pleasant experience for the user.
- A form-based JSON editor would be nice: https://jsonforms.io/
- Again, we should consider whether we will use the CLAMP BE for this. If we do, should we consider having a CLAMP participant? The Runtime should be aware of all of the changes made to control loops. If we do not, the UI can make changes by calling the Runtime endpoints.
- This goes for policy editing as well as microservice configuration editing.
- From the database point of view, CLAMP uses Spring's "@Repository" pattern, whereas Tosca Control Loop uses the "DAO" pattern - as does the policy framework. It does not make sense to use both of these.
- It makes more sense to use one pattern, and it is possible to use the DAO pattern in Spring Boot (https://www.baeldung.com/java-dao-pattern) - a short sketch of this is shown after this list.
- As it stands, control loops are represented in a different way in CLAMP and Tosca Control Loop. They use different databases and the database schemas are in no way similar.
- The architecture of Tosca Control Loop is such that all changes, updates and alterations to control loops are made through the Runtime endpoints. CLAMP does not "fit" in this context - its backend acts as an overall manager of the control loops and tosca templates. CLAMP and the Runtime would effectively be two managers - where only one is needed.
- Possible solutions:
- The CLAMP backend could be altered to access the Runtime endpoints - this would require extensive restructuring of response handling.
- Tosca Control Loop could be altered to use the CLAMP database/model structure. This would require rewriting and rearchitecting of Runtime, Participants and all other components.
- DCAE participant's current use of CLAMP
- At the moment and in the POC, the DCAE Participant accesses DCAE through CLAMP - taking template and control loop data from the Runtime, altering it and then passing it on to CLAMP endpoints. The Policy Participant accesses the Policy Framework directly; the DCAE participant, arguably, should do the same.
- CLAMP has the facility to create and alter both microservices and policies. Should the Policy participant then interact with the Policy Framework through CLAMP?
- Possible solutions:
- Have DCAE Participant interact with DCAE directly.
- Continue to use CLAMP for this purpose - see also number 4 above for difficulties associated with this.
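Regarding the DAO-in-Spring point above, here is a minimal sketch, assuming the DAO is an ordinary Spring bean wrapping an injected EntityManager; the entity class is a stand-in for illustration, not an existing CLAMP/TCL class.
Code Block language java title DAO pattern in Spring Boot (sketch)
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

// Stand-in entity used only for this illustration.
@Entity
class LoopRecord {
    @Id
    private String name;
}

// The DAO is a plain Spring bean: Spring injects the EntityManager and manages
// the transactions, but the persistence code stays in the DAO style rather
// than relying on Spring Data repository interfaces.
@Component
public class LoopDao {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional(readOnly = true)
    public LoopRecord find(String name) {
        return entityManager.find(LoopRecord.class, name);
    }

    @Transactional
    public void save(LoopRecord record) {
        entityManager.merge(record);
    }
}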
REST in CLAMP vs REST in Control Loop PoC
Findings
- The LoopService and MicroservicePolicyService classes resemble our PoC Provider classes, such as the Commissioning and Instantiation Providers, where the Java CRUD code resides.
Examples:
Code Block language java title LoopService.getLoop() public Loop getLoop(String loopName) { return loopsRepository.findById(loopName).orElse(null); }
Code Block language java title ControlLoopInstantiationProvider.GetControlLoops() public ControlLoops getControlLoops(String name, String version) throws PfModelException { ControlLoops controlLoops = new ControlLoops(); controlLoops.setControlLoopList(controlLoopProvider.getControlLoops(name, version)); return controlLoops; }
- The differences start here, as CLAMP uses the Spring Framework repository interfaces to handle database calls, whereas the PoC uses the code defined in policy-models with regard to Tosca, and our own models and JPA classes for ControlLoop-related objects.
i.e. LoopsRepository extends JpaRepository (Spring Framework). Our providers are either created by us or use policy ones, i.e. PolicyModelsProvider. - There seem to be a lot of methods specific to updating the loop templates, such as updateDcaeDeploymentFields() and addOperationalPolicy(). In our case that is handled within the TOSCA service template itself, by amending the template so that these objects can be added as node templates (ControlLoopElements), without having to be specific to a DCAE control loop. Further info in TOSCA handling.
- Actual REST code is handled rather differently. In CLAMP, the LoopController class serves to introduce the Spring Framework with the "@Controller" annotation, and the methods defined in this class simply call the LoopService methods. This allows these methods to be called more easily by the Camel flows using:
e.g. "<to uri="bean:org.onap.policy.clamp.loop.LoopController?method=getLoop(${header.loopName})"/". In CAMEL the response, logging and error handling occurs, with the code just throwing the exception.
In our case, our REST code along with annotations, definitions and responses is in our controllers, which call the provider methods to do the actual interaction, e.g. CommissioningController, which defines the path along with other javax.ws.rs inputs. Examples:
Code Block language java title LoopController.getLoop() public Loop getLoop(String loopName) { return loopService.getLoop(loopName); }
Code Block language xml title CAMEL for getLoop collapse true <get uri="/v2/loop/{loopName}" outType="org.onap.policy.clamp.loop.Loop" produces="application/json"> <route> <removeHeaders pattern="*" excludePattern="loopName"/> <doTry> <to uri="bean:org.onap.policy.clamp.flow.log.FlowLogOperation?method=startLog(*, 'GET Loop')"/> <to uri="bean:org.onap.policy.clamp.authorization.AuthorizationController?method=authorize(*,'cl','','read')"/> <to uri="bean:org.onap.policy.clamp.loop.LoopController?method=getLoop(${header.loopName})"/> <to uri="bean:org.onap.policy.clamp.flow.log.FlowLogOperation?method=endLog()"/> <doCatch> <exception>java.lang.Exception</exception> <handled> <constant>true</constant> </handled> <to uri="bean:org.onap.policy.clamp.flow.log.FlowLogOperation?method=errorLog()"/> <log loggingLevel="ERROR" message="GET Loop request failed for loop: ${header.loopName}, ${exception.stacktrace}"/> <setHeader name="CamelHttpResponseCode"> <constant>500</constant> </setHeader> <setBody> <simple>GET Loop FAILED</simple> </setBody> </doCatch> </doTry> </route> </get>
Code Block language java title Java Code for PoC way of handling REST collapse true @GET @Path("/instantiation") @ApiOperation(value = "Query details of the requested control loops", notes = "Queries details of the requested control loops, returning all control loop details", response = ControlLoops.class, tags = { "Clamp control loop Instantiation API" }, authorizations = @Authorization(value = AUTHORIZATION_TYPE), responseHeaders = { @ResponseHeader( name = VERSION_MINOR_NAME, description = VERSION_MINOR_DESCRIPTION, response = String.class), @ResponseHeader(name = VERSION_PATCH_NAME, description = VERSION_PATCH_DESCRIPTION, response = String.class), @ResponseHeader(name = VERSION_LATEST_NAME, description = VERSION_LATEST_DESCRIPTION, response = String.class), @ResponseHeader(name = REQUEST_ID_NAME, description = REQUEST_ID_HDR_DESCRIPTION, response = UUID.class)}, extensions = { @Extension( name = EXTENSION_NAME, properties = { @ExtensionProperty(name = API_VERSION_NAME, value = API_VERSION), @ExtensionProperty(name = LAST_MOD_NAME, value = LAST_MOD_RELEASE) } ) } ) @ApiResponses( value = { @ApiResponse(code = AUTHENTICATION_ERROR_CODE, message = AUTHENTICATION_ERROR_MESSAGE), @ApiResponse(code = AUTHORIZATION_ERROR_CODE, message = AUTHORIZATION_ERROR_MESSAGE), @ApiResponse(code = SERVER_ERROR_CODE, message = SERVER_ERROR_MESSAGE) } ) // @formatter:on public Response query( @HeaderParam(REQUEST_ID_NAME) @ApiParam(REQUEST_ID_PARAM_DESCRIPTION) UUID requestId, @ApiParam(value = "Control Loop definition name", required = true) @QueryParam("name") String name, @ApiParam(value = "Control Loop definition version", required = true) @QueryParam("version") String version) { try { ControlLoops response = provider.getControlLoops(name, version); return addLoggingHeaders(addVersionControlHeaders(Response.status(Status.OK)), requestId).entity(response) .build(); } catch (PfModelRuntimeException | PfModelException e) { LOGGER.warn("commisssioning of control loop failed", e); return createInstantiationErrorResponse(e, requestId); } }
- My knowledge of Spring is limited. Overall it seems to be more of a preference. Switching the PoC way of handling REST to Spring/Camel as in CLAMP should not be a massive hurdle, unless Spring can only handle database queries defined by Spring interfaces and we cannot create a "@Controller" for our provider (service) code, which uses our own JPA code for interacting with the DB.
- I also want to mention that Spring/Camel handles the DB transactions for us. Each time an endpoint is hit, a transaction is automatically opened; you can therefore (with specific annotations) abort, open a new transaction within the first one, and control the transaction if needed. When no exception is raised, the commit is done when the call ends. I see Spring Boot as an enterprise Java container that comes with JPA/JTA/Beans/REST etc. functionality out of the box (like JBoss).
Potential Difficulties/Issues to be discussed
- While it is possible to convert the Tosca PoC code into Spring, it uses policy-models code to interact with Tosca service templates. This causes several difficulties:
- To be properly used, that code base would need to be converted into Spring as well, which would mean we either:
- Have to duplicate work done in policy-models, to have Spring-style code in our code base and policy-style code in models to be used by other policy components.
- Adjust the code in policy-models to be Spring-style, which would then cause code changes in other policy components, possibly creating havoc:
- Involves a lot of policy components and would need to be a project overhaul (a major decision).
- Would it really make sense at the policy project level, as the code base is now rather small and works well (tried and tested)?
- If we only change our PoC code to Spring, it does not make much sense, as:
- We cannot use Spring transaction handling with part of the code base being done policy-style (one of the main advantages of using Spring).
- We need to create a separate DB connection, meaning we would have two (Doesn't seem like a great way forward, but it would work)
- The code which does not use any policy-models code and is standalone can work, for example the participant code. The participants run by themselves, with the code defined within the participant.
- I am not sure about the use of Camel for REST. While it does not seem to be majorly complicated, most of that code can be written in Java using Spring annotations. Looking at future general use, most of the "routes" such as dcae/policy will be handled by the participants themselves, so there will be no REST code in the main backend for that, meaning there is even less incentive to use Camel for REST in the long run.
- Possible solutions for the backend code:
- Pull in and create a Spring version of policy-models/tosca in our project, leaving policy-models/tosca to be used by policy.
- This would allow for a fully Spring application; we could implement the Spring way of models and DB interactions, fully utilising its functionality.
- This deviates from the other Policy Framework components and does not use any of policy-common, etc., making it standalone as such, similar to the previous CLAMP.
- Leave and use policy stuff as is, only convert our stuff to Spring.
- Would not affect policy components
- Rather awkward for our own code base; we would need to make specific connections to join the dots between the differences (an example can be found in Francesco's reviews).
- End up doing the code the policy framework way and take out Spring.
- Familiar code, although missing out on Spring features such as transaction handling
- Possible solutions for the REST code:
- Use Camel, as is done in CLAMP now.
- I believe Camel can serve a purpose, but especially with the future use of participants all the 'routes' will be gone, leaving less incentive for Camel to be used, especially for the definition of REST endpoints.
- If we decide to go with Spring for the code, then use Spring to define the REST endpoints; if we decide to go with policy framework style code, define the REST endpoints as is done there.
Database population in CLAMP and Control Loop PoC
- The PoC uses persistence.xml files in which the defined JPA classes can be included in order to generate the DDL using EclipseLink. The persistence unit is defined in the config file, which refers to the persistence.xml file and looks up the persistence unit name to create the tables for the JPA classes listed within it.
persistence.xml example
Code Block title persistence.xml example <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0"> <persistence-unit name="SamplePersistenceUnit" transaction-type="RESOURCE_LOCAL"> <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider> <class>org.onap.policy.models.tosca.simple.concepts.JpaToscaDataType</class> ... ... <class>org.onap.policy.clamp.controlloop.models.controlloop.persistence.concepts.JpaClElementStatistics</class> <properties> <property name="eclipselink.ddl-generation" value="create-or-extend-tables" /> <property name="eclipselink.ddl-generation.output-mode" value="database" /> <property name="eclipselink.logging.level" value="INFO" /> </properties> <shared-cache-mode>NONE</shared-cache-mode> </persistence-unit> </persistence>
- CLAMP uses a Hibernate Maven plugin which generates a DDL from classes within a certain package (org.onap.policy.clamp). It reads the packages, looks for classes annotated with "@Entity" and reads the defined table and column details from them. The plugin then outputs an SQL file (create-tables.sql) which is used in a bootstrap script to create the tables alongside the database.