...
```
~/oom/kubernetes# kubectl edit cm dev-so-bpmn-infra-app-configmap

## replace "workflow:\n CreateGenericVNFV1:\n"
## with "workflow:\n custom:\n BBS_E2E_Service:\n sdnc:\n need: true\n CreateGenericVNFV1:\n"

## Restart the pod
~/oom/kubernetes# kubectl delete po dev-so-so-bpmn-infra-7556d7f6bc-8fthk
```
Info: Beware: the spaces in the code segment above must be exactly as shown, otherwise the SO BPMN infra pod will crash upon bring-up.
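Since the warning above hinges on exact whitespace, it helps to see the replacement string in decoded form. The indentation below is an assumption (the escaped string above renders spaces ambiguously), so verify it against the live configmap before saving:

```bash
# Inspect the current workflow section of the configmap (names from the step above)
kubectl get cm dev-so-bpmn-infra-app-configmap -o yaml | grep -A 7 'workflow:'

# Assumed decoded form of the replacement value (indentation is illustrative, not authoritative):
# workflow:
#   custom:
#     BBS_E2E_Service:
#       sdnc:
#         need: true
#   CreateGenericVNFV1:
```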
Mapping between resource model and BPMN template: see SO: How it works between API and BPMN
...
```
## Fetch the mariadb root password
root@onap-rancher-daily:/home/ubuntu# kubectl get secrets/dev-mariadb-galera-db-root-password --template={{.data.password}} | base64 -d
root@onap-rancher-daily:/home/ubuntu# kubectl exec -ti dev-mariadb-galera-0 sh
sh-4.2$ mysql -u root -p

MariaDB [(none)]> use catalogdb;
MariaDB [catalogdb]> INSERT INTO vnf_recipe (NF_ROLE, ACTION, SERVICE_TYPE, VERSION_STR, DESCRIPTION, ORCHESTRATION_URI, VNF_PARAM_XSD, RECIPE_TIMEOUT)
VALUES
("InternetProfile", "createInstance", "NF", "1.0", "create InternetProfile", "/mso/async/services/CreateSDNCNetworkResource", '{"operationType":"InternetProfile"}', 180000),
("AccessConnectivity", "createInstance", "NF", "1.0", "create AccessConnectivity", "/mso/async/services/CreateSDNCNetworkResource", '{"operationType":"AccessConnectivity"}', 180000),
("CPE", "createInstance", "NF", "1.0", "create CPE", "/mso/async/services/HandlePNF", NULL, 180000);

MariaDB [catalogdb]> select * from vnf_recipe where NF_ROLE IN ('AccessConnectivity','InternetProfile','CPE');
+-------+--------------------+----------------+--------------+-------------+---------------------------+-----------------------------------------------+----------------------------------------+----------------+---------------------+--------------+
| id    | NF_ROLE            | ACTION         | SERVICE_TYPE | VERSION_STR | DESCRIPTION               | ORCHESTRATION_URI                             | VNF_PARAM_XSD                          | RECIPE_TIMEOUT | CREATION_TIMESTAMP  | VF_MODULE_ID |
+-------+--------------------+----------------+--------------+-------------+---------------------------+-----------------------------------------------+----------------------------------------+----------------+---------------------+--------------+
| 10048 | InternetProfile    | createInstance | NF           | 1.0         | create InternetProfile    | /mso/async/services/CreateSDNCNetworkResource | {"operationType":"InternetProfile"}    |        1800000 | 2020-01-20 17:43:07 | NULL         |
| 10051 | AccessConnectivity | createInstance | NF           | 1.0         | create AccessConnectivity | /mso/async/services/CreateSDNCNetworkResource | {"operationType":"AccessConnectivity"} |        1800000 | 2020-01-20 17:43:07 | NULL         |
| 10054 | CPE                | createInstance | NF           | 1.0         | create CPE                | /mso/async/services/HandlePNF                 | NULL                                   |        1800000 | 2020-01-20 17:43:07 | NULL         |
+-------+--------------------+----------------+--------------+-------------+---------------------------+-----------------------------------------------+----------------------------------------+----------------+---------------------+--------------+
3 rows in set (0.00 sec)
```
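For a non-interactive sanity check of the inserted recipes, something along these lines should work; a sketch reusing the secret and pod names from the session above:

```bash
# Fetch the root password and run the verification query in one shot
DB_PASS=$(kubectl get secrets/dev-mariadb-galera-db-root-password --template={{.data.password}} | base64 -d)
kubectl exec -ti dev-mariadb-galera-0 -- mysql -uroot -p"$DB_PASS" catalogdb \
  -e "SELECT NF_ROLE, ORCHESTRATION_URI, RECIPE_TIMEOUT FROM vnf_recipe WHERE NF_ROLE IN ('AccessConnectivity','InternetProfile','CPE');"
```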
...
```
root@onap-nfs:/home/ubuntu# kubectl exec -ti dev-dcae-bootstrap-7599b45c77-czxsx -n onap bash
bash-4.2$ cfy install -b ves-mapper -d ves-mapper /blueprints/k8s-ves-mapper.yaml
Uploading blueprint /blueprints/k8s-ves-mapper.yaml...
 k8s-ves-mapper.yaml |#################################################| 100.0%
Blueprint uploaded. The blueprint's id is ves-mapper
Creating new deployment from blueprint ves-mapper...
Deployment created. The deployment's id is ves-mapper
Executing workflow install on deployment ves-mapper [timeout=900 seconds]
Deployment environment creation is pending...
2020-03-26 13:37:22.808  CFY <ves-mapper> Starting 'create_deployment_environment' workflow execution
2020-03-26 13:37:23.404  CFY <ves-mapper> Installing deployment plugins
2020-03-26 13:37:23.404  CFY <ves-mapper> Sending task 'cloudify_agent.operations.install_plugins'
2020-03-26 13:37:23.404  CFY <ves-mapper> Task started 'cloudify_agent.operations.install_plugins'
2020-03-26 13:37:24.051  LOG <ves-mapper> INFO: Installing plugin: k8s
2020-03-26 13:37:24.051  LOG <ves-mapper> INFO: Using existing installation of managed plugin: c567dae6-35df-426a-a677-45ac51175b73 [package_name: k8splugin, package_version: 1.7.2, supported_platform: linux_x86_64, distribution: centos, distribution_release: core]
2020-03-26 13:37:24.051  CFY <ves-mapper> Task succeeded 'cloudify_agent.operations.install_plugins'
2020-03-26 13:37:24.051  CFY <ves-mapper> Skipping starting deployment policy engine core - no policies defined
2020-03-26 13:37:24.051  CFY <ves-mapper> Creating deployment work directory
2020-03-26 13:37:24.724  CFY <ves-mapper> 'create_deployment_environment' workflow execution succeeded
2020-03-26 13:37:26.733  CFY <ves-mapper> Starting 'install' workflow execution
2020-03-26 13:37:27.363  CFY <ves-mapper> [universalvesadapter_3gckp9] Creating node instance
2020-03-26 13:37:27.363  CFY <ves-mapper> [universalvesadapter_3gckp9.create] Sending task 'k8splugin.create_for_components'
2020-03-26 13:37:29.831  LOG <ves-mapper> [universalvesadapter_3gckp9.create] INFO: Added config for s76305937176b49619b24cb13ae1f5100-dcaegen2-svc-mapper
2020-03-26 13:37:30.335  LOG <ves-mapper> [universalvesadapter_3gckp9.create] INFO: Done setting up: s76305937176b49619b24cb13ae1f5100-dcaegen2-svc-mapper
2020-03-26 13:37:30.986  CFY <ves-mapper> [universalvesadapter_3gckp9.create] Task succeeded 'k8splugin.create_for_components'
2020-03-26 13:37:30.986  CFY <ves-mapper> [universalvesadapter_3gckp9] Node instance created
2020-03-26 13:37:30.986  CFY <ves-mapper> [universalvesadapter_3gckp9] Configuring node instance: nothing to do
2020-03-26 13:37:30.986  CFY <ves-mapper> [universalvesadapter_3gckp9] Starting node instance
2020-03-26 13:37:31.654  CFY <ves-mapper> [universalvesadapter_3gckp9.start] Sending task 'k8splugin.create_and_start_container_for_components'
2020-03-26 13:37:33.026  LOG <ves-mapper> [universalvesadapter_3gckp9.start] INFO: Passing k8sconfig: {'tls': {u'component_cert_dir': u'/opt/dcae/cacert', u'cert_path': u'/opt/app/osaaf', u'image': u'nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.tls-init-container:2.1.0', u'ca_cert_configmap': u'dev-dcae-bootstrap-dcae-cacert', u'component_ca_cert_path': u'/opt/dcae/cacert/cacert.pem'}, 'filebeat': {u'config_map': u'dev-dcae-filebeat-configmap', u'config_path': u'/usr/share/filebeat/filebeat.yml', u'log_path': u'/var/log/onap', u'image': u'docker.elastic.co/beats/filebeat:5.5.0', u'data_path': u'/usr/share/filebeat/data', u'config_subpath': u'filebeat.yml'}, 'consul_dns_name': u'consul-server.onap', 'image_pull_secrets': [u'onap-docker-registry-key'], 'namespace': u'onap', 'consul_host': 'consul-server:8500', 'cbs': {'base_url': 'https://config-binding-service:10443/service_component_all'}, 'default_k8s_location': u'central'}
2020-03-26 13:37:32.522  LOG <ves-mapper> [universalvesadapter_3gckp9.start] INFO: Starting k8s deployment for s76305937176b49619b24cb13ae1f5100-dcaegen2-svc-mapper, image: nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.0.1, env: {'DCAE_CA_CERTPATH': u'/opt/dcae/cacert/cacert.pem', 'CONSUL_HOST': u'consul-server.onap', u'SERVICE_TAGS': u'', 'CBS_CONFIG_URL': 'https://config-binding-service:10443/service_component_all/s76305937176b49619b24cb13ae1f5100-dcaegen2-svc-mapper', 'CONFIG_BINDING_SERVICE': u'config_binding_service'}, kwargs: {'readiness': {}, 'labels': {'cfydeployment': u'ves-mapper', 'cfynodeinstance': u'universalvesadapter_3gckp9', 'cfynode': u'universalvesadapter'}, 'tls_info': {}, 'envs': {u'SERVICE_TAGS': u'', u'CONFIG_BINDING_SERVICE': u'config_binding_service'}, 'liveness': {}, 'resource_config': {}, 'volumes': [], 'log_info': {}, 'ports': [u'80:0'], 'k8s_location': u'central'}
2020-03-26 13:37:33.026  LOG <ves-mapper> [universalvesadapter_3gckp9.start] INFO: k8s deployment initiated successfully for s76305937176b49619b24cb13ae1f5100-dcaegen2-svc-mapper: {'services': ['s76305937176b49619b24cb13ae1f5100-dcaegen2-svc-mapper'], 'namespace': u'onap', 'location': u'central', 'deployment': 'dep-s76305937176b49619b24cb13ae1f5100-dcaegen2-svc-mapper'}
2020-03-26 13:37:33.026  LOG <ves-mapper> [universalvesadapter_3gckp9.start] INFO: Waiting up to 3600 secs for s76305937176b49619b24cb13ae1f5100-dcaegen2-svc-mapper to become ready
2020-03-26 13:37:35.743  LOG <ves-mapper> [universalvesadapter_3gckp9.start] INFO: k8s deployment is ready for: s76305937176b49619b24cb13ae1f5100-dcaegen2-svc-mapper
2020-03-26 13:37:36.390  LOG <ves-mapper> [universalvesadapter_3gckp9.start] INFO: Done starting: s76305937176b49619b24cb13ae1f5100-dcaegen2-svc-mapper
2020-03-26 13:37:36.390  CFY <ves-mapper> [universalvesadapter_3gckp9.start] Task succeeded 'k8splugin.create_and_start_container_for_components'
2020-03-26 13:37:36.390  CFY <ves-mapper> [universalvesadapter_3gckp9] Node instance started
2020-03-26 13:37:36.390  CFY <ves-mapper> 'install' workflow execution succeeded
Finished executing workflow install on deployment ves-mapper
* Run 'cfy events list -e 69ffeb61-3f09-4311-8483-6f4ab7a20806' to retrieve the execution's events/logs
```
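To confirm the mapper actually came up, a couple of quick checks; a sketch, with the pod-name grep pattern being an assumption based on the component name in the log above:

```bash
# Inside the bootstrap pod: the install execution should be listed as 'terminated' (i.e. completed)
cfy executions list -d ves-mapper

# On the Kubernetes host: the mapper pod should be Running
kubectl get pods -n onap | grep svc-mapper
```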
VES Collector
In Frankfurt, the VES Collector listens on a secured HTTPS port by default. To keep our code modifications to a minimum, we can use the HTTP version of the VES Collector by deploying a separate Cloudify deployment (ves-http) from within the DCAE bootstrap pod, as shown below:
...
```
root@onap-nfs:/home/ubuntu# kubectl exec -ti dev-dcae-bootstrap-7599b45c77-czxsx -n onap bash
bash-4.2$ cfy install -b ves-http -d ves-http -i /inputs/k8s-ves-inputs.yaml /blueprints/k8s-ves.yaml
```
Configure the mapping from VES event domain to DMaaP topic: ves-statechange --> unauthenticated.CPE_AUTHENTICATION
1) Access Consul UI: http://<consul_server_ui>:30270/ui/#/dc1/services
2) Modify dcae-ves-collector-http configuration by adding a new VES domain to DMaaP topic mapping
```
"ves-statechange": {"type": "message_router", "dmaap_info": {"topic_url": "http://message-router:3904/events/unauthenticated.CPE_AUTHENTICATION"}}
```
3) Click on UPDATE to apply the new configuration
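To double-check that the new mapping was stored without going through the UI, you can query Consul's KV API directly; this is a sketch, and the exact key name (taken from the component name above) is an assumption:

```bash
# Read the stored component configuration from Consul's KV store (same NodePort as the UI)
curl -s "http://<consul_server_ui>:30270/v1/kv/dcae-ves-collector-http?raw" | grep CPE_AUTHENTICATION
```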
SDNC

Make sure that BBS DGs in SDNC DGBuilder are in Active state:

bbs-access-connectivity-vnf-topology-operation-create-huawei
bbs-access-connectivity-vnf-topology-operation-delete-huawei
bbs-internet-profile-vnf-topology-operation-change-huawei
bbs-internet-profile-vnf-topology-operation-common-huawei
bbs-internet-profile-vnf-topology-operation-create-huawei
bbs-internet-profile-vnf-topology-operation-delete-huawei
validate-bbs-vnf-input-parameters

DGBuilder URL: https://dguser:test123@sdnc.api.simpledemo.onap.org:30203
...
Access SDN M&C DG
Configure Access SDN M&C IP address in SDNC DG using dgbuilder. For instance:
> GENERIC-RESOURCE-API: bbs-access-connectivity-vnf-topology-operation-create-huawei.json
> GENERIC-RESOURCE-API: bbs-access-connectivity-vnf-topology-operation-delete-huawei.json
1) Export the relevant DG
2) Modify the IP address
3) Import back the DG and Activate it
DGBuilder URL: https://dguser:test123@sdnc.api.simpledemo.onap.org:30203
...
Edge SDN M&C DG
Configure Edge SDN M&C IP address in SDNC DG using dgbuilder. For instance:
> GENERIC-RESOURCE-API: bbs-internet-profile-vnf-topology-operation-common-huawei.json
1) Export the relevant DG
2) Modify the IP address
3) Import back the DG and Activate it
DGBuilder URL: https://dguser:test123@sdnc.api.simpledemo.onap.org:30203
Ref: Swisscom Edge SDN M&C and virtual BNG
Add SSL certificate of the 3rd party controller into the SDNC trust store
```
kubectl exec -ti dev-sdnc-sdnc-0 -n onap -- bash
openssl s_client -connect <IP_ADDRESS_EXT_CTRL>:<PORT>
# copy server certificate and paste in /tmp/<CA_CERT_NAME>.crt
sudo keytool -importcert -file /tmp/<CA_CERT_NAME>.crt -alias <CA_CERT_NAME>_key -keystore truststore.onap.client.jks -storepass adminadmin
keytool -list -keystore truststore.onap.client.jks -storepass adminadmin | grep <CA_CERT_NAME>
```
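Instead of copy-pasting the server certificate by hand, it can be captured non-interactively; a sketch using the same placeholders as above:

```bash
# Grab the external controller's certificate in PEM form and write it where keytool expects it
openssl s_client -connect <IP_ADDRESS_EXT_CTRL>:<PORT> </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > /tmp/<CA_CERT_NAME>.crt
```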
Policy
Deploy BBS APEX Policy (master, apex-pdp image v2.3+)
Before starting, check that POLICY-PAP and POLICY-API are exposed correctly.
```
kubectl get svc | grep -i policy
policy-apex-pdp ClusterIP 10.43.154.141 <none> 6969/TCP 9d
policy-api ClusterIP 10.43.38.189 <none> 6969/TCP 9d
policy-distribution ClusterIP 10.43.59.17 <none> 6969/TCP 9d
policy-handler ClusterIP 10.43.26.210 <none> 80/TCP 9d
policy-mariadb ClusterIP None <none> 3306/TCP 9d
policy-pap ClusterIP 10.43.32.178 <none> 6969/TCP 9d
policy-xacml-pdp ClusterIP 10.43.45.35 <none> 6969/TCP 9d
```
If a service is already exposed, you will see NodePort instead of ClusterIP.

Expose policy-api and policy-pap:
```
kubectl edit svc policy-api
# change spec.type to NodePort
spec:
clusterIP: {IP}
ports:
- name: policy-pap
port: 6969
protocol: TCP
targetPort: 6969
selector:
app: pap
release: frankfurt
sessionAffinity: None
type: NodePort
kubectl edit svc policy-pap
```
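As an alternative to editing the services interactively, the same change can be applied with kubectl patch; a sketch, assuming the onap namespace:

```bash
# Switch both services to NodePort non-interactively
kubectl -n onap patch svc policy-api -p '{"spec":{"type":"NodePort"}}'
kubectl -n onap patch svc policy-pap -p '{"spec":{"type":"NodePort"}}'
```

Afterwards the services should be listed as NodePort: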
```
kubectl get svc | grep policy
policy-apex-pdp ClusterIP 10.43.29.86 <none> 6969/TCP 9d
policy-api NodePort 10.43.197.94 <none> 6969:30687/TCP 9d
policy-distribution ClusterIP 10.43.129.175 <none> 6969/TCP 9d
policy-handler ClusterIP 10.43.149.5 <none> 80/TCP 9d
policy-mariadb ClusterIP None <none> 3306/TCP 9d
policy-pap NodePort 10.43.230.71 <none> 6969:31620/TCP 9d
policy-xacml-pdp ClusterIP 10.43.104.92 <none> 6969/TCP 9d
```
Postman Collection: BBS APEX Policy API Frankfurt.postman_collection.json.zip
1) Make sure APEX PDP is running and in Active state
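One way to verify is PAP's PDP listing endpoint; a sketch, where the NodePort (31620 in the listing above) and the default ONAP healthcheck credentials are assumptions for your deployment:

```bash
# List registered PDPs and their state; the APEX PDP group should show state ACTIVE
curl -sk -u 'healthcheck:zb!XztG34' "https://<k8s_node_ip>:31620/policy/pap/v1/pdps"
```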
...
Simplified E2E Service Model (Frankfurt): see the attached file.
BSS HSIA Service Order: Request Input
...