Introduction/Background:
As part of the Core NF Simulators setup, the following link gives design-time insights into the Core NF Simulators. (Ref: Core-NF-Simulator).
Prerequisites:
VES Collector:
Core NF Simulator uses the VES-evelJavaLibrary to send FileReady notification events to the VES-Collector. The VES agent communicates with the collector over HTTP only, so HTTP must be enabled for the VES-Collector in your ONAP environment. As of Istanbul, the VES collector is enabled by default; if it is not enabled in your environment, you can enable it either by following the blueprint method, running the steps mentioned below (reference link: https://docs.onap.org/projects/onap-dcaegen2/en/frankfurt/sections/services/ves-http/installation.html)
...
, or by the Helm method, which is the go-forward method as of Istanbul.
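Before following either method, you can check whether a VES Collector is already deployed and which HTTP port it exposes. This is a minimal check that assumes the default onap namespace and typical DCAE naming; adjust it to your environment:

kubectl -n onap get pods | grep -i ves        # collector pod should be Running
kubectl -n onap get svc | grep -i ves         # note the HTTP port exposed by the collector service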
- Execute into the Bootstrap pod using kubectl; the VES blueprint k8s-ves.yaml is then available under the /blueprints directory. A corresponding input file is also pre-loaded into the bootstrap pod under /inputs/k8s-ves-inputs.yaml.
- Deploy the blueprint:
  $ cfy install -b ves-http -d ves-http -i /inputs/k8s-ves-inputs.yaml /blueprints/k8s-ves.yaml
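A short verification sketch for the blueprint method; the DCAE bootstrap pod name below is a placeholder, so look it up first:

kubectl -n onap get pods | grep bootstrap                 # find the DCAE bootstrap pod
kubectl -n onap exec -it <dcae-bootstrap-pod> -- /bin/bash
# inside the bootstrap pod, after running the cfy install command above:
cfy deployments list                                      # ves-http should be listed
exit
kubectl -n onap get pods | grep -i ves                    # collector pod should come up Running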
Fixing the upf simulator:
The upf simulator image in the Docker registry for lttscoresimulators does not work out of the box: the amf and smf functions were updated to version 1.5, but upf remained at version 1.3 and was never updated. However, the available image can be pulled to a local host, edited, tagged as version 1.5, and served locally from a Docker registry.
Edits to the upf-simulator container image:
- Use docker pull to grab a copy of the upf-simulator version 1.3
- Use docker save -o ~/Downloads/upf-simulator_v1.3.tgz lttscoresimulators/upf_simulator:v1.3 to export the image as an archive.
- Make these edits to the code:
- At the top level there is a repositories file. You will need to add the path to your repository: the localhost tag plus the long hash string that appears after you launch your Docker registry. For example, {"localhost:5000/lttscoresimulators/upf_simulator":{"v1.5":"ef0368aae6b2b32910ff97cb0095b4b98463c8f444a2abf1ba2c6c9c27a11692"}}
- There is an archive in the image file, roughly 75 KB in size, that should contain a Code and a Config folder. Several files in it need to be edited.
- In the application-config.py file, change line 38 from self.config_file = '/etc/config/supportedNssai.json' to self.config_file = '/etc/config/upf-conf/supportedNssai.json'
- The file is missing a path to the granularity period configuration. Add the following line after dcae_collector_port_file_path: self.granularity_period_file_path = '/etc/config/upf-conf/granularity_period.txt'
- In the sftp_config.sh file, comment out with a "#" everything below "service ssh restart"
- Once you have your docker image file updated and ready to be published in your local registry, use the docker load command to load it into your local image repository. Do not use the docker import command.
- Tag it per the Docker registry instructions, prefixed with localhost:5000 and versioned as v1.5, then push it using docker push. Delete your local copy of the image using docker rmi, then pull it again using docker pull to make sure it works and the image is present in your repository, and thus in the registry (see the command sketch after this list).
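The following is a minimal command sketch of the pull/edit/re-publish workflow described above. It assumes a local registry started from the standard registry:2 image on localhost:5000; the archive file names are only illustrative:

docker pull lttscoresimulators/upf_simulator:v1.3
docker save -o ~/Downloads/upf-simulator_v1.3.tgz lttscoresimulators/upf_simulator:v1.3

# start a local registry to serve the edited image
docker run -d -p 5000:5000 --name registry registry:2

# ... unpack the archive, apply the code and repositories edits above, repack ...

# load the edited archive back (do not use docker import)
docker load -i ~/Downloads/upf-simulator_v1.5.tgz

# the repositories edit should leave the image tagged as
# localhost:5000/lttscoresimulators/upf_simulator:v1.5; if not, tag it manually:
#   docker tag <image-id> localhost:5000/lttscoresimulators/upf_simulator:v1.5
docker push localhost:5000/lttscoresimulators/upf_simulator:v1.5

# verify by deleting the local copy and pulling it back from the registry
docker rmi localhost:5000/lttscoresimulators/upf_simulator:v1.5
docker pull localhost:5000/lttscoresimulators/upf_simulator:v1.5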
Edit the upf Helm Chart from the downloaded versions below:
In the templates folder, in the deployments.yaml file, change the mount path for the config-volume in the spec section from /etc/config/ to /etc/config/upf-conf. This aligns the Helm-based deployment with the application configuration paths in the code changes above and ensures the config files are placed in the correct location on your pod.
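A one-line sketch of that edit, assuming the chart keeps the file at templates/deployments.yaml and the mount is declared exactly as mountPath: /etc/config/:

sed -i 's|mountPath: /etc/config/|mountPath: /etc/config/upf-conf|' templates/deployments.yaml
grep -n 'mountPath' templates/deployments.yaml        # verify the change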
In the same folder, the configmap.yaml file should have the JSON mapping for the supported S-NSSAI at the bottom, as shown in the code block below. The same change also needs to be made in the amf and smf configmap.yaml charts.
{
  "s-nssai": "{{ .Values.config.supportedNssai.sNssai.snssai }}",
  "status": "{{ .Values.config.supportedNssai.sNssai.status }}"
}
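These template references expect matching keys under config in the chart's values.yaml (config.supportedNssai.sNssai.snssai and .status). A hypothetical fragment with placeholder values is shown below as shell comments, so you can merge it into the existing config block by hand and then verify it is present:

# values.yaml keys expected by the configmap template above (placeholder values shown):
#   config:
#     supportedNssai:
#       sNssai:
#         snssai: "<your-snssai>"
#         status: "<status>"
grep -A4 'supportedNssai' values.yaml                 # confirm the keys are present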
In the values.yaml file, the image section needs to reflect that you are now hosting the image locally. Example in the code block below:
image:
  repository: localhost:5000/lttscoresimulators/upf_simulator
  tag: v1.5
  pullPolicy: IfNotPresent
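After editing the chart, a quick local render can confirm the image reference and mount path before deploying. This assumes the chart directory is named upf:

helm lint ./upf
helm template upf ./upf | grep -E 'image:|mountPath:'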
Deployment Options:
Core NF Simulators (AMF/SMF/UPF) can be deployed in two ways:
...
Note: There is not much difference between the two ways when it comes to the simulator implementation. Basically, in the first approach the 5GC instantiation (5GC Instantiation and Modify Config flow through CDS) does not contain the slice profile information or some DCAE configuration information in the CBA package. In the second approach, you have an updated CBA package with the slice-profile information and the DCAE configuration data required by the simulators.
Deployment Steps for Option 1:
- Download this Helm package for the core simulator deployments instead of the Helm charts provided in the 5GC Instantiation and Modify Config flow through CDS instantiation. (The Helm charts are updated with the simulator Docker images: lttscoresimulators/amf_simulator:v1.5, lttscoresimulators/smf_simulator:v1.5, lttscoresimulators/upf_simulator:v1.5.) After downloading the Helm charts, update your DCAE-VES Collector IP, port and required granularity_period (the granularity period value should be in seconds) under values.yaml in all the Helm charts of AMF, SMF and UPF (see the sketch after this list for locating the collector address).
(View file: attached Helm package for the core simulator deployments)
- Use the same CBA package provided in the 5GC instantiation flow and continue the design and deployment steps for the core-simulator instantiations as CNFs. Then navigate to https://github.com/onap/ccsdk-cds/tree/master/components/model-catalog/blueprint-model/service-blueprint/5GC_Simulator_CNF_CDS and download the updated Kotlin scripts and VTL files, as the ones from the CBA package no longer work in the Istanbul release.
- The AMF/SMF/UPF simulators will start simulating PM data for all the active S-NSSAIs and send the FileReady notification to the VES Collector based on the granularity period.
- PM files of the respective simulators are saved under the SFTP user directory (i.e. /data/admin/pm_directory/).
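Two helper checks for the steps above; the onap namespace and the pod/service name patterns are assumptions based on a default ONAP deployment, so adjust them to match yours:

# find the DCAE VES Collector service IP and port to put into values.yaml
kubectl -n onap get svc | grep -i ves

# once the simulators are running, list the generated PM files on one of them
kubectl -n onap get pods | grep -i upf
kubectl -n onap exec -it <upf-simulator-pod> -- ls -l /data/admin/pm_directory/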
Deployment Steps for Option 2:
- TBD: Along with the above Helm charts, we will provide a modified CBA package which has the slice profile information as part of it; based on that slice profile information, the simulators will generate the PM data.
Configurations/Troubleshooting:
- You can update the DCAE Collector IP, port and granularity period at any time by logging into the respective CNF simulator pod and editing the files under the /etc/config/ directory.
- If the simulator pods are still going into CrashLoopBackOff, it means the required config files (i.e. supportedNssai.json, dcae_collector_ip.txt, dcae_collector_port.txt, granularity_period.txt) were not updated properly. You can test this by changing the image tag from v1.5 to test, so that you can enter the pod and check the config files. If your config files are correct, the simulator application runs without any CrashLoopBackOff issue.
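A quick way to inspect those config files inside a running (or test-tagged) simulator pod; the namespace and pod name are placeholders:

kubectl -n onap get pods | grep -i simulator
kubectl -n onap exec -it <simulator-pod> -- ls -l /etc/config/
kubectl -n onap exec -it <simulator-pod> -- cat /etc/config/granularity_period.txt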
Attached Sample Files for AMF/SMF/UPF PM Data:
...