1. UUI Configuration
Configure the CST template UUID and Invariant UUID in the slicing.properties file of the uui-server microservice
...
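If you first need to locate the file inside the running pod, a minimal sketch (the pod-name filter and the in-container search are assumptions; the actual path of slicing.properties depends on the uui-server release):
UUI_POD=$(kubectl get pods -n onap | grep uui-server | awk '{print $1}')
kubectl exec -n onap "$UUI_POD" -- find / -name slicing.properties 2>/dev/null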
Copy subnetCapability.json to the SO API Handler pod to configure subnet capabilities at runtime. You can copy the file to the pod using the following command:
...
kubectl cp subnetCapability.json -n onap <so-apih-pod-name>:/app
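To confirm the copy succeeded, you can read the file back from the pod (same pod-name placeholder as above):
kubectl exec -n onap <so-apih-pod-name> -- cat /app/subnetCapability.json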
...
Sample subnetCapability.json
{
"AN_NF": {
"latency": 5,
"maxNumberofUEs": 200,
"maxThroughput": 90,
"termDensity": 40
},
"AN": {
"latency": 20,
"maxNumberofUEs": 100,
"maxThroughput": 150,
"termDensity": 50
},
"CN": {
"latency": 10,
"maxThroughput": 50,
"maxNumberofConns": 100
},
"TN_FH": {
"latency": 10,
"maxThroughput": 90
},
"TN_MH": {
"latency": 5,
"maxThroughput": 90
},
"TN_BH": {
"latency": 10,
"maxThroughput": 100
}
}
SO Database Update
Insert the ORCHESTRATION_URI into service_recipe, with SERVICE_MODEL_UUID set to CST.ModelId.
INSERT INTO `catalogdb`.`service_
Insert the ORCHESTRATION_URI into service_recipe, with SERVICE_MODEL_UUID set to ServiceProfile.ModelId.
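A hedged sketch of the two recipe inserts, run from the MariaDB client of the SO catalog database. The column list, ACTION value, recipe URI, timeout and descriptions are illustrative assumptions (they depend on your SO release); replace <CST.ModelId> and <ServiceProfile.ModelId> with the actual model UUIDs and <orchestration-uri> with the recipe URI you use:
mysql -uroot -p<db-password> <<'SQL'
-- recipe keyed on the CST model UUID (placeholder values)
INSERT INTO `catalogdb`.`service_recipe`
  (`ACTION`, `VERSION_STR`, `DESCRIPTION`, `ORCHESTRATION_URI`, `RECIPE_TIMEOUT`, `SERVICE_MODEL_UUID`)
VALUES ('createInstance', '1', 'CST create recipe', '<orchestration-uri>', 180, '<CST.ModelId>');
-- recipe keyed on the ServiceProfile model UUID (placeholder values)
INSERT INTO `catalogdb`.`service_recipe`
  (`ACTION`, `VERSION_STR`, `DESCRIPTION`, `ORCHESTRATION_URI`, `RECIPE_TIMEOUT`, `SERVICE_MODEL_UUID`)
VALUES ('createInstance', '1', 'ServiceProfile create recipe', '<orchestration-uri>', 180, '<ServiceProfile.ModelId>');
SQL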
5. OOF Configuration - Policy Creation Steps
Refer to Optimization Policy Creation Steps for optimization policy creation and deployment steps.
Please find the policies for Option 2 below (policies_option2.zip). Generate the policies outside the OOF pod and push them from inside the OOF pod, since the pod already has python3 and the necessary libraries installed.
Copy the policy files and push the policies:
unzip policies_option2.zip
kubectl cp policies -n onap <oof-pod-name>:/opt/osdf
kubectl exec -ti -n onap <oof-pod-name> bash
cd policies/
python3 policy_utils.py create_policy_types policy_types
python3 policy_utils.py create_and_push_policies nst_policies
python3 policy_utils.py generate_nsi_policies NSTO2
...
NOTE:
For NST selection based on the latency constraint, please make sure you have added the latency constraint as a property in the design-time template of the NST, as shown below.
Refer to Policy Models and Sample policies - NSI selection for sample policies.
Updated slice/service profile mapping - slicing_config.yaml
HAS-API/HAS-DATA - Add data dictionary
Go to /opt/has/conductor/conductor/data/plugins/inventory_provider/candidates/slice_profiles_candidate.py in the OOF HAS pod and update the following:
"max_bandwidth": copy_first,
"jitter": sum,
"sst": copy_first,
"latency": sum,
"resource_sharing_level": copy_first,
"s_nssai": copy_first,
"s_nssai_list": copy_first,
"plmn_id_list": copy_first,
"plmn_id_List": copy_first,
"availability": copy_first,
"throughput": min,
"reliability": copy_first,
"max_number_of_ues": copy_first,
"exp_data_rate_ul": copy_first,
"exp_data_rate_dl": copy_first,
"ue_mobility_level": copy_first,
"activity_factor": copy_first,
"survival_time": copy_first,
"max_number_of_conns": copy_first,
"coverage_area_ta_list": copy_first,
"max_number_of_pdu_session": copy_first,
"max_throughput": copy_first,
"perf_req": copy_first,
"terminal_density": copy_first
Update these mappings and restart the container.
NOTE:
- The service name given for creating the policy must match the service name in the request.
- The scope fields in the policies should match the value of resourceSharingLevel (non-shared/shared). Modify the policy accordingly.
- Check the case of the attributes in the OOF request against the attribute map (camel to snake and snake to camel) in config/slicing_config.yaml; if any mismatch is found, modify the attribute map accordingly.
You need to restart the OOF docker container once you have updated slicing_config.yaml. You can do it using the following steps:
- Log in to the worker VM where the OOF container is running. You can find the worker node by running (kubectl get pods -n onap -o wide | grep dev-oof)
- Find the container using docker ps | grep optf-osdf
- Restart the container using docker restart <container id>
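The same steps as commands (run the first against the Kubernetes cluster to locate the worker node, and the last two on that worker VM; <container-id> comes from the docker ps output):
kubectl get pods -n onap -o wide | grep dev-oof
docker ps | grep optf-osdf
docker restart <container-id>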
CPS Configuration:
Refer to CPS Configuration to set up standalone CPS and to configure OOF and CPS.
6. External RAN NSSMF Simulator
Deployment Guide for External RAN NSSMF Simulator
1. Download:
git clone https://gerrit.onap.org/r/integration
cd integration/test/mocks/ran-nssmf-simulator
2. Environment Setup (Optional):
1) The default listening port of the RESTful API is 8443; you can set the environment variable RAN_NSSMF_REST_PORT to change it, for example:
export RAN_NSSMF_REST_PORT=18443
2) The default username and password are in RanNssmfSimulator/etc/auth.json, and you can edit the file to change them or add new ones.
3. Install and Run:
There are two options to run the simulator:
Option 1. Directly run it in the current directory:
pip3 install -r requirements.txt
python3 main.py
Option 2. Install it using setuptools, and run it in any directory:
python3 setup.py install --user
python3 -m RanNssmfSimulator.MainApp
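As a quick sanity check that the simulator is up, you can probe the listening port (this only verifies reachability; the actual resource paths are defined by the simulator and the credentials come from RanNssmfSimulator/etc/auth.json):
curl -vk https://localhost:8443/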
Register to ONAP ESR
Add an esr-thirdparty-sdnc and esr-system-info to ESR:
Run command:
curl -k -X PUT "https://AAI:AAI@<worker-vm-ip>:30233/aai/v23/external-system/esr-thirdparty-sdnc-list/esr-thirdparty-sdnc/sdnc-an-01" \
  -H 'Accept: application/json' \
  -H 'X-FromAppId: AAI' \
  -H 'X-TransactionId: 1' \
  -H 'Content-Type: application/json' \
  -d '{
    "thirdparty-sdnc-id": "sdnc-an-01",
    "product-name": "nssmf",
    "esr-system-info-list": {
      "esr-system-info": [{
        "esr-system-info-id": "nssmf-an-01",
        "system-name": "E2E",
        "vendor": "huawei",
        "type": "an",
        "user-name": "admin",
        "password": "123456",
        "system-type": "thirdparty-sdnc",
        "ip-address": "<ip-address-of-simulator>",
        "port": "8443",
        "ssl-cacert": "test.ca"
      }]
    }
  }'
where ip-address is the IP address or hostname of the host running the External RAN NSSMF Simulator, port is the listening port of the simulator's RESTful API, and user-name and password are those set in the simulator's config file RanNssmfSimulator/etc/auth.json.
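To verify the registration, you can read the object back from AAI with a GET on the same resource (same AAI credentials, node port and placeholders as the PUT above):
curl -k -X GET "https://AAI:AAI@<worker-vm-ip>:30233/aai/v23/external-system/esr-thirdparty-sdnc-list/esr-thirdparty-sdnc/sdnc-an-01" \
  -H 'Accept: application/json' \
  -H 'X-FromAppId: AAI' \
  -H 'X-TransactionId: 1'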
...
There are two ways to run the Core NSSMF simulator: start it from the jar package (see External Core NSSMF Simulator Use Guide) or start it via docker-compose.
Start by docker-compose
This is the package: cn-nssmf-simulator-docker-compose.tar.gz
1. Extract the downloaded cn-nssmf-simulator-docker-compose.tar.gz
tar xf cn-nssmf-simulator-docker-compose.tar.gz -C .
cd cn-nssmf-simulator-docker-compose
2. Modify application.properties
# vi application.properties
server.port=11111
notifyurl=http://192.168.235.25:30472/v1/pm/notification
ftppath=sftp://root:oom@192.168.235.25:22/home/ubuntu/dcae/PM.tar.gz
fixeddelay=900000
#Configure the output files generated in docker.
filepath=/app/dcae
amffilepath=/app/dcae/AMF.xml.gz
upffilepath=/app/dcae/UPF.xml.gz
3. Modify docker-compose.yml
# vi docker-compose.yml
version: '3'
services:
cn-simulator-docker-compose:
image: openjdk:8-jre-slim
container_name: cn-simulator-test-1
ports:
- "11111:11111"
restart: always
# mount the cn-nssmf-simulator-docker-compose directory of the host machine to the /app directory of the container
# If you need to modify the simulator's configuration file application.properties later,
# you can directly modify the host's cn-nssmf-simulator-docker-compose/application.properties to synchronize to the container
volumes:
- ./:/app
working_dir: /app
entrypoint: java -jar simulator-0.0.1-SNAPSHOT.jar
4. Start up the application by running "docker-compose up"
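To check that the simulator came up, you can inspect the container state and logs (container_name as defined in docker-compose.yml above):
docker-compose ps
docker logs cn-simulator-test-1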
Register to ONAP ESR
Add an esr-thirdparty-sdnc and esr-system-info to ESR:
Run command:
curl -k -X PUT "https://AAI:AAI@<worker-vm-ip>:30233/aai/v23/external-system/esr-thirdparty-sdnc-list/esr-thirdparty-sdnc/nssmf-cn" \
  -H 'Accept: application/json' \
  -H 'X-FromAppId: AAI' \
  -H 'X-TransactionId: 1' \
  -H 'Content-Type: application/json' \
  -d '{
    "thirdparty-sdnc-id": "nssmf-cn",
    "product-name": "nssmf",
    "esr-system-info-list": {
      "esr-system-info": [{
        "esr-system-info-id": "nssmf-cn-01",
        "system-name": "E2E",
        "vendor": "HUAWEI",
        "type": "cn",
        "service-url": "",
        "user-name": "",
        "password": "",
        "system-type": "thirdparty_SDNC",
        "ip-address": "<ip-address-of-simulator>",
        "port": "11111"
      }]
    }
  }'
where ip-address is the IP address or hostname of the host running the Core NSSMF simulator and port is the listening port of the simulator's RESTful API.
9. ACTN Simulator
Refer to the ACTN Simulator User Guide to set up the ACTN simulator, or follow the steps below to launch and initialize the domain controllers.
Refer to Transport Slicing Configuration and Operation Guidance for NextHop details of AN & CN.
Step 1. Fetch the simulator docker image
First, pull down the simulator image from the public Docker Hub.
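A minimal sketch, assuming the image name published in the ACTN Simulator User Guide (shown here as a placeholder):
docker pull <pnc-simulator-image>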
Step 2. Start the simulator container(s)
After building the pnc-simulator image locally or fetching the Docker image from the remote registry, execute the command below to start the container:
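A hedged sketch of the start-up commands: SERVER_PORT is the variable referenced below, while the image name, container names and host-port mappings are placeholders/assumptions to be adapted to your environment:
# domain1 controller on port 18181, domain2 controller on port 18182 (placeholders: image and container names)
docker run -d --name pnc-simulator-domain1 -e SERVER_PORT=18181 -p 18181:18181 <pnc-simulator-image>
docker run -d --name pnc-simulator-domain2 -e SERVER_PORT=18182 -p 18182:18182 <pnc-simulator-image>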
For demonstration purposes, two domain controllers are required; in our case we use 18181 as SERVER_PORT for domain1 and 18182 for domain2.
After starting the container, you should be able to verify the running container by:
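For example (the grep filter assumes the container or image name contains pnc-simulator):
docker ps | grep pnc-simulator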
Step 3. Initialize the simulator(s)
...
To initialize the simulator for each domain, use the initialization payloads provided as attached files. For demonstration purposes, two domain controllers are required, and both need to be properly launched and initialized.
ESR Registration using the AAI ESR URL:
...
curl -k -X PUT "https://AAI:AAI@<worker-vm-ip>:30233/aai/v23/external-system/esr-thirdparty-sdnc-list/esr-thirdparty-sdnc/ff9ef162-951d-4e14-9ce6-b4fa0adf896b" \
  -H 'Accept: application/json' \
  -H 'X-FromAppId: AAI' \
  -H 'X-TransactionId: 1' \
  -H 'Content-Type: application/json' \
  -d '{
    "thirdparty-sdnc-id": "ff9ef162-951d-4e14-9ce6-b4fa0adf896b",
    "location": "edge",
    "product-name": "TSDN",
    "esr-system-info-list": {
      "esr-system-info": [{
        "esr-system-info-id": "7c29b9df-feef-4fa7-b56d-3e39f5ef4a90",
        "system-name": "sdnc2",
        "vendor": "HUAWEI",
        "type": "WAN",
        "version": "v1.0",
        "service-url": "http://<simulator-ip>:<simulator-port>",
        "user-name": "onos",
        "password": "rocks",
        "system-type": "thirdparty_SDNC",
        "protocol": "restconf"
      }]
    }
  }'
...