In the manual configuration sections, the following set of policies needs to be used when creating the optimization policies. Other configurations can be found at Manual Configuration for 5G Network Slicing, and the operation guidance for Option 2 can be found at Operation Guidance for Option2.

Policy Creation Steps

Refer to Optimization Policy Creation Steps for the optimization policy creation and deployment steps.

Copy the policy files into the OOF pod:
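
The <oof-pod-name> placeholder used in the commands below refers to the OOF (OSDF) pod in the onap namespace; it can be found, for example, with the following command (this assumes the pod name contains "oof", as in the restart note at the end of this page):

kubectl get pods -n onap | grep oof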

unzip policies_option1_Istanbul.zip

kubectl cp policies -n onap <oof-pod-name>:/opt/osdf

kubectl exec -ti -n onap <oof-pod-name> bash

cd policies/

python3 policy_utils.py create_policy_types policy_types

python3 policy_utils.py create_and_push_policies nst_policies

python3 policy_utils.py generate_nsi_policies NSTO2

python3 policy_utils.py create_and_push_policies gen_nsi_policies
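
If the policies are being pushed from a script rather than an interactive shell, the same NST/NSI steps can be run non-interactively. This is only a sketch; it assumes the policies directory was copied to /opt/osdf/policies as shown above:

kubectl exec -n onap <oof-pod-name> -- bash -c "cd /opt/osdf/policies && python3 policy_utils.py create_policy_types policy_types && python3 policy_utils.py create_and_push_policies nst_policies && python3 policy_utils.py generate_nsi_policies NSTO2 && python3 policy_utils.py create_and_push_policies gen_nsi_policies"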

cd policies/

python3 policy_utils.py generate_nssi_policies EmbbAn_NF minimize latency

python3 policy_utils.py create_and_push_policies gen_nssi_policies
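
The example above generates NSSI policies for the EmbbAn_NF service with the "minimize latency" goal. If NSSI policies are needed for other subnet NF services, the same pattern can be repeated; the service name below is a placeholder, so substitute the one used in your request:

python3 policy_utils.py generate_nssi_policies <nssi-service-name> minimize latency

python3 policy_utils.py create_and_push_policies gen_nssi_policies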

Refer to Policy Models and Sample policies - NSI selection for sample policies.

Updated slice/service profile mapping - https://gerrit.onap.org/r/gitweb?p=optf/osdf.git;a=blob;f=config/slicing_config.yaml;h=179f54a6df150a62afdd72938c2f33d9ae1bd202;hb=HEAD

NOTE:

  • The service name given for creating the policy must match the service name in the request.
  • The scope fields in the policies should match the value of resourceSharingLevel (non-shared/shared). Modify the policy accordingly.
  • Check the case of the attributes in the OOF request against the attribute map (camel case to snake case and vice versa) in config/slicing_config.yaml; if any mismatch is found, modify the attribute map accordingly.
  • You need to restart the OOF docker container after updating slicing_config.yaml; you can do so using the following steps (a combined one-liner is sketched after this list):

    • Log in to the worker VM where the OOF container is running. You can find the worker node by running kubectl get pods -n onap -o wide | grep dev-oof
    • Find the container using docker ps | grep optf-osdf
    • Restart the container using docker restart <container id>
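
For convenience, the find-and-restart steps on the worker VM can be combined into a single command; this is a sketch that assumes the container name contains optf-osdf, as in the grep above:

docker restart $(docker ps -q --filter name=optf-osdf)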