Follow the steps below to set up the CPS environment.
Check out the project
Clone the project from https://gerrit.onap.org/r/admin/repos/cps
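For example, the project can be cloned with git (the anonymous HTTPS clone URL below is an assumption based on the standard ONAP Gerrit layout; use the clone command shown on the Gerrit page if it differs):
git clone "https://gerrit.onap.org/r/cps"
cd cps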
Building the project
To build the project, run the following from the root cps folder:
mvn clean install
After building the images locally, run the following from the docker-compose folder:
VERSION=latest DB_USERNAME=cps DB_PASSWORD=cps docker-compose up -d
This starts both the cps and postgres containers.
Note: See the README.md in the docker-compose folder for detailed steps.
Set up the schema in the DB
Liquibase automatically creates the schema on startup.
Set environment variables with the relevant connection details, which can be found in application.yml in the cps-application/resources folder.
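For example, assuming a local PostgreSQL instance with the same credentials used elsewhere on this page, the variables could be exported as follows (adjust the values to match your application.yml):
export DB_HOST=localhost
export DB_USERNAME=cps
export DB_PASSWORD=cps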
Running the project
Use this option if you have a local PostgreSQL instance running.
From the cps folder, run the following command:
java -DDB_HOST=localhost -DDB_USERNAME=cps -DDB_PASSWORD=cps -jar cps-application/target/cps-application-x.y.z-SNAPSHOT.jar
NB: On Linux, use the IP address of the database container instead of localhost.
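A hedged example of looking up that IP address (the container name 'postgres' is an assumption; check 'docker container ls' for the actual name on your system):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' postgres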
OR
From the cps/cps-application folder, run the following command:
mvn spring-boot:run
Running CPS via Helm charts on Minikube:
WSL Checks (when using WSL2 on MS Windows)
Check that your WSL 2 environment is running both your Linux distribution and Docker, using a Windows command prompt/shell.
*It might be necessary to confirm that Windows is configured for WSL 2 and that WSL is set to use your Linux distribution as the default.
$ wsl -l -v
  NAME                   STATE           VERSION
* Ubuntu-20.04           Running         2
  docker-desktop         Running         2
  docker-desktop-data    Running         2
When using WSL 2, ensure you open a WSL shell window, i.e. open a Command Prompt and run wsl ...
Install MiniKube
Install and start MiniKube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start
Install Kubectl and Helm and Helm Repo
To set up kubectl and helm for ONAP locally, follow the steps outlined in the deploy section of https://docs.onap.org/projects/onap-oom/en/latest/oom_user_guide.html#deploy
Please note the following amendments to the above instructions:
- Follow https://v1-18.docs.kubernetes.io/docs/tasks/tools/install-kubectl/ to install the latest version of kubectl instead of the very old 1.15.11 (a sketch of the install commands is shown after this list).
- There is no need to 'Paste kubectl config from Rancher'.
- Skip 'helm install ons/onap', as the document mentions it is no longer available.
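As a sketch of the kubectl installation (taken from the generic upstream Linux instructions; verify the exact commands against the kubectl documentation linked above):
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client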
Install helm push plugin (before building the Helm repository)
helm plugin install https://github.com/chartmuseum/helm-push.git
After following the steps above, ensure your local repo has the charts loaded onto it:
helm search repo local
NAME                        CHART VERSION    APP VERSION    DESCRIPTION
local/a1policymanagement    8.0.0            1.0.0          A Helm chart for A1 Policy Management Service
local/aaf                   8.0.0                           ONAP Application Authorization Framework
local/aai                   8.0.0                           ONAP Active and Available Inventory
local/appc                  8.0.0                           Application Controller
...
local/contrib               8.0.0                           ONAP optional tools
local/cps                   8.0.0                           Configuration Persistance Service (CPS)
Deploy CPS
To install CPS only, run the following commands from within the oom/kubernetes/cps folder:
cd <your git repo>/oom/kubernetes/cps
helm upgrade dev1 local/cps -i -f values.yaml --set global.masterPassword=mysecr
Once your chart is deployed, it can be tested by hitting the Spring actuator health endpoint from a pod:
kubectl run -it network-multitool-$USER --image=praqma/network-multitool --restart=Never --rm -- bash
curl -X GET "http://cps:8080/manage/health" -H "accept: application/json" -H "Content-Type: application/json"
Note: This was tested on Windows using WSL 2 with Ubuntu 20.04, but any similar environment should suffice.
Setting up SDNC, RAN-sim controller and Honeycomb simulator locally:
SDNC setup
To set up SDNC, first download these two files:
Extract certs.tar to the same folder as the downloaded docker-compose.yml file.
From the same folder, run the following command to set up SDNC:
docker-compose up -d
SDNC should be up once this command has run successfully.
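A quick way to verify (a hedged example; the exact container names may differ on your system):
docker container ls | grep -i sdnc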
RAN-sim controller setup
To set up the RAN-sim controller, follow the steps provided on the RAN-Sim setup page or use the steps below.
- Clone and check out the Ran-Sim Controller:
git clone "https://gerrit.onap.org/r/integration/simulators/ran-simulator"
Pull the pre-built docker image using the following command:
docker pull docker.io/shsubedi/ransimcontroller:v1
Use the following command to tag the image:
docker tag shsubedi/ransimcontroller:v1 onap/ransim:1.0.0-SNAPSHOT
- Navigate to the '<YOUR_DIRECTORY>/ran-simulator/ransim/docker' directory
Modify the docker-compose.yml file and update the SDNR_IP and SDNR_PORT values.
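Alternatively, a hedged scripted sketch of the manual steps below (it assumes SDNR_IP and SDNR_PORT appear as KEY=value entries in docker-compose.yml; verify the file layout before running):
# look up the SDNC container IP and patch the compose file in place
SDNC_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <SDNC_CONTAINER_ID>)
sed -i "s/SDNR_IP=.*/SDNR_IP=${SDNC_IP}/" docker-compose.yml
sed -i "s/SDNR_PORT=.*/SDNR_PORT=8282/" docker-compose.yml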
To get the SDNR_IP, run the following command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <SDNC_CONTAINER_ID>
SDNR_IP=<SDNC_IP>
SDNR_PORT=8282
- Run the 'docker-compose up -d' command from the '<YOUR_DIRECTORY>/ran-simulator/ransim/docker' directory
The ransim and mariadb containers should come up once this command has run successfully.
Honeycomb simulator setup
To set up the Honeycomb simulator, follow the steps below or the steps on the Core & RAN Simulators page.
Pull the custom honeycomb docker image using the following command:
docker pull docker.io/tragait/gnbsim:v1
- Clone/download https://github.com/onap-oof-pci-poc/ran-sim
Update the ransim and honeycomb IP addresses in '<YOUR_DIRECTORY>/ran-sim/hcsim-content/gnbsim/hc/config/gnbsim.json'
Make sure the following values are updated:
To get the ransimIp and hcIp, run the following command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <ransim_CONTAINER_ID>
- "ransimIp": <ransimIP>
- "ransimPort": 8081
- "hcIp": <ransimIP>
- "hcPort": 2831
- Update the image name in the '<YOUR_DIRECTORY>/ran-sim/hcsim-content/gnbsim/hc/docker-compose.yml' to:
- image: tragait/gnbsim:v1
Run the command below from the '<YOUR_DIRECTORY>/ran-sim/hcsim-content/gnbsim/hc' directory:
docker-compose up -d
When the docker-compose up -d command runs, these servers will be mounted in SDNC.
If these servers are not mounted in SDNC, you can use the curl command below to mount the HC sim.
To get the IP of the HC sim, run the following command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <hc_CONTAINER_ID>
Make sure to modify the curl command below, replacing HC_SIM_IP with the IP retrieved from the previous command.
Note: If using WSL 2, HC_SIM_IP in the curl command below can be replaced with the IP address obtained by running 'wsl hostname -I' in the Windows PowerShell.
curl -i -X PUT http://localhost:8282/restconf/config/network-topology:network-topology/topology/topology-netconf/node/hc -k -H 'Accept: application/xml' -H 'Content-Type: text/xml' --user "admin":"Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U" -d '<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
  <node-id>hc</node-id>
  <host xmlns="urn:opendaylight:netconf-node-topology">HC_SIM_IP</host>
  <port xmlns="urn:opendaylight:netconf-node-topology">2831</port>
  <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
  <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
  <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
  <!-- non-mandatory fields with default values, you can safely remove these if you do not wish to override any of these values -->
  <reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>
  <connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>
  <max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>
  <between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>
  <sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.5</sleep-factor>
  <!-- keepalive-delay set to 0 turns off keepalives -->
  <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">120</keepalive-delay>
</node>'
- Check using 'docker container ls' that the honeycomb simulator is up and running.
Once the above steps have been completed, check if the honeycomb simulator has been mounted in SDNC by going to the following link and clicking on the Mounted Resources section:
http://localhost:8282/apidoc/explorer/index.html
Note: If using WSL 2, localhost can be replaced with the IP address obtained by running 'wsl hostname -I' in the Windows PowerShell.
- Credentials: admin / Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
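As an alternative to the GUI check above, a hedged example of querying the mount point directly over RESTCONF (this uses the standard OpenDaylight topology path and the credentials listed above):
curl -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U http://localhost:8282/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/hc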
Setting up SDNC, CPS & NCMP, DMI-Plugin and netconf-pnp-simulator locally:
Download the following zip file and extract it.
- File to be downloaded: docker.zip
Navigate to the folder where the files were extracted and run the command below from the '<sim/>' directory:
docker-compose up -d
Then navigate to the folder where the files were extracted and run the command below:
docker network create test_network
Then run the following command.
docker-compose up -d
Check using 'docker container ls' that SDNC, CPS&NCMP, DMI-Plugin and netconf-pnp-simulator are up and running.
Running CSIT tests locally within CPS
If using Windows, first install WSL by running the following command in PowerShell as an administrator, then restart your machine (if using a Linux environment, these steps can be skipped):
wsl --install
To enable WSL 2, run the following command in PowerShell as an administrator and restart your machine:
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
Download and install the Linux kernel update package
Set WSL 2 as the default WSL version:
wsl --set-default-version 2
Install Ubuntu 20.04 for Windows from the Microsoft store
Once launched, it will install the Ubuntu machine; this may take a few minutes.
You will need to set your Unix username and password.
- Remember your password; you will need it for sudo commands!
The next step is to set up Docker Desktop with WSL 2.
Download and install Docker Desktop.
Once Docker Desktop is installed, go to Settings > General and check 'Use the WSL 2 based engine'. Click Apply and restart.
Once restarted, go to Settings > Resources > WSL Integration, check 'Enable integration with my default WSL distro', and enable your integration.
Click Apply & Restart
From your Linux/WSL terminal
Update package index
sudo apt update
Install Python3
sudo apt install python3
Install pip
sudo apt install python3-pip
Upgrade pip
sudo pip3 install --upgrade pip
Install Robot Framework
pip3 install robotframework
Install git
sudo apt install git
Install maven
sudo apt install maven
Clone the CPS and DMI Plugin repos to your home directory.
Note: You will need to set up your SSH key as outlined on the Setting Up Your Development Environment page.
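A hedged example of the clone commands (it assumes SSH access to ONAP Gerrit is configured and that the DMI plugin lives in the cps/ncmp-dmi-plugin repository; replace <LF_USER_ID> with your Linux Foundation ID):
cd ~
git clone "ssh://<LF_USER_ID>@gerrit.onap.org:29418/cps"
git clone "ssh://<LF_USER_ID>@gerrit.onap.org:29418/cps/ncmp-dmi-plugin"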
Copy the settings.xml file from the oparent repo to your .m2 folder located in your home directory.
Note: This folder is hidden, but it should exist once you have Maven installed!
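For example (a sketch assuming the oparent repo is cloned into your home directory; adjust paths as needed):
git clone "https://gerrit.onap.org/r/oparent" ~/oparent
mkdir -p ~/.m2
cp ~/oparent/settings.xml ~/.m2/settings.xml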
Verify the docker daemon is accepting connections.
docker ps
If you see this error
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
Issue the following command
sudo chmod 666 /var/run/docker.sock
Run mvn clean install in both directories to pull down the necessary libraries from the POMs.
Navigate to the csit directory.
Run the following script:
sudo bash run-project-csit.sh
Note: The first time this runs, it will download all the libraries defined in the CSIT scripts. This may take a while, so be patient.
Once the scripts have run, the output should look like the following.
As part of this process, docker containers are created for cps-and-ncmp, dbpostgresql, ncmp-dmi-plugin, mariadb and sdnc. Once the testing is finished, these containers are stopped and removed.
To prevent these containers from being stopped as part of this process, comment out the following line in the teardown.sh script located in cps/csit/plans/cps.
Potential issues
This issue may appear when running the scripts from a Windows WSL environment:
Error response from daemon: invalid IP address in add-host: ""
To resolve this, do the following:
Issue the following command
sudo apt install net-tools
ifconfig
From the eth0 configuration, take the inet address.
Manually add this address to the LOCAL_IP variable within the setup.sh script located in the cps/csit/plans/cps directory.
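A hedged helper for extracting that address (it assumes the interface is eth0 and a net-tools style ifconfig output; verify the value before pasting it into setup.sh):
LOCAL_IP=$(ifconfig eth0 | awk '/inet /{print $2}')
echo "LOCAL_IP=${LOCAL_IP}"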
Save this configuration.
Run scripts again.
If your tests still don't run after doing this, check for the following error:
/tmp/tmp.rgIeMxiRCGrobot_venv/bin/python: Error while finding module specification for 'robot.run' (ModuleNotFoundError: No module named 'robot')
In the run-csit.sh file located within the cps/csit directory, look for the following line:
python -m robot.run -N ${TESTPLAN} -v WORKSPACE:/tmp ${ROBOT_VARIABLES} ${TESTOPTIONS} ${SUITES}
Change this to
python3 -m robot.run -N ${TESTPLAN} -v WORKSPACE:/tmp ${ROBOT_VARIABLES} ${TESTOPTIONS} ${SUITES}
Run scripts again
If there are further issues downloading libraries due to the system date being out of sync with Windows, issue the following command and run the scripts again:
sudo hwclock --hctosys
FAQ
How to fix "Error: could not open `{argLine}'
when running unit tests from Intellij IDE ?
If not able to run unit tests from Intellj unit tests tool because of this error
Error: could not open `{argLine}' Process finished with exit code 1
Then review maven-surefire-plugin integration with Intellij:
- Go to Settings-> Build,Execution,Deployment -> Build Tools -> Maven -> Running Tests
- Uncheck argLine