Launching CCSDK
SDN-R is an extension of CCSDK (Common Controller Software Development Kit), and SDN-R uses the same procedure as CCSDK to create a running instance. To begin, clone the ccsdk/distribution repository and look at the docker-compose.yml file in ccsdk/distribution/src/main/yaml.
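The launch sequence can be sketched as follows (a minimal sketch: the clone URL is assumed to be ONAP's standard gerrit mirror, and docker-compose must be installed):

```shell
# Assumed repository URL; adjust if you use a different mirror.
git clone https://gerrit.onap.org/r/ccsdk/distribution ccsdk/distribution

# Launch the containers defined in the compose file described in the text.
cd ccsdk/distribution/src/main/yaml
docker-compose up -d
```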
...
And you can browse to the OpenDaylight apidoc/explorer. Note that port 8383 on the host is forwarded to port 8181 in the odlsli container, and the credentials are not the usual "admin:admin." The password is shown in the annotated startODL.sh file below (user name: admin; password: Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U).
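As a quick check that the port forwarding and credentials work (an illustration, assuming the containers are up on localhost):

```shell
# Port 8383 on the host forwards to RESTCONF's port 8181 in the odlsli
# container; print only the HTTP status code of the apidoc page.
curl -s -o /dev/null -w "%{http_code}\n" \
     -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U \
     http://localhost:8383/apidoc/explorer/index.html
```

A 200 response confirms the forwarded port and the non-default password.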
Working with the CCSDK containers
You can work with the CCSDK containers as you would any docker container, for example:
```
%: docker exec -t -i ccsdk_odlsli_container /bin/bash -c 'TERM=xterm exec /bin/bash'
root@744e3cc8a7fb:/# pwd
/
root@744e3cc8a7fb:/# echo $ODL_HOME
/opt/opendaylight/current
root@744e3cc8a7fb:/# echo $SDNC_CONFIG_DIR
/opt/onap/ccsdk/data/properties/
root@744e3cc8a7fb:/# ps -elf | grep opendaylight
4 S root     1     0  0 80  0 -    1126 wait   18:34 ?     00:00:00 /bin/sh /opt/opendaylight/current/bin/karaf
0 S root    96     1  8 80  0 - 2002545 futex_ 18:34 ?     00:10:07 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Djava.security.properties=/opt/opendaylight/current/etc/odl.java.security -server -Xms128M -Xmx2048m -XX:+UnlockDiagnosticVMOptions -XX:+UnsyncloadClass -XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote -Djava.security.egd=file:/dev/./urandom -Djava.endorsed.dirs=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/endorsed:/usr/lib/jvm/java-8-openjdk-amd64/lib/endorsed:/opt/opendaylight/current/lib/endorsed -Djava.ext.dirs=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext:/usr/lib/jvm/java-8-openjdk-amd64/lib/ext:/opt/opendaylight/current/lib/ext -Dkaraf.instances=/opt/opendaylight/current/instances -Dkaraf.home=/opt/opendaylight/current -Dkaraf.base=/opt/opendaylight/current -Dkaraf.data=/opt/opendaylight/current/data -Dkaraf.etc=/opt/opendaylight/current/etc -Dkaraf.restart.jvm.supported=true -Djava.io.tmpdir=/opt/opendaylight/current/data/tmp -Djava.util.logging.config.file=/opt/opendaylight/current/etc/java.util.logging.properties -Dkaraf.startLocalConsole=true -Dkaraf.startRemoteShell=true -classpath /opt/opendaylight/current/lib/boot/org.apache.karaf.diagnostic.boot-4.0.10.jar:/opt/opendaylight/current/lib/boot/org.apache.karaf.jaas.boot-4.0.10.jar:/opt/opendaylight/current/lib/boot/org.apache.karaf.main-4.0.10.jar:/opt/opendaylight/current/lib/boot/org.osgi.core-6.0.0.jar org.apache.karaf.main.Main
0 S root  1186  1164  0 80  0 -    2821 pipe_w 20:30 pts/0 00:00:00 grep --color=auto opendaylight
root@744e3cc8a7fb:/# cd $ODL_HOME
root@cded16733254:/opt/opendaylight# ls -l
total 76
-rw-r--r-- 1 root root  1126 Apr 19 02:59 CONTRIBUTING.markdown
-rw-r--r-- 1 root root 11266 Apr 19 02:59 LICENSE
-rw-r--r-- 1 root root   172 Apr 19 02:59 README.markdown
drwxr-xr-x 1 root root  4096 Jul 26 18:26 bin
-rw-r--r-- 1 root root    76 Apr 19 02:59 build.url
drwxr-xr-x 1 root root  4096 Jul 26 18:26 configuration
lrwxrwxrwx 1 root root    17 Jul 26 12:47 current -> /opt/opendaylight
drwxr-xr-x 3 root root  4096 Jul 26 18:26 daexim
drwxr-xr-x 1 root root  4096 Jul 26 18:26 data
drwxr-xr-x 2 root root  4096 Apr 19 02:59 deploy
drwxr-xr-x 1 root root  4096 Jul 26 18:26 etc
drwxr-xr-x 2 root root  4096 Jul 26 18:25 instances
drwxr-xr-x 2 root root  4096 Jul 26 18:26 journal
lrwxrwxrwx 1 root root    17 Jul 26 12:47 karaf-0.8.1 -> /opt/opendaylight
-rw-r--r-- 1 root root     3 Jul 26 18:25 karaf.pid
drwxr-xr-x 5 root root  4096 Apr 19 02:59 lib
-rw-r--r-- 1 root root     0 Jul 26 18:25 lock
drwxr-xr-x 2 root root  4096 Jul 26 18:26 snapshots
drwxr-xr-x 1 root root  4096 Jul 26 13:00 system
-rw-r--r-- 1 root root  1926 Apr 19 02:59 taglist.log
root@cded16733254:/opt/opendaylight# ls -l bin
total 3316
-rwxr-xr-x 1 root root 3231548 Apr 19 02:59 aaa-cli-jar.jar
-rwxr-xr-x 1 root root    3243 Apr 19 02:59 client
-rw-r--r-- 1 root root    4334 Apr 19 02:59 client.bat
-rwxr-xr-x 1 root root    8328 Apr 19 02:59 configure-cluster-ipdetect.sh
-rwxr-xr-x 1 root root    7388 Apr 19 02:59 configure_cluster.sh
drwxr-xr-x 2 root root    4096 Apr 19 02:59 contrib
-rwxr-xr-x 1 root root     722 Apr 19 02:59 custom_shard_config.txt
-rw-r--r-- 1 root root   16071 Jul 26 18:26 idmtool
-rwxr-xr-x 1 root root    9999 Apr 19 02:59 inc
-rwxr-xr-x 1 root root    4090 Apr 19 02:59 instance
-rw-r--r-- 1 root root    5364 Apr 19 02:59 instance.bat
-rwxr-xr-x 1 root root   11560 Apr 19 02:59 karaf
-rw-r--r-- 1 root root   16816 Apr 19 02:59 karaf.bat
-rwxr-xr-x 1 root root    2924 Apr 19 02:59 set_persistence.sh
-rwxr-xr-x 1 root root    2284 Apr 19 02:59 setenv
-rw-r--r-- 1 root root    2330 Apr 19 02:59 setenv.bat
-rwxr-xr-x 1 root root    3227 Apr 19 02:59 shell
-rw-r--r-- 1 root root    4702 Apr 19 02:59 shell.bat
-rwxr-xr-x 1 root root    2016 Apr 19 02:59 start
-rw-r--r-- 1 root root    2495 Apr 19 02:59 start.bat
-rwxr-xr-x 1 root root    1865 Apr 19 02:59 status
-rw-r--r-- 1 root root    2448 Apr 19 02:59 status.bat
-rwxr-xr-x 1 root root    1867 Apr 19 02:59 stop
-rw-r--r-- 1 root root    2444 Apr 19 02:59 stop.bat
root@744e3cc8a7fb:/opt/opendaylight/current# ./bin/client
Logging in as karaf
[OpenDaylight ASCII-art banner]
Hit '<tab>' for a list of available commands and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown OpenDaylight.
opendaylight-user@root>
```
The entrypoint script for the odlsli container
The entrypoint script for the odlsli container is /opt/onap/ccsdk/bin/startODL.sh. Here are the home and bin directories for the user ccsdk.
...
And here is the startODL.sh file:
Creating the ODLSLI Container
To see how the ODLSLI container is constructed, we look at the pom.xml file in the directory ccsdk/distribution/odlsli.
ODLSLI pom.xml initialization and dependencies
Master branch, Nov 7, 2018
ODLSLI pom.xml: Preliminaries and setting the version value of the ODLSLI docker image
ODLSLI pom.xml: Prepare the directories and files for a 'docker build' command
Rather than annotate this straightforward section of the pom.xml file, I summarize the actions in the text below. The pom.xml file uses three phases of the build lifecycle to prepare the directory structure and files in ./target/docker-stage prior to building the docker image. They are:
- validate
- plugin: org.codehaus.groovy.maven : gmaven-plugin (described above)
- set the version of the ODLSLI docker image to be used in NEXUS_DOCKER_REPO
- Note: the name of the image is set in the property 'image.name' in the pom.xml properties section.
- plugin: maven-resources-plugin
- goal: copy-resources
- id: copy-dockerfile
- copy Docker file
- from ./src/main/docker/
- to ./target/docker-stage/
- id: copy-scripts
- copy all of the scripts (*.sh files)
- from ./src/main/scripts/
- to ./target/docker-stage/opt/onap/ccsdk/bin/
- id: copy-odl-resources
- copy the files:
  - idmlight.db.mv.db
  - org.ops4j.pax.logging.cfg
  - install_ccsdk.yml
  - ansible-sources.list
- from ./src/main/resources/
- to ./target/docker-stage/
- id: copy-config
- copy the file org.ops4j.pax.logging.cfg
- from ./src/main/resources/
- to ./target/docker-stage/
- id: copy-data
- copy all of the MySQL databases (*.dump)
- from ./src/main/resources/
- to ./target/docker-stage/opt/onap/ccsdk/data/
- id: copy-properties
- copy all of the properties files (*.properties)
- from ./src/main/properties/
- to ./target/docker-stage/opt/onap/ccsdk/data/properties/
- id: copy-keystores
- copy all *.jks files
- from ./src/main/stores/
- to ./target/docker-stage/opt/onap/ccsdk/data/stores/
- id: copy-dockerfile
- goal: copy-resources
- plugin: org.codehaus.groovy.maven : gmaven-plugin (described above)
- generate-sources
- plugin: org.apache.maven.plugins : maven-dependency-plugin
- goal: unpack-dependencies
- id: "unpack features"
- unzip all of the dependencies
- from a local or remote Maven repository
- to ./target/docker-stage
- Note: all of the zipped features are rooted at the "system" folder, so they will be unzipped into the proper structure for the OpenDaylight feature repository.
- id: "unpack features"
- goal: unpack
- id: "unpack dgs"
- Unzip the zipped artifact org.onap.ccsdk.distribution : platform-logic-installer : ${project.version}
- from a local or remote Maven repository
- to ./target/docker-stage/opt/onap/ccsdk/
- Note: this unzips the artifact into the home directory of user ccsdk in the docker container
- id: "unpack dgs"
- goal: unpack-dependencies
- plugin: org.apache.maven.plugins : maven-dependency-plugin
- process-sources
- plugin: org.codehaus.mojo : exec-maven-plugin
- goal: exec
- id: "change shell permissions"
- This executes the following command in the local computer:
- find ./target/docker-stage/opt/onap/ccsdk -name "*.sh" -exec chmod +x {} \;
- I.e., make all of the bash scripts in the ccsdk home directory executable.
- id: "change shell permissions"
- goal: exec
- plugin: org.codehaus.mojo : exec-maven-plugin
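The effect of the "change shell permissions" step can be reproduced in miniature. This is a self-contained sketch: the temporary directory and demo.sh script are stand-ins for the real docker-stage tree, but the find command is the same one the exec-maven-plugin runs.

```shell
# Build a miniature docker-stage tree with one non-executable script.
STAGE=$(mktemp -d)/docker-stage
mkdir -p ${STAGE}/opt/onap/ccsdk/bin
printf '#!/bin/sh\necho hello\n' > ${STAGE}/opt/onap/ccsdk/bin/demo.sh

# Before: files copied by maven-resources-plugin are not executable.
ls -l ${STAGE}/opt/onap/ccsdk/bin/demo.sh

# The command the pom.xml executes (with our staging path substituted):
find ${STAGE}/opt/onap/ccsdk -name "*.sh" -exec chmod +x {} \;

# After: every *.sh under the ccsdk home directory is executable.
ls -l ${STAGE}/opt/onap/ccsdk/bin/demo.sh
```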
ODLSLI pom.xml: Docker Profile
The "docker" profile defines the additional "package" phase in which the docker image is built. There is also a "deploy" phase in which the generated image is pushed to NEXUS_DOCKER_REPO, but this requires special privileges. We discuss this section below.
The Dockerfile
We have seen how the pom.xml file creates the ~/ccsdk/distribution/odlsli/target/docker-stage directory in preparation for building the docker image. The next step is to inspect the Dockerfile that creates the image, discussed here.
Building the docker image
Using maven
One can create the image using the maven command "mvn --activate-profiles docker clean package" or equivalently "mvn -P docker clean package." This generates these images:
...
We see that a single image (all have the same IMAGE ID) is given four tags.
Using maven and 'docker build'
By running the command 'mvn clean process-sources' and getting a BUILD SUCCESS, a properly constructed directory will be created at ~/git/ccsdk/distribution/odlsli/target/docker-stage/.
...
```
%: docker pull ${NEXUS_DOCKER_REPO}/onap/ccsdk-odl-oxygen-image:0.3.0-SNAPSHOT
0.3.0-SNAPSHOT: Pulling from onap/ccsdk-odl-oxygen-image
95871a411089: Pull complete
f7253e37cce8: Pull complete
12d05d7bd5c4: Pull complete
db27ec99c6c2: Pull complete
8fd62e3405ff: Pull complete
ce430a842b90: Pull complete
de7dcf5d4be1: Pull complete
e3de3d1054ec: Pull complete
d66bd2234856: Pull complete
6be70fc7e3a6: Pull complete
Digest: sha256:80da6c8e0f70d0dddd2be462634b297fc0dc5256cb93619b30a66441d1a89cb8
Status: Downloaded newer image for nexus3.onap.org:10001/onap/ccsdk-odl-oxygen-image:0.3.0-SNAPSHOT
%: docker tag ${NEXUS_DOCKER_REPO}/onap/ccsdk-odl-oxygen-image:0.3.0-SNAPSHOT onap/ccsdk-odl-oxygen-image:0.3.0-SNAPSHOT
%: docker images
REPOSITORY                                          TAG                  IMAGE ID       CREATED        SIZE
onap/ccsdk-odl-oxygen-image                         0.3.0-SNAPSHOT       bb02ebe49933   8 hours ago    1.72GB
nexus3.onap.org:10001/onap/ccsdk-odl-oxygen-image   0.3.0-SNAPSHOT       bb02ebe49933   8 hours ago    1.72GB
nexus3.onap.org:10001/onap/ccsdk-dgbuilder-image    0.3-STAGING-latest   eb208aa7f163   4 days ago     1.04GB
nexus3.onap.org:10001/onap/ccsdk-odlsli-image       0.3-STAGING-latest   665a42becd61   4 days ago     1.8GB
mysql/mysql-server                                  5.6                  8d97ef4de156   3 months ago   226MB
```
Running the ODLSLI pom.xml file
One can now navigate to the ~/ccsdk/distribution/odlsli/target/docker-stage directory and build the docker image as shown here. We tag the image with the name onap/sdnr:0.3.0-SNAPSHOT.
...
Alternatively, one can edit the docker-compose.yml file to use the newly created onap/sdnr:0.3.0-SNAPSHOT image rather than the ccsdk-odlsli-image pulled from NEXUS_DOCKER_REPO. That will also create and launch the new SDNR container.
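Putting the manual path together, the steps above amount to the following (a sketch; the checkout path and image tag are those used in the text):

```shell
# Stage the files for the image without building it.
cd ~/git/ccsdk/distribution/odlsli
mvn clean process-sources

# Build the image by hand from the staged directory and tag it.
cd target/docker-stage
docker build -t onap/sdnr:0.3.0-SNAPSHOT .
```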
Creating the zip installation files for karaf features
We have seen how the karaf features for CCSDK are included in the dependencies section of the pom.xml file in ~/ccsdk/distribution/odlsli/ and that the features are referenced as files with the name structure <feature-name>-installer.<version>-repo.zip. The next step is to understand how these zip installation files are created. A good example is the "sliapi" feature, which is in the gerrit repository ccsdk/sli/core, shown here.
...
The sliapi directory contains the usual directories for a karaf feature with an additional directory "installer." This directory contains the code that creates the installation zip file that is referenced in the dependencies section of the ODLSLI pom.xml file.
Context for the installer
CCSDK is based on OpenDaylight and follows the recommended practices of that group. There are documented guidelines for karaf features, and another aspect is what OpenDaylight calls "component meta-features," in which several related features are grouped together to simplify their installation. For example, these features implement NETCONF in OpenDaylight:
...
So CCSDK has created a component meta-feature for each of the SLI repositories, and we will encounter commands in the Maven pom.xml files and directory and file structures to implement them.
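As a sketch of why meta-features simplify installation: a single karaf client command pulls in every member feature of the group. The feature name ccsdk-sli-core-all is from the text; the client path assumes the default OpenDaylight install location.

```shell
# One command installs the whole group of related SLI core features.
/opt/opendaylight/current/bin/client feature:install ccsdk-sli-core-all
```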
Another piece of context for the installer: CCSDK changed its procedure for installing karaf features between Beijing and Casablanca, but some elements of the prior procedure are still present. Prior to Casablanca, all of the CCSDK features were copied into the folder '/opt/onap/ccsdk/features/' under the CCSDK home directory (/opt/onap/ccsdk), and the entrypoint script in the Docker container installed them. The contents of the CCSDK home directory and features directory for Beijing are shown here.
...
And here is the 'install-feature.sh' script.
Below is the annotated pom.xml file, which executes three maven phases in sequence: validate, prepare-package, and package. However, the commands in the file do not appear in the order in which they are executed, so for the sake of clarity we discuss them in execution order.
Installer pom.xml file part 0
install-feature.sh script
As we described in a previous section, the installation folder (<features.boot>) for a particular feature contains the zipped maven repository and an installation script for that feature. The script is below; as you can see, it references the <features.repositories> and <features.boot> properties in commands sent to the karaf client.
```
ODL_HOME=${ODL_HOME:-/opt/opendaylight/current}
ODL_KARAF_CLIENT=${ODL_KARAF_CLIENT:-${ODL_HOME}/bin/client}
INSTALLERDIR=$(dirname $0)

REPOZIP=${INSTALLERDIR}/${features.boot}-${project.version}.zip

if [ -f ${REPOZIP} ]
then
	unzip -d ${ODL_HOME} ${REPOZIP}
else
	echo "ERROR : repo zip ($REPOZIP) not found"
	exit 1
fi

${ODL_KARAF_CLIENT} feature:repo-add ${features.repositories}
${ODL_KARAF_CLIENT} feature:install ${features.boot}
```
Installer pom.xml file part 1
Continuing with the pom.xml file in the installer module, we now discuss the command first executed in the "validate" phase.
To see the result of this command, we show the changes in the installer directory after it is executed. We begin with 'mvn clean' and then execute part one and show the result.
...
As expected, install-feature.sh has been copied into target/stage and the parameter values have been inserted into the placeholders.
Installer pom.xml file part 2
Now the command executed in the "prepare-package" phase.
And we show the result of executing the command.
...
The maven repositories have been copied into installer/target/assembly with the correct structure and properly rooted at system/, although, as mentioned, the artifact "features-sliapi" is not included.
Installer pom.xml part 3
Parts 1 and 2 have copied all of the necessary maven repositories and scripts into the correct folder structure and with the correct parameter values. The pom.xml file now zips them up.
assemble_mvnrepo_zip.xml
And after it is executed...
...
The repositories have been properly zipped up into installer/target/stage/ccsdk-sliapi-0.3.0-SNAPSHOT-repo.zip.
Installer pom.xml part 4
And the final step.
assemble_installer_zip.xml
And after it is executed...
Voilà!
Installation in Casablanca
Setting the startup features
We have described the installation procedure prior to Casablanca, and we now turn to the procedure beginning in Casablanca. As we mentioned earlier, OpenDaylight is configured in Casablanca to install the CCSDK features upon booting up rather than afterwards in a bash script, and "component meta-features" are used rather than the individual features. OpenDaylight boots up much more quickly using this procedure. Features to install at boot time are configured in $ODL_HOME/etc/org.apache.karaf.features.cfg in two parameters: featuresRepositories and featuresBoot. We begin by looking in ccsdk/distribution/odlsli/src/main/docker/Dockerfile for the FROM image of the CCSDK container (master branch, 8/25/2018):
...
- featuresRepositories = file:${karaf.home}/etc/290021b0-51f7-4e02-8efa-007cad16f73a.xml
- featuresBoot = 0af5d86a-980c-48a9-a02d-bdac71ff8529
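A build step that adds CCSDK meta-features to the boot configuration might look like the following. This is a hedged sketch: the cfg file created here is a minimal stand-in for $ODL_HOME/etc/org.apache.karaf.features.cfg, and the sed edit is one illustrative way to append a feature to featuresBoot, not necessarily what the actual Dockerfile does.

```shell
# Create a stand-in for org.apache.karaf.features.cfg with the two
# parameters quoted in the text (heredoc quoted so ${karaf.home} is literal).
CFG=$(mktemp)
cat > ${CFG} <<'EOF'
featuresRepositories = file:${karaf.home}/etc/290021b0-51f7-4e02-8efa-007cad16f73a.xml
featuresBoot = 0af5d86a-980c-48a9-a02d-bdac71ff8529
EOF

# Append a component meta-feature to the boot list.
sed -i -e '/^featuresBoot/ s/$/,ccsdk-sli-core-all/' ${CFG}
cat ${CFG}
```

At boot, karaf would then install ccsdk-sli-core-all alongside the default startup feature.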
Startup features in the Dockerfile FROM IMAGE
These universally unique identifiers (UUIDs) identify the startup features. featuresRepositories references the file $ODL_HOME/etc/290021b0-51f7-4e02-8efa-007cad16f73a.xml, shown here.
...
```
opendaylight-user@root>feature:list -i
Name                                  | Version | Required | State   | Repository                           | Description
--------------------------------------+---------+----------+---------+--------------------------------------+--------------------------------------------------
aries-proxy                           | 4.1.5   |          | Started | standard-4.1.5                       | Aries Proxy
aries-blueprint                       | 4.1.5   |          | Started | standard-4.1.5                       | Aries Blueprint
feature                               | 4.1.5   |          | Started | standard-4.1.5                       | Features Support
shell                                 | 4.1.5   |          | Started | standard-4.1.5                       | Karaf Shell
shell-compat                          | 4.1.5   |          | Started | standard-4.1.5                       | Karaf Shell Compatibility
deployer                              | 4.1.5   |          | Started | standard-4.1.5                       | Karaf Deployer
bundle                                | 4.1.5   |          | Started | standard-4.1.5                       | Provide Bundle support
config                                | 4.1.5   |          | Started | standard-4.1.5                       | Provide OSGi ConfigAdmin support
diagnostic                            | 4.1.5   |          | Started | standard-4.1.5                       | Provide Diagnostic support
instance                              | 4.1.5   |          | Started | standard-4.1.5                       | Provide Instance support
jaas                                  | 4.1.5   |          | Started | standard-4.1.5                       | Provide JAAS support
log                                   | 4.1.5   |          | Started | standard-4.1.5                       | Provide Log support
package                               | 4.1.5   |          | Started | standard-4.1.5                       | Package commands and mbeans
service                               | 4.1.5   |          | Started | standard-4.1.5                       | Provide Service support
system                                | 4.1.5   |          | Started | standard-4.1.5                       | Provide System support
kar                                   | 4.1.5   |          | Started | standard-4.1.5                       | Provide KAR (KARaf archive) support
ssh                                   | 4.1.5   |          | Started | standard-4.1.5                       | Provide a SSHd server on Karaf
management                            | 4.1.5   |          | Started | standard-4.1.5                       | Provide a JMX MBeanServer and a set of MBeans in
wrap                                  | 0.0.0   |          | Started | standard-4.1.5                       | Wrap URL handler
standard                              | 4.1.5   |          | Started | standard-4.1.5                       | Wrap feature describing all features part of a st
0af5d86a-980c-48a9-a02d-bdac71ff8529  | 0.0.0   | x        | Started | 290021b0-51f7-4e02-8efa-007cad16f73a |
```
Editing the parameters for the startup karaf features
The above is the starting point for the CCSDK image. Recall that the Dockerfile for the CCSDK image contains these commands:
...
We see that the component meta-feature ccsdk-sli-core-all contains the repository and name for all of the features in the ccsdk/sli/core repository, including sliapi.
Constructing the component meta-feature
To see how the ccsdk-sli-core-all is constructed, we inspect the features directory in ccsdk/sli/core.
...