Building the Self-Installer Package from Source Code
OOM ONAP offline installer
Self-Installer Package Build Guide
(Community Edition)
This document describes the procedure for generating an SI (self-installer) archive from source code.
It is meant to be run on a server with internet connectivity and downloads all artifacts required for a stable/beijing deployment, based on our static lists.
The procedure was fully tested on RHEL 7.4, which is our target platform; with small adaptations it should also be applicable to other platforms.
Part 1. Preparations
Part 2. Download artifacts for offline installer
Part 3. Populate local nexus
Part 4. Creating the self-installer archive
Version Control
Version | Date | Modified by | Comment |
0.1 | 13.10.2018 | Michal Ptacek | First draft |
Contributors
Name | Email |
Michal Ptáček | m.ptacek@partner.samsung.com |
Samuli Silvius | s.silvius@partner.samsung.com |
Timo Puha | t.puha@partner.samsung.com |
Petr Ospalý | p.ospaly@partner.samsung.com |
Pawel Mentel | p.mentel@samsung.com |
Witold Kopel | w.kopel@samsung.com |
Part 1. Preparations
We assume that the procedure is executed on:
- RHEL 7.4 server with ~200 GB of disk space and 16 GB+ RAM
- Internet connectivity
Moreover, the following packages have to be installed:
- screen expect nodejs git wget createrepo python2-pip patch
- docker (exact pinned version, from the CentOS repo)
- source code of the offline installer
This can be achieved with the following commands:
register the server (RHEL only)
subscription-manager register --username <rhel licence name> --password <password> --auto-attach
enable EPEL
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# enable rhel-7-server-e4s-optional-rpms in /etc/yum.repos.d/redhat.repo
# to get following rpm available (icewm dependency)
#   fribidi x86_64 0.19.4-6.el7 rhel-7-server-e4s-optional-rpms
install the following packages
yum install -y screen expect nodejs git wget createrepo python2-pip patch yum-utils
install docker
curl https://releases.rancher.com/install-docker/17.03.sh | sh
alternative way to install docker (works on both RHEL and CentOS):
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-17.03.3.ce-1.el7 docker-ce-selinux-17.03.3.ce-1.el7
start docker
systemctl start docker
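Optionally, you can also enable docker at boot and confirm the daemon responds; this is not required by the guide, just a quick sanity check:

systemctl enable docker
docker info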
download the installer
git clone https://git.onap.org/integration/devtool
cd devtool
git checkout remotes/origin/beijing
cd onap-offline
Part 2. Download artifacts for offline installer
All artifacts should be downloaded by running the following script:
./bash/tools/download_offline_data_by_lists.sh
The download is only as reliable as the network connectivity to the internet; it is highly recommended to run it in screen and to save a log file from the script execution, for checking
whether all artifacts were successfully collected. Each start and end of a script call should contain a timestamp in the console output.
Downloading consists of 12 steps, which should be checked one-by-one at the end; a minimal run-and-check sketch follows.
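A sketch of such a run (the log file name is arbitrary; the grep relies on the [Step N/12 ...] markers listed below):

# start a named screen session, then run the script inside it, teeing output to a log
screen -S offline_download
./bash/tools/download_offline_data_by_lists.sh 2>&1 | tee download_$(date +%Y%m%d_%H%M).log
# after completion, confirm all 12 step markers appear in the log
grep -E "Step [0-9]+/12" download_*.log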
*Verify:* Please take a look at the following comments on the respective parts of the download script.
[Step 1/12 Download collected docker images]
[Step 2/12 Download manually collected docker images]
E.g.:
== pkg #143 of 163 ==
rancher/etc-host-updater:v0.0.3
digest:sha256:bc156a5ae480d6d6d536aa454a9cc2a88385988617a388808b271e06dc309ce8
Error response from daemon: Get https://registry-1.docker.io/v2/rancher/etc-host-updater/manifests/v0.0.3: Get https://auth.docker.io/token?scope=repository%3Arancher%2Fetc-host-updater%3Apull&service=registry.docker.io: net/http: TLS handshake timeout
WARNING [!]: warning Command docker -l error pull rancher/etc-host-updater:v0.0.3 failed. Attempt: 2/5
INFO: info waiting 10s for another try...
v0.0.3: Pulling from rancher/etc-host-updater
b3e1c725a85f: Already exists
6a710864a9fc: Already exists
d0ac3b234321: Already exists
87f567b5cf58: Already exists
16914729cfd3: Already exists
83c2da5790af: Pulling fs layer
83c2da5790af: Verifying Checksum
83c2da5790af: Download complete
83c2da5790af: Pull complete
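The failed pull above succeeded on a later attempt. Assuming you saved a log as sketched in the previous part, a simple way to spot pulls that exhausted all five retries (based on the Attempt: N/5 format shown above):

grep "Attempt: 5/5" download_*.log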
[Step 3/12 Build own nginx image]
[Step 4/12 Save docker images from docker cache to tarfiles]
[Step 5/12 move infra related images to infra folder]
[Step 6/12 Download git repos]
E.g.
Cloning into bare repository 'github.com/rancher/community-catalog.git'...
error: RPC failed; result=28, HTTP code = 0
fatal: The remote end hung up unexpectedly
Cloning into bare repository 'git.rancher.io/rancher-catalog.git'...
Cloning into bare repository 'gerrit.onap.org/r/testsuite/properties.git'...
Cloning into bare repository 'gerrit.onap.org/r/portal.git'...
Cloning into bare repository 'gerrit.onap.org/r/aaf/authz.git'...
Cloning into bare repository 'gerrit.onap.org/r/demo.git'...
Cloning into bare repository 'gerrit.onap.org/r/dmaap/messagerouter/messageservice.git'...
Cloning into bare repository 'gerrit.onap.org/r/so/docker-config.git'...
[Step 7/12 Download http files]
[Step 8/12 Download npm pkgs]
[Step 9/12 Download bin tools]
[Step 10/12 Download rhel pkgs]
This step will work on RHEL only; for other platforms, different packages have to be downloaded.
We need just a couple of rpms, but those have a lot of dependencies (mostly because of VNC).
The script also downloads all perl packages from all repos, although we need only around a dozen of them.
The following is considered a successful run of this part:
Available: 1:net-snmp-devel-5.7.2-32.el7.i686 (rhel-7-server-rpms)
    net-snmp-devel = 1:5.7.2-32.el7
Available: 1:net-snmp-devel-5.7.2-33.el7_5.2.i686 (rhel-7-server-rpms)
    net-snmp-devel = 1:5.7.2-33.el7_5.2
Dependency resolution failed, some packages will not be downloaded.
No Presto metadata available for rhel-7-server-rpms
https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm: [Errno 12] Timeout on https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm: (28, 'Operation timed out after 30001 milliseconds with 0 out of 0 bytes received')
Trying other mirror.
Spawning worker 0 with 230 pkgs
Spawning worker 1 with 230 pkgs
Spawning worker 2 with 230 pkgs
Spawning worker 3 with 230 pkgs
Spawning worker 4 with 229 pkgs
Spawning worker 5 with 229 pkgs
Spawning worker 6 with 229 pkgs
Spawning worker 7 with 229 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
[Step 11/12 Download oom]
This step downloads the oom repo into ./resources/oom and patches it using our patch file. If this step is retried after previously passing, it will leave the oom repo in an inconsistent state,
because the patch will fail and create .rej files, which will then be flagged as broken during the onap_deploy part later (see the recovery sketch after the example below).
E.g. a successful run looks like this:
Checkout base commit which will be patched
Switched to a new branch 'patched_beijing'
patching file kubernetes/appc/values.yaml
patching file kubernetes/common/dgbuilder/templates/deployment.yaml
patching file kubernetes/dcaegen2/charts/dcae-cloudify-manager/templates/deployment.yaml
patching file kubernetes/dmaap/charts/message-router/templates/deployment.yaml
patching file kubernetes/onap/values.yaml
patching file kubernetes/policy/charts/drools/resources/config/opt/policy/config/drools/apps-install.sh
patching file kubernetes/policy/charts/drools/resources/scripts/update-vfw-op-policy.sh
patching file kubernetes/policy/resources/config/pe/push-policies.sh
patching file kubernetes/robot/values.yaml
patching file kubernetes/sdnc/charts/sdnc-ansible-server/templates/deployment.yaml
patching file kubernetes/sdnc/charts/sdnc-portal/templates/deployment.yaml
patching file kubernetes/uui/charts/uui-server/templates/deployment.yaml
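If the step was accidentally re-run on an already patched tree, a minimal recovery sketch (assuming the ./resources/oom path mentioned above):

# any hits mean the patch was applied to an already patched tree
find ./resources/oom -name '*.rej'
# remove the inconsistent copy, then re-run the download step
rm -rf ./resources/oom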
[Step 12/12 Download sdnc-ansible-server packages]
There is again no retry logic in this part; it collects packages for sdnc-ansible-server in exactly the same way that container does it.
However, there is an upstream bug: the image in place won't work with those packages, as the old ones are no longer available and the newer ones are not compatible with other components inside that image.
The following is the approximate size of all artifacts after a successful download:
[root@upstream-master onap-offline]# for i in `ls -1 resources/`; do du -h resources/$i | tail -1; done
126M    resources/downloads
97M     resources/git-repo
61M     resources/http
91G     resources/offline_data
36M     resources/oom
638M    resources/pkg
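To get the overall total in a single number:

du -sh resources/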
Part 3. Populate local nexus
Prereq:
Cleaned repos (ensure that we use our packages from onap.repo only); they can be restored afterwards, as sketched below:
mkdir /etc/yum.repos.d-backup && mv /etc/yum.repos.d/*.repo /etc/yum.repos.d-backup/
The current working directory must contain local_repo.conf.
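The original repos can be restored later by reversing the move:

mv /etc/yum.repos.d-backup/*.repo /etc/yum.repos.d/

With the prerequisites in place, deploy nexus: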
./bash/tools/deploy_nexus.sh
The user is prompted for the local IP where nexus should be deployed. For accessing the nexus GUI, feel free to use the preconfigured VNC server (adjust the firewall on the host server).
One just needs to create an ssh tunnel to the vncserver and connect to it with vncviewer:
vncserver listens on 5901 and password is onap
$ ssh -NfL 1234:127.0.0.1:5901 <the host server>
$ vncviewer 127.0.0.1::1234
Nexus must be accessed via one of the nginx-simulated domains, e.g. http://nexus3.onap.org (several services/domains share port 80 and are distinguished by their FQDN on the proxy).
The VNC server is just one example of how to access the nexus GUI; alternatively, a plain ssh tunnel can be created. None of this is needed if the server can be reached directly.
The following should be configured from the nexus GUI:
- Login (default: admin / admin123)
- Settings -> Repositories -> create repository #docker
- Select docker (hosted)
- Name: anything you want (ex: onap)
- Repository connectors: HTTP set to 8082
- Force basic authentication: UNchecked
- Enable docker V1 API: checked
- Settings -> Repositories -> create repository #npm
- Select npm (hosted)
- Name: npm-private
- Deployment policy: Allow redeploy
- Settings -> Repositories -> create repository #maven
- Select maven2 (hosted)
- Name: maven2
- Deployment policy: Allow redeploy
- Settings -> Security -> Realms
- Add Docker Bearer Token Realm
- Add npm Bearer Token Realm
- Settings -> Security -> Users
- Add user docker / pwd docker
- Rights same as anonymous
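A quick sanity check that the docker repository and the Docker Bearer Token Realm work is to log in with the user created above. The registry endpoint here is an assumption: it depends on how the nginx proxy maps the simulated nexus3.onap.org domain to the repository connector (port 10001 in the docker ps example in Part 4):

docker login -u docker -p docker nexus3.onap.org:10001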
Once nexus is created and configured, use the following script
to load the docker, npm, and maven artifacts into it:
./bash/tools/load_stored_offline_data.sh
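Once the load script finishes, you can spot-check that an image is now served locally before cleaning up. The image name below is hypothetical (taken from the Step 2 log earlier), and the exact repository path depends on how the load script tags the images:

docker pull nexus3.onap.org:10001/rancher/etc-host-updater:v0.0.3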
- Once all artifacts are safely loaded into nexus, the original directory with the image tar files
can be removed; we don't need them inside the SI archive anymore.
E.g.:
rm -rf ./resources/offline_data/docker_images_for_nexus/*
Part 4. Creating the self-installer archive
Note: as nexus_data is a folder mounted into nexus, we recommend stopping nexus first, before launching the SI archive creation script.
E.g.
[root@upstream-master onap-offline]# docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED       STATUS       PORTS                                                              NAMES
74d50a5d5212   own_nginx         "/bin/sh -c 'spawn..."   7 hours ago   Up 7 hours   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:10001->443/tcp   nginx
b73ae7b76d71   sonatype/nexus3   "sh -c ${SONATYPE_..."   7 hours ago   Up 7 hours   8081/tcp                                                           nexus
[root@upstream-master onap-offline]# docker stop b73ae7b76d71
b73ae7b76d71
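Since the nexus container is named nexus (see the NAMES column above), it can equivalently be stopped by name:

docker stop nexus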
The SI (self-installer) package can be created with prepopulated data via the following command:
./bash/tools/create_si_onap_pkg.sh <package_suffix_name>
E.g. ./bash/tools/create_si_onap_pkg.sh beijing1