OOM ONAP offline installer
Self-Installer Package Build Guide
(Community Edition)


This document describes the procedure for generating an SI (self-installer) archive from source code. It is supposed to be run on a server with internet connectivity and downloads all artifacts required for a stable/beijing deployment, based on our static lists.

The procedure was fully tested on RHEL 7.4, as that is our target platform; with small adaptations it should also be applicable to other platforms.




Part 1. Preparations
Part 2. Download artifacts for offline installer
Part 3. Populate local nexus
Part 4. Creating self-install archive



Version Control

Version | Date       | Modified by   | Comment
0.1     | 13.10.2018 | Michal Ptacek | First draft
Contributors

Name           | Mail
Michal Ptáček  | m.ptacek@partner.samsung.com
Samuli Silvius | s.silvius@partner.samsung.com
Timo Puha      | t.puha@partner.samsung.com
Petr Ospaly    | p.ospaly@partner.samsung.com
Pawel Mentel   | p.mentel@samsung.com
Witold Kopel   | w.kopel@samsung.com

Part 1. Preparations


We assume that the procedure is executed on:

...


Moreover, the following packages have to be installed:

  • screen expect nodejs git wget createrepo python2-pip patch
  • docker (exact version, from centos repo)
  • source code of offline installer


This can be achieved with the following commands:

  1. Register the server
    subscription-manager register --username <rhel licence name> --password <password> --auto-attach
    # enable epel for npm
    rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    # enable rhel-7-server-e4s-optional-rpms in /etc/yum.repos.d/redhat.repo
    # to make the following rpm available (icewm dependency)
    # fribidi x86_64 0.19.4-6.el7 rhel-7-server-e4s-optional-rpms
    # install the following packages
    yum install -y screen expect nodejs git wget createrepo python2-pip patch
  2. Install docker
    curl https://releases.rancher.com/install-docker/17.03.sh | sh
  3. Git clone the offline installer source code (see the example below)
    ssh://<git username>@106.120.118.76:29418/onap (take it from the integration/devtools repo)
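
For reference, the clone invocation would look something like this (the repository URL is the one given above):

$ git clone ssh://<git username>@106.120.118.76:29418/onap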
Part 2. Download artifacts for offline installer

All artifacts should be downloaded by running the following script:

$ ./bash/tools/download_offline_data_by_lists.sh

The download is only as reliable as the network connectivity to the internet, so it is highly recommended to run it in screen and to save the log from the script execution, in order to check whether all artifacts were successfully collected. Each start and end of a script call should contain a timestamp in the console output. Downloading consists of 12 steps, which should be checked one by one at the end.
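
A minimal sketch of such an invocation (the session and log file names are illustrative):

# inside a screen session, run the script and keep a timestamped log
$ screen -S offline_download
$ ./bash/tools/download_offline_data_by_lists.sh 2>&1 | tee download_$(date +%Y%m%d_%H%M%S).log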
Verify: Please take a look at the following comments on the respective parts of the download script.

[Step 1/12 Download collected docker images]
[Step 2/12 Download manually collected docker images]
=> both image download steps are quite reliable and contain retry logic
E.g.
== pkg #143 of 163 ==
rancher/etc-host-updater:v0.0.3 digest:sha256:bc156a5ae480d6d6d536aa454a9cc2a88385988617a388808b271e06dc309ce8
Error response from daemon: Get https://registry-1.docker.io/v2/rancher/etc-host-updater/manifests/v0.0.3: Get https://auth.docker.io/token?scope=repository%3Arancher%2Fetc-host-updater%3Apull&service=registry.docker.io: net/http: TLS handshake timeout
WARNING [!]: warning Command docker -l error pull rancher/etc-host-updater:v0.0.3 failed. Attempt: 2/5
INFO: info waiting 10s for another try...
v0.0.3: Pulling from rancher/etc-host-updater
b3e1c725a85f: Already exists
6a710864a9fc: Already exists
d0ac3b234321: Already exists
87f567b5cf58: Already exists
16914729cfd3: Already exists
83c2da5790af: Pulling fs layer
83c2da5790af: Verifying Checksum
83c2da5790af: Download complete
83c2da5790af: Pull complete

[Step 3/12 Build own nginx image]
=> there is no hardening in this step; if it fails, it needs to be re-triggered. It should end with "Successfully built <id>".

[Step 4/12 Save docker images from docker cache to tarfiles]
=> quite reliable, retry logic in place

[Step 5/12 Move infra related images to infra folder]
=> should be safe; the precondition is that step 3 did not fail

[Step 6/12 Download git repos]
=> potentially unsafe, no hardening in place. If it does not download all git repos, it has to be executed again. The easiest way is probably to comment out the other steps in the download script and run it again.

E.g.
Cloning into bare repository 'github.com/rancher/community-catalog.git'...
error: RPC failed; result=28, HTTP code = 0
fatal: The remote end hung up unexpectedly
Cloning into bare repository 'git.rancher.io/rancher-catalog.git'...
Cloning into bare repository 'gerrit.onap.org/r/testsuite/properties.git'...
Cloning into bare repository 'gerrit.onap.org/r/portal.git'...
Cloning into bare repository 'gerrit.onap.org/r/aaf/authz.git'...
Cloning into bare repository 'gerrit.onap.org/r/demo.git'...
Cloning into bare repository 'gerrit.onap.org/r/dmaap/messagerouter/messageservice.git'...
Cloning into bare repository 'gerrit.onap.org/r/so/docker-config.git'...

[Step 7/12 Download http files]
[Step 8/12 Download npm pkgs]
[Step 9/12 Download bin tools]
=> these work quite reliably. If not all artifacts were downloaded, the easiest way is again to comment out the other steps in the download script and run it again.

[Step 10/12 Download rhel pkgs]
=> this step works on RHEL only; for other platforms, different packages have to be downloaded. We need just a couple of RPMs, but those have a lot of dependencies (mostly because of VNC). The script also downloads all perl packages from all repos, although we need only around a dozen of them.

The following is considered a successful run of this part:

    Available: 1:net-snmp-devel-5.7.2-32.el7.i686 (rhel-7-server-rpms)
        net-snmp-devel = 1:5.7.2-32.el7
    Available: 1:net-snmp-devel-5.7.2-33.el7_5.2.i686 (rhel-7-server-rpms)
        net-snmp-devel = 1:5.7.2-33.el7_5.2
Dependency resolution failed, some packages will not be downloaded.
No Presto metadata available for rhel-7-server-rpms
https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm: [Errno 12] Timeout on https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm:
 (28, 'Operation timed out after 30001 milliseconds with 0 out of 0 bytes received')
Trying other mirror.
Spawning worker 0 with 230 pkgs
Spawning worker 1 with 230 pkgs
Spawning worker 2 with 230 pkgs
Spawning worker 3 with 230 pkgs
Spawning worker 4 with 229 pkgs
Spawning worker 5 with 229 pkgs
Spawning worker 6 with 229 pkgs
Spawning worker 7 with 229 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete

[Step 11/12 Download oom]
=> this step downloads the oom repo into ./resources/oom and patches it using our patch file. If this step is retried after previously passing, it will leave the oom repo in an inconsistent state: the patch will fail and create .rej files, which will be flagged as broken during the onap_deploy part later.

E.g. a successful run looks like this:

Checkout base commit which will be patched
Switched to a new branch 'patched_beijing'
patching file kubernetes/appc/values.yaml
patching file kubernetes/common/dgbuilder/templates/deployment.yaml
patching file kubernetes/dcaegen2/charts/dcae-cloudify-manager/templates/deployment.yaml
patching file kubernetes/dmaap/charts/message-router/templates/deployment.yaml
patching file kubernetes/onap/values.yaml
patching file kubernetes/policy/charts/drools/resources/config/opt/policy/config/drools/apps-install.sh
patching file kubernetes/policy/charts/drools/resources/scripts/update-vfw-op-policy.sh
patching file kubernetes/policy/resources/config/pe/push-policies.sh
patching file kubernetes/robot/values.yaml
patching file kubernetes/sdnc/charts/sdnc-ansible-server/templates/deployment.yaml
patching file kubernetes/sdnc/charts/sdnc-portal/templates/deployment.yaml
patching file kubernetes/uui/charts/uui-server/templates/deployment.yaml
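
If step 11 was accidentally re-run over an already patched checkout, a possible manual recovery (our suggestion, not something the script does for you) is to discard the checkout and run only this step again:

# discard the inconsistent, half-patched checkout
$ rm -rf ./resources/oom
# then comment out the other steps in the download script and re-run it,
# so that only step 11 (Download oom) executes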

[Step 12/12 Download sdnc-ansible-server packages]
=> there is again no retry logic in this part. It collects packages for sdnc-ansible-server in exactly the same way that container does it. However, there is an upstream bug: the image in place won't work with those packages, as the old ones are no longer available and the newer ones are not compatible with other components inside that image.

The following is the approximate size of all artifacts after a successful download:

[root@upstream-master onap-offline]# for i in `ls -1 resources/`; do du -h resources/$i | tail -1; done
126M    resources/downloads
97M     resources/git-repo
61M     resources/http
91G     resources/offline_data
36M     resources/oom
638M    resources/pkg

Part 3. Populate local nexus

Prerequisite:

  • Cleaned repos (ensure that we use our packages from onap.repo only)

rm -f /etc/yum.repos.d/*.repo

  1. In the actual directory with local_repo.conf, run:
    $ ./bash/tools/deploy_nexus.sh

    The user is prompted for the local IP where Nexus should be deployed.

  2. For accessing the Nexus GUI, feel free to use the preconfigured VNC server. One just needs to create an SSH tunnel to the VNC server and connect to it with vncviewer (password: onap).
    E.g. ssh -i ~/michal1_new_key 106.120.119.123 -L 1234:127.0.0.1:5901
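
    With the tunnel above in place, the viewer then connects to the forwarded local port (the port number is taken from the example tunnel):

    $ vncviewer 127.0.0.1::1234   # the '::' form selects a raw port in TigerVNC-style viewers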
Nexus must be accessed via one of the nginx-simulated domains, e.g. http://nexus3.onap.org/ (several services/domains share port 80, and the proxy distinguishes them by FQDN).

The following should be configured from the Nexus GUI:
  • Login (default: admin / admin123)
  • Settings -> Repositories -> Create repository # docker
    • Select docker (hosted)
    • Name: anything you want (e.g. onap)
    • Repository connectors: HTTP set to 8082
    • Force basic authentication: unchecked
    • Enable docker V1 API: checked
  • Settings -> Repositories -> Create repository # npm
    • Select npm (hosted)
    • Name: npm-private
    • Deployment policy: Allow redeploy
  • Settings -> Repositories -> Create repository # maven
    • Select maven2 (hosted)
    • Name: maven2
    • Deployment policy: Allow redeploy
  • Settings -> Security -> Realms
    • Add Docker Bearer Token Realm
    • Add npm Bearer Token Realm
  • Settings -> Security -> Users
    • Add user docker / pwd docker
    • Rights same as anonymous
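
As a quick sanity check of the docker repository connector and the docker user created above, a login against the HTTP connector port should succeed (the host name is illustrative; use the machine where Nexus runs):

$ docker login -u docker -p docker <nexus host>:8082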

...

  1. To load the docker, npm and maven artifacts into it, run:
    $ ./bash/tools/load_stored_offline_data.sh
  2. Once all artifacts are safely loaded into Nexus, the original directory with the tar files of images can be removed; we don't need them inside the SI archive anymore.
    E.g.
    rm -rf ./resources/offline_data/docker_images_for_nexus/*
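
Optionally, newer Nexus 3 versions expose a REST endpoint that can be used to confirm the repositories are present after loading (availability depends on the Nexus version; the host name assumes the simulated domain above):

$ curl -u admin:admin123 http://nexus3.onap.org/service/rest/v1/repositories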
Part 4. Creating self-install archive

Note: as nexus_data is a folder mounted into Nexus, we recommend stopping Nexus before launching the SI archive creation script.

E.g.
[root@upstream-master onap-offline]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                              NAMES
74d50a5d5212        own_nginx           "/bin/sh -c 'spawn..."   7 hours ago         Up 7 hours          0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:10001->443/tcp   nginx
b73ae7b76d71        sonatype/nexus3     "sh -c ${SONATYPE_..."   7 hours ago         Up 7 hours          8081/tcp                                                           nexus

[root@upstream-master onap-offline]# docker stop b73ae7b76d71
b73ae7b76d71
[root@upstream-master onap-offline]#
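
Since the container is named (see the NAMES column above), stopping it by name works as well:

$ docker stop nexus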

The SI (self-installer) package can be created with prepopulated data via the following command:

./bash/tools/create_si_onap_pkg.sh <package_suffix_name>

E.g. ./bash/tools/create_si_onap_pkg.sh beijing1

Notes / hints:

1) Install root CA on another server

NOTE: this is not needed, as it is done automatically. The script ./install_cacert.sh is generated automatically while deploying Nexus. Copy it to the other server and execute it. That is all.

Alternative: If the file does not exist, you can create it by calling
$ ./bash/tools/create_si_cacert_pkg.sh
The self-install certificate script will be created in the project directory (one level up from the ./tools dir). Note that the CA certificate must already exist in ./live/certs/rootCAcert.crt.

2) Modify/add simulated domains

Add records to

...

But keep the root CA cert (or you must copy it again to all machines).