OOM ONAP offline installer
Self-Installer Package Build Guide
(Community Edition)

This document describes the procedure for generating a self-installer (SI) archive from source code.

It is meant to be run on a server with internet connectivity, and it downloads all artifacts required for a stable/beijing deployment based on our static lists.
The procedure was fully tested on RHEL 7.4, as that is our target platform; with small adaptations it should also be applicable to other platforms.


Part 1. Preparations
Part 2. Download artifacts for offline installer
Part 3. Populate local nexus
Part 4. Creating self-install archive



Version Control

Version | Date       | Modified by   | Comment
0.1     | 13.10.2018 | Michal Ptacek | First draft

Contributors

Name           | Mail
Michal Ptáček  | m.ptacek@partner.samsung.com
Samuli Silvius | s.silvius@partner.samsung.com
Timo Puha      | t.puha@partner.samsung.com
Petr Ospalý    | p.ospaly@partner.samsung.com
Pawel Mentel   | p.mentel@samsung.com
Witold Kopeld  | w.kopel@samsung.com

Part 1. Preparations


We assume that the procedure is executed on:

  • RHEL 7.4 server with ~200 GB disk space and 16 GB+ RAM
  • internet connectivity


Moreover, the following packages have to be installed:

  • screen expect nodejs git wget createrepo python2-pip patch
  • docker (exact version, from the CentOS repo)
  • source code of the offline installer


This can be achieved with the following commands:

  1. Register the server (RHEL only)

    Code Block
     subscription-manager register --username <rhel licence name> --password <password> --auto-attach



  2. Enable EPEL

    Code Block
    yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    # enable rhel-7-server-e4s-optional-rpms in /etc/yum.repos.d/redhat.repo
    # to get the following rpm available (icewm dependency)
    # fribidi x86_64 0.19.4-6.el7 rhel-7-server-e4s-optional-rpms



  3. Install the following packages

    Code Block
    yum install -y screen expect nodejs git wget createrepo python2-pip patch yum-utils



  4. Install docker

    Code Block
    curl https://releases.rancher.com/install-docker/17.03.sh | sh



  5. Alternatively, install docker from the docker-ce repo (works on both RHEL and CentOS):

    Code Block
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum install docker-ce-17.03.3.ce-1.el7 docker-ce-selinux-17.03.3.ce-1.el7



  6. Start docker

    Code Block
    systemctl start docker



  7. Download the installer

    Code Block
    git clone https://git.onap.org/integration/devtool
    cd devtool
    git checkout remotes/origin/beijing
    cd onap-offline
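
As a quick sanity check before continuing (our suggestion, not part of the official procedure), verify that the pinned docker version is installed and running:

    Code Block
    docker --version            # expect a 17.03.x version string
    systemctl is-active docker  # expect "active"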



Part 2. Download artifacts for offline installer


All artifacts should be downloaded by running the following script:

Code Block
./bash/tools/download_offline_data_by_lists.sh

The download is only as reliable as the network connectivity to the internet, so it is highly recommended to run it in screen and to save the log file of the script execution, for checking whether all artifacts were successfully collected. Each start and end of a script call should print a timestamp in the console output.

Downloading consists of 12 steps, which should be checked at the end one by one.
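
For example, a minimal way to do this (the session and log file names below are placeholders we chose):

Code Block
screen -S onap-download
./bash/tools/download_offline_data_by_lists.sh 2>&1 | tee download_$(date +%Y%m%d_%H%M%S).log

The saved log can then be searched for the per-step start/end timestamps.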

Verify: Please take a look at the following comments on the respective parts of the download script.

[Step 1/12 Download collected docker images]

[Step 2/12 Download manually collected docker images]

Info
Both image download steps are quite reliable and contain retry logic.

E.g.


Code Block
== pkg #143 of 163 ==
rancher/etc-host-updater:v0.0.3 digest:sha256:bc156a5ae480d6d6d536aa454a9cc2a88385988617a388808b271e06dc309ce8
Error response from daemon: Get https://registry-1.docker.io/v2/rancher/etc-host-updater/manifests/v0.0.3: Get https://auth.docker.io/token?scope=repository%3Arancher%2Fetc-host-updater%3Apull&service=registry.docker.io: net/http: TLS handshake timeout
WARNING [!]: warning Command docker -l error pull rancher/etc-host-updater:v0.0.3 failed. Attempt: 2/5
INFO: info waiting 10s for another try...
v0.0.3: Pulling from rancher/etc-host-updater
b3e1c725a85f: Already exists
6a710864a9fc: Already exists
d0ac3b234321: Already exists
87f567b5cf58: Already exists
16914729cfd3: Already exists
83c2da5790af: Pulling fs layer
83c2da5790af: Verifying Checksum
83c2da5790af: Download complete
83c2da5790af: Pull complete




[Step 3/12 Build own nginx image]

Note
There is no hardening in this step; if it fails, it needs to be retriggered. It should end with "Successfully built <id>".
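
A quick way to confirm the image is present in the local docker cache (the own_nginx image name is taken from the docker ps output later in this guide):

Code Block
docker images | grep own_nginx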


[Step 4/12 Save docker images from docker cache to tarfiles]

Info
Quite reliable; retry logic is in place.

[Step 5/12 move infra related images to infra folder]

Info
Should be safe; the precondition is that step 3 did not fail.

[Step 6/12 Download git repos]

Note
Potentially unsafe, no hardening in place. If it does not download all git repos, it has to be executed again. The easiest way is probably to comment out the other steps in the download script and run it again.

E.g.


Code Block
Cloning into bare repository 'github.com/rancher/community-catalog.git'...
error: RPC failed; result=28, HTTP code = 0
fatal: The remote end hung up unexpectedly
Cloning into bare repository 'git.rancher.io/rancher-catalog.git'...
Cloning into bare repository 'gerrit.onap.org/r/testsuite/properties.git'...
Cloning into bare repository 'gerrit.onap.org/r/portal.git'...
Cloning into bare repository 'gerrit.onap.org/r/aaf/authz.git'...
Cloning into bare repository 'gerrit.onap.org/r/demo.git'...
Cloning into bare repository 'gerrit.onap.org/r/dmaap/messagerouter/messageservice.git'...
Cloning into bare repository 'gerrit.onap.org/r/so/docker-config.git'...
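
If only a single repo failed, it may be easier to re-fetch just that one manually. A hypothetical retry for the failed clone from the log above (the exact target path under ./resources/git-repo/ is our assumption):

Code Block
# re-clone one repo as a bare repository into the offline data tree (target path is an assumption)
git clone --bare https://github.com/rancher/community-catalog.git \
    ./resources/git-repo/github.com/rancher/community-catalog.git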




[Step 7/12 Download http files]


[Step 8/12 Download npm pkgs]


[Step 9/12 Download bin tools]

Info
Works quite reliably. If it does not download all artifacts, the easiest way is probably to comment out the other steps in the download script and run it again.

[Step 10/12 Download rhel pkgs]


Note

This is the step which works on RHEL only; for other platforms, different packages have to be downloaded.

We need just a couple of rpms, but those have a lot of dependencies (mostly because of VNC).

The script also downloads all perl packages from all repos, although we need only around a dozen of them.


The following is considered a successful run of this part:


Code Block
    Available: 1:net-snmp-devel-5.7.2-32.el7.i686 (rhel-7-server-rpms)
        net-snmp-devel = 1:5.7.2-32.el7
    Available: 1:net-snmp-devel-5.7.2-33.el7_5.2.i686 (rhel-7-server-rpms)
        net-snmp-devel = 1:5.7.2-33.el7_5.2
Dependency resolution failed, some packages will not be downloaded.
No Presto metadata available for rhel-7-server-rpms
https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm: [Errno 12] Timeout on https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm: (28, 'Operation timed out after 30001 milliseconds with 0 out of 0 bytes received')
Trying other mirror.
Spawning worker 0 with 230 pkgs
Spawning worker 1 with 230 pkgs
Spawning worker 2 with 230 pkgs
Spawning worker 3 with 230 pkgs
Spawning worker 4 with 229 pkgs
Spawning worker 5 with 229 pkgs
Spawning worker 6 with 229 pkgs
Spawning worker 7 with 229 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
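
A rough way to gauge the result of this step (assuming the rpms land under ./resources/pkg/, which matches the size listing at the end of this part):

Code Block
find ./resources/pkg -name '*.rpm' | wc -l   # total rpms fetched
du -sh ./resources/pkg                       # compare with the ~638M in the listing below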





[Step 11/12 Download oom]


Note

This step downloads the oom repo into ./resources/oom and patches it using our patch file. If this step is retried after previously passing, it will lead to an inconsistent oom repo, because the patch will fail and create some .rej files, which will be flagged as broken during the onap_deploy part later.
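
If the step does have to be retried, cleaning up first should avoid that inconsistent state; a sketch of our suggested approach (not part of the official scripts):

Code Block
# remove the previously patched copy so the oom step starts from a clean checkout
rm -rf ./resources/oom
# then re-run the oom step (e.g. with the other steps commented out in the download script)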



E.g. a successful run looks like this:


Code Block
Checkout base commit which will be patched
Switched to a new branch 'patched_beijing'
patching file kubernetes/appc/values.yaml
patching file kubernetes/common/dgbuilder/templates/deployment.yaml
patching file kubernetes/dcaegen2/charts/dcae-cloudify-manager/templates/deployment.yaml
patching file kubernetes/dmaap/charts/message-router/templates/deployment.yaml
patching file kubernetes/onap/values.yaml
patching file kubernetes/policy/charts/drools/resources/config/opt/policy/config/drools/apps-install.sh
patching file kubernetes/policy/charts/drools/resources/scripts/update-vfw-op-policy.sh
patching file kubernetes/policy/resources/config/pe/push-policies.sh
patching file kubernetes/robot/values.yaml
patching file kubernetes/sdnc/charts/sdnc-ansible-server/templates/deployment.yaml
patching file kubernetes/sdnc/charts/sdnc-portal/templates/deployment.yaml
patching file kubernetes/uui/charts/uui-server/templates/deployment.yaml




[Step 12/12 Download sdnc-ansible-server packages]


Note

There is again no retry logic in this part. It collects packages for the sdnc-ansible-server in exactly the same way that container does it; however, there is an upstream bug: the image in place won't work with those packages, as the old ones are no longer available and the newer ones are not compatible with other things inside that image.



The following is the approximate size of all artifacts after a successful download:


Code Block
[root@upstream-master onap-offline]# for i in `ls -1 resources/`;do du -h resources/$i | tail -1;done
126M    resources/downloads
97M     resources/git-repo
61M     resources/http
91G     resources/offline_data
36M     resources/oom
638M    resources/pkg

Part 3. Populate local nexus

Prereq:

  • Cleaned repos (ensure that we use our packages from onap.repo only)

    Code Block
    $ mkdir /etc/yum.repos.d-backup && mv /etc/yum.repos.d/*.repo /etc/yum.repos.d-backup/

  1. Run the nexus deployment script (from the directory with local_repo.conf)

    Code Block
    $ ./bash/tools/deploy_nexus.sh

    The user is prompted for the local IP where nexus should be deployed.

  2. For accessing the nexus GUI, feel free to use the preconfigured VNC server (adjust the firewall on the host server).

  3. One just needs to create an ssh tunnel to the vncserver and connect to it with vncviewer;
    the vncserver listens on 5901 and the password is onap.

    Code Block
    $ ssh -NfL 1234:127.0.0.1:5901 <the host server>
    $ vncviewer 127.0.0.1::1234

    Nexus must be accessed via one of the nginx-simulated domains,

    e.g. http://nexus3.onap.org (several services/domains share port 80, and the proxy distinguishes them by their FQDN).

    Note
    The VNC server is just an example of how to access the nexus GUI; alternatively, a tunnel can be created. None of this is needed if the server can be reached directly.


    The following should be configured from the nexus GUI:

  • Login (default: admin / admin123)
  • Settings -> Repositories -> Create repository  # docker
    • Select docker (hosted)
    • Name: anything you want (ex: onap)
    • Repository connectors: HTTP set to 8082
    • Force basic authentication: UNchecked
    • Enable docker V1 API: checked
  • Settings -> Repositories -> Create repository  # npm
    • Select npm (hosted)
    • Name: npm-private
    • Deployment policy: Allow redeploy
  • Settings -> Repositories -> Create repository  # maven
    • Select maven2 (hosted)
    • Name: maven2
    • Deployment policy: Allow redeploy
  • Settings -> Security -> Realms
    • Add Docker Bearer Token Realm
    • Add npm Bearer Token Realm
  • Settings -> Security -> Users
    • Add user docker / pwd docker
    • Rights same as anonymous

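
Before loading the data, one can optionally verify that the new docker connector and the simulated domain respond. A minimal sketch (/v2/_catalog is the standard docker registry API endpoint; the port, credentials, and domain are the ones configured above, and the simulated domain must resolve to the nginx proxy):

Code Block
curl -u docker:docker http://127.0.0.1:8082/v2/_catalog   # expect an (empty) repositories list
curl -I http://nexus3.onap.org                            # expect an HTTP response via the nginx proxy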


Once nexus is created and configured, use the following script to load the docker, npm, and maven artifacts into it:

    Code Block
    ./bash/tools/load_stored_offline_data.sh



Once all artifacts are safely loaded into nexus, the original directory with the image tar files can be removed; we don't need them inside the SI archive anymore.

E.g.


Code Block
rm -rf ./resources/offline_data/docker_images_for_nexus/* 





Part 4. Creating self-install archive



Note: as nexus_data is a folder mounted into nexus, we recommend stopping nexus before launching the SI archive creation script.

E.g.


Code Block
[root@upstream-master onap-offline]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                              NAMES
74d50a5d5212        own_nginx           "/bin/sh -c 'spawn..."   7 hours ago         Up 7 hours          0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:10001->443/tcp   nginx
b73ae7b76d71        sonatype/nexus3     "sh -c ${SONATYPE_..."   7 hours ago         Up 7 hours          8081/tcp                                                           nexus

[root@upstream-master onap-offline]# docker stop b73ae7b76d71
b73ae7b76d71


Adding a record to nginx

To add a record to nginx (e.g. a new simulated domain), the following files are involved:

  • ./cfg/v3.ext
  • ./bash/tools/common-functions.sh  // variable SIMUL_HOSTS
  • ./cfg/nginx.conf  // see below

Add the record to ./cfg/nginx.conf. This file is mounted inside the nginx container, so only a container restart is needed. Find the section which you need (nexus, git) and, in that section, update the variable named server_name. If you are updating git, then you must also create a bare repo in ./resources/git-repo/.

To finish:

  • Recreate nexus_server.crt

    Code Block
    $ ./bash/tools/certificates/2create_cert_for_nginx.sh

  • Restart the nginx container
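
A hypothetical server_name edit and the matching restart (the extra domain name is illustrative; the container name nginx comes from the docker ps output above):

Code Block
# in ./cfg/nginx.conf, inside the relevant server section, e.g.:
#   server_name nexus3.onap.org my-new-simulated.domain.org;
docker restart nginx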

The SI (self-installer) package can be created with prepopulated data via the following command:

Code Block
./bash/tools/create_si_onap_pkg.sh <package_suffix_name>

E.g. ./bash/tools/create_si_onap_pkg.sh beijing1