How to create a new pipeline (with a new ONAP version)?
The Daily chains are created to deploy and test ONAP.
Since the creation of the daily chains, several chains have been declared:
daily_frankfurt
daily_guilin
daily_honolulu
daily_istanbul
daily_master
Usually we keep master and the latest stable release, but with enough resources we could imagine keeping more versions.
How to create a new daily chain...
1) Declare the chain in chained-ci inventory
All the chains must be declared in https://gitlab.com/Orange-OpenSource/lfn/ci_cd/chained-ci/-/blob/master/pod_inventory/inventory
As everything is Ansible, the chains must be declared in the inventory. Note that we improved the system with collections but have not had time to share it with the community yet.
inventory
[orange_terrahouat]
oom-offline
openacumos
acumos_sandbox
onap_oom_gating_k8s_pod4_4
onap_oom_gating_k8s_pod4_3
onap_oom_gating_k8s_pod4_2
onap_oom_gating_k8s_pod4_1
onap_oom_gating_vnfs_pod4_4
onap_oom_gating_vnfs_pod4_3
onap_oom_gating_vnfs_pod4_2
onap_oom_gating_vnfs_pod4_1
onap_oom_gating_pod4_4
onap_oom_gating_pod4_3
onap_oom_gating_pod4_2
onap_oom_gating_pod4_1
onap_daily_pod4_k8s_master
onap_daily_pod4_master
onap_weekly_pod4_k8s_master
onap_weekly_pod4_master
onap_daily_pod4_k8s_ingress_master
onap_daily_pod4_ingress_master
onap_xtesting_k8s
onap_pod4_k8s_service_mesh_master
rke_daily_pod4
rke2_daily_pod4
kubespray_daily_pod4
harbor_server
onap_oom_pod4_sm_master
oronap_oom_gating_k8s_pod4_1
hardening_centos_pod4
onap_daily_pod4_k8s_test
onap_daily_pod4_test
onap_weekly_pod4_k8s_honolulu
onap_weekly_pod4_honolulu
onap_daily_pod4_k8s_honolulu
onap_daily_pod4_honolulu
onap_daily_pod4_k8s_istanbul
onap_daily_pod4_istanbul
onap_weekly_pod4_k8s_istanbul
onap_weekly_pod4_istanbul
new_k8s_daily
new_onap_daily
[azure]
onap_oom_gating_k8s_azure_3
onap_oom_gating_k8s_azure_4
onap_oom_staging_k8s_azure_1
onap_oom_gating_azure_3
onap_oom_gating_azure_4
onap_oom_staging_azure_1
In the example above I declared two new chains: new_k8s_daily and new_onap_daily.
Note that you could declare only one, but for the daily chains we are used to redeploying the Kubernetes cluster prior to the ONAP deployment: we always restart from scratch, which makes it possible to see any regression on OOM/Kubernetes.
2) Create the chains
Once declared, you must create several files.
Let's start with the host vars https://gitlab.com/Orange-OpenSource/lfn/ci_cd/chained-ci/-/tree/master/pod_inventory/host_vars.
You must create the files: new_k8s_daily.yml and new_onap_daily.yml
These files describe the chains you want to set up.
Let's consider new_k8s_daily.yml; it could look like this:
new_k8s
---
jumphost:
  server: rebond.opnfv.fr
  user: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    3434613036643437336466623463383762363761656564383535373037353066363563666634
    33633330623732333364666266316630363162333532663666380a3030386264383034643239
    3837626339336334313034623561396365626635656565353061666437393330633464333130
    3035366639373863643130346133620a32646230636637656338623835306330663036636235
    66623232333464643766
environment: orange_pod4/k8s_master
scenario_steps:
  config:
    project: config
    get_artifacts:
      - name: orange_vim_pod4
        static_src: true
    infra: k8s18-new-daily
    ssh_access: orange.eyml
  infra_deploy:
    project: os_infra_manager
    get_artifacts: config
    extra_parameters:
      ADMIN: true
      CLEAN: true
      TENANT_NAME: new-daily
      USER_NAME: new-daily-ci
      IDENTIFIER: -new-daily
      USE_PRIVATE_IP: True
      ADD_FLOATING_IP: True
      DNS_NAME: "{{ lookup('env','DNS_NAME') | default('master', true) }}"
  k8s_deploy:
    get_artifacts: infra_deploy
    project: kubespray
    branch: helm_3
    ssh_access: orange.eyml
    extra_parameters:
      kubespray_version: release-2.18
      helm_release: v3.6.4
      kube_network_plugin: cilium
      kubernetes_release: v1.22.4
      ENABLE_MONITORING: false
      DOCKER_HUB_PROXY: docker.nexus.azure.onap.eu
      GCR_PROXY: docker.nexus.azure.onap.eu
      K8S_GCR_PROXY: docker.nexus.azure.onap.eu
      QUAY_PROXY: docker.nexus.azure.onap.eu
  trigger:
    project: trigger
  k8s_test:
    project: functest_k8s
    get_artifacts:
      - name: infra_deploy
        limit_to:
          - inventory/infra: inventory/infra
      - name: config
        limit_to:
          - vars/pdf.yml: vars/pdf.yml
          - vars/ssh_gateways.yml: vars/ssh_gateways.yml
          - vars/vaulted_ssh_credentials.yml: vars/vaulted_ssh_credentials.yml
      - name: k8s_deploy
        limit_to:
          - vars/kube-config: vars/kube-config
    extra_parameters:
      DEPLOYMENT: kubespray
      TEST_RESULT_DB_URL: http://testresults.opnfv.org/test/api/v1/results
There is a lot of information in this file; it describes the different stages of the Kubernetes installation: config (init), infra_deploy (creation of the VMs on the Orange OpenStack), k8s_deploy (deployment of Kubernetes), k8s_test (Kubernetes testing).
For each stage we may change some parameters:
on the config part: the full configuration of the VMs is indicated through the parameter infra: k8s18-new-daily (see the Resources definition section below)
on the VM creation: for instance we can change the names of the tenants/VMs/users/...
on the Kubernetes installation: we can change the versions of kubespray, helm, the network plugin, the Kubernetes release... This section must be in line with the OOM recommendations (see the snippet after this list)
We also specify the Docker registries (here we use our internal mirror)
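As an illustration, the keys you typically adjust when aligning the Kubernetes chain with a new OOM release all live in the k8s_deploy stage; the values below are simply copied from the file above and must be replaced by the versions recommended by OOM for the targeted release:

# new_k8s_daily.yml (excerpt, only the keys to adapt are shown)
scenario_steps:
  k8s_deploy:
    extra_parameters:
      kubespray_version: release-2.18   # kubespray branch/tag
      helm_release: v3.6.4              # helm version expected by OOM
      kube_network_plugin: cilium       # CNI plugin
      kubernetes_release: v1.22.4       # kubernetes version recommended by OOM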
The same applies to the ONAP installation (chained once Kubernetes is installed):
new_onap
---
jumphost:
  server: rebond.opnfv.fr
  user: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    3434613036643437336466623463383762363761656564383535373037353066363563666634
    33633330623732333364666266316630363162333532663666380a3030386264383034643239
    3837626339336334313034623561396365626635656565353061666437393330633464333130
    3035366639373863643130346133620a32646230636637656338623835306330663036636235
    66623232333464643766
environment: orange_pod4/k8s_master/onap_daily
inpod: onap_daily_pod4_k8s_master
scenario_steps:
  config:
    project: config
    get_artifacts:
      - name: orange_vim_pod4
        static_src: true
    infra: onap-vnfs
    ssh_access: orange.eyml
  vnf_project_deploy:
    project: os_infra_manager
    get_artifacts: config
    extra_parameters:
      ADMIN: true
      CLEAN: true
      TENANT_NAME: onap-master-daily-vnfs
      USER_NAME: onap-master-daily-vnfs-ci
      IDENTIFIER: -onap
      NETWORK_IDENTIFIER: NONE
  onap_deploy:
    branch: master
    extra_parameters:
      OOM_BRANCH: master
      ONAP_REPOSITORY: nexus3.onap.org:10001
      ONAP_FLAVOR: small
      DOCKER_HUB_PROXY: docker.nexus.azure.onap.eu
      ELASTIC_PROXY: docker.nexus.azure.onap.eu
      K8S_GCR_PROXY: docker.nexus.azure.onap.eu
    get_artifacts:
      - name: vnf_project_deploy
        limit_to:
          - vars/user_cloud.yml: vars/user_cloud.yml
      - name: infra_deploy:onap_daily_pod4_k8s_master
        in_pipeline: false
        limit_to:
          - inventory/infra: inventory/infra
      - name: config:onap_daily_pod4_k8s_master
        in_pipeline: false
        limit_to:
          - vars/pdf.yml: vars/pdf.yml
          - vars/idf.yml: vars/idf.yml
          #- vars/ddf.yml: vars/ddf.yml
      - name: config
        limit_to:
          - vars/vim.yml: vars/vim.yml
          - vars/ssh_gateways.yml: vars/ssh_gateways.yml
          - vars/vaulted_ssh_credentials.yml: vars/vaulted_ssh_credentials.yml
    project: oom
  onap_test:
    project: xtesting-onap
    branch: master
    get_artifacts:
      - name: infra_deploy:onap_daily_pod4_k8s_master
        in_pipeline: false
        limit_to:
          - inventory/infra: inventory/infra
      - name: config:onap_daily_pod4_k8s_master
        in_pipeline: false
        limit_to:
          - vars/pdf.yml: vars/pdf.yml
      - name: k8s_deploy:onap_daily_pod4_k8s_master
        in_pipeline: false
        limit_to:
          - vars/kube-config: vars/kube-config
      - name: onap_deploy
        limit_to:
          - vars/cluster.yml: vars/cluster.yml
      - name: config
        limit_to:
          - vars/vim.yml: vars/vim.yml
          - vars/ssh_gateways.yml: vars/ssh_gateways.yml
          - vars/vaulted_ssh_credentials.yml: vars/vaulted_ssh_credentials.yml
    extra_parameters:
      DEPLOYMENT: oom
      INFRA_DEPLOYMENT: kubespray
      DEPLOYMENT_TYPE: full
      DEPLOY_SCENARIO: onap-ftw
      RANDOM_WAIT: True
      TEST_RESULT_DB_URL: http://testresults.opnfv.org/onap/api/v1/results
We find the same notion of stages: config (retrieve information from the VMs and the k8s installation), vnf_project_deploy (create a tenant for the tests), onap_deploy (ONAP deployment), onap_test (tests to be executed on ONAP, linked to the xtesting-onap gitlab project).
Here again you can modify some parameters: in the onap_deploy stage you can specify the branch of the OOM installer project (here master) and the branch of OOM itself (OOM_BRANCH), as illustrated below.
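For a new release, a minimal change set could look like the excerpt below (the istanbul branch is only used as a hypothetical example here; keep the other keys from the full file above):

# new_onap_daily.yml (excerpt, hypothetical values)
scenario_steps:
  onap_deploy:
    branch: istanbul            # branch of the OOM installer project
    extra_parameters:
      OOM_BRANCH: istanbul      # branch of OOM to deploy
  onap_test:
    branch: istanbul            # branch of the xtesting-onap project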
3) Resources definition
In the k8s chain definition, a parameter infra was set to specify the resources. These resources are defined in the associated PDF and IDF files that can be found in https://gitlab.com/Orange-OpenSource/lfn/ci_cd/chained-ci/-/tree/master/pod_config/config
If we consider for instance k8s18-onap-master, the 2 files are:
Infrastructure Description File (IDF): https://gitlab.com/Orange-OpenSource/lfn/ci_cd/chained-ci/-/blob/master/pod_config/config/idf-k8s18-onap-master.yaml
Platform Description File (PDF): https://gitlab.com/Orange-OpenSource/lfn/ci_cd/chained-ci/-/blob/master/pod_config/config/k8s18-onap-master.yaml
The parameters defined here are used by the infra manager collections to create the VMs and/or to deploy the infrastructure and platform components.
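The link between a chain and its resources is made by name: assuming the naming convention visible in the repository, a host_vars entry such as infra: k8s18-onap-master is expected to match two files under pod_config/config/:

# in the host_vars file
infra: k8s18-onap-master
# matching resource files in chained-ci/pod_config/config/
#   idf-k8s18-onap-master.yaml   -> Infrastructure Description File (IDF)
#   k8s18-onap-master.yaml       -> Platform Description File (PDF)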
4) Summary
If you want to create a new daily chain:
clone the chained-ci repo
git clone https://gitlab.com/Orange-OpenSource/lfn/ci_cd/chained-ci.git
add the two new chains in the inventory https://gitlab.com/Orange-OpenSource/lfn/ci_cd/chained-ci/-/blob/master/pod_inventory/inventory (the entry names must match the host_vars file names below, e.g. onap_daily_x_k8s and onap_daily_x)
copy/paste the master host_vars
cd chained-ci/pod_inventory/host_vars
cp onap_daily_pod4_k8s_master.yml onap_daily_x_k8s.yml
cp onap_daily_pod4_master.yml onap_daily_x.yml
Edit and adapt the chains
e.g. change the OOM branch to x in onap_daily_x.yml and update the versions of kubernetes, helm, ... according to the OOM recommendations
if the resources change, create an IDF and a PDF file in chained-ci/pod_config/config/ (note: if the resources are unchanged you may just reuse the existing ones, or copy/paste them for clarity)
cp chained-ci/pod_config/config/idf-k8s18-onap-master.yaml chained-ci/pod_config/config/idf-k8s18-onap-x.yaml
cp chained-ci/pod_config/config/k8s18-onap-master.yaml chained-ci/pod_config/config/k8s18-onap-x.yaml
and then reference infra: k8s18-onap-x in onap_daily_x_k8s.yml
add a schedule in the gitlab UI with the variable TARGET=onap-daily-x
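To sum up, a minimal sketch of the whole sequence (the x names are placeholders to adapt to the actual release):

# clone the repository
git clone https://gitlab.com/Orange-OpenSource/lfn/ci_cd/chained-ci.git
cd chained-ci
# 1) declare the two chains in pod_inventory/inventory (same names as the host_vars files)
# 2) copy and adapt the master host_vars (OOM branch, kubernetes/helm versions, infra reference, ...)
cp pod_inventory/host_vars/onap_daily_pod4_k8s_master.yml pod_inventory/host_vars/onap_daily_x_k8s.yml
cp pod_inventory/host_vars/onap_daily_pod4_master.yml pod_inventory/host_vars/onap_daily_x.yml
# 3) copy and adapt the resource files if needed, and reference them via infra: in the k8s host_vars
cp pod_config/config/idf-k8s18-onap-master.yaml pod_config/config/idf-k8s18-onap-x.yaml
cp pod_config/config/k8s18-onap-master.yaml pod_config/config/k8s18-onap-x.yaml
# 4) create a schedule in the gitlab UI with the variable TARGET=onap-daily-x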