Table of Contents



PostgreSQL

Default Helm Chart

https://github.com/helm/charts/tree/master/stable/postgresql


Code Block
themeMidnight
ubuntu@ip-172-31-27-4:~$ sudo helm install --name pgstg stable/postgresql
NAME:   pgstg
LAST DEPLOYED: Sat Apr 27 21:51:58 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME              TYPE    DATA  AGE
pgstg-postgresql  Opaque  1     0s
==> v1/Service
NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
pgstg-postgresql-headless  ClusterIP  None           <none>       5432/TCP  0s
pgstg-postgresql           ClusterIP  10.43.163.107  <none>       5432/TCP  0s
==> v1beta2/StatefulSet
NAME              DESIRED  CURRENT  AGE
pgstg-postgresql  1        1        0s
==> v1/Pod(related)
NAME                READY  STATUS   RESTARTS  AGE
pgstg-postgresql-0  0/1    Pending  0         0s


NOTES:
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:
    pgstg-postgresql.default.svc.cluster.local - Read/Write connection
To get the password for "postgres" run:
    export POSTGRES_PASSWORD=$(kubectl get secret --namespace default pgstg-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database run the following command:
    kubectl run pgstg-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:10.7.0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host pgstg-postgresql -U postgres


To connect to your database from outside the cluster execute the following commands:
    kubectl port-forward --namespace default svc/pgstg-postgresql 5432:5432 &
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres

describe pod shows
  Warning  FailedScheduling  21s (x6 over 3m42s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims
Workaround:
Modify underlying yaml files to use a persistent volume with ReadWriteMany access
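A minimal sketch of such a PersistentVolume (the PV name, size and hostPath are assumptions; point the claimRef at whatever PVC "kubectl get pvc" reports as Pending, e.g. data-pgstg-postgresql-0):

Code Block
themeRDark
# hypothetical hostPath PV pre-bound to the chart's pending PVC
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pgstg-postgresql-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /srv/volumes/pgstg-postgresql
  claimRef:
    namespace: default
    name: data-pgstg-postgresql-0
EOF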



K8s only

Code Block
themeRDark
git clone https://github.com/helm/charts.git

sudo helm install postgresql --name pg

ubuntu@ip-172-31-27-4:~/charts/stable$ helm delete pg
release "pg" deleted
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pv --all-namespaces
No resources found.
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     data-pg-postgresql-0              Pending                                                     4m48s
default     data-pgstg-postgresql-0           Pending                                                     14h
default     data-wishful-skunk-postgresql-0   Pending                                                     13m

ubuntu@ip-172-31-27-4:~/charts/stable$ vi pg-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-serv-prov-nfs-server-provisioner-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /srv/volumes/nfs-serv-prov-nfs-server-provisioner-0
  claimRef:
    namespace: kube-system
    name: nfs-serv-prov-nfs-server-provisioner-0


ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl apply -f pg-pv.yaml 
persistentvolume/nfs-serv-prov-nfs-server-provisioner-0 created


ubuntu@ip-172-31-27-4:~/charts/stable$ helm delete --purge pg
release "pg" deleted

sudo helm install postgresql --name pg

ubuntu@ip-172-31-27-4:~/charts/stable$ helm delete --purge pg
release "pg" deleted
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     data-pg-postgresql-0   Pending                                                     7m23s
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl delete pvc data-pg-postgresql-0
persistentvolumeclaim "data-pg-postgresql-0" deleted


change storage-class from - to nfs-provisioner
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
default     data-pg-postgresql-0   Pending                                      nfs-provisioner   7s


follow
https://severalnines.com/blog/using-kubernetes-deploy-postgresql
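# sketch of the postgres-storage.yaml used below - names, sizes and access modes are taken
# from the kubectl get pv/pvc output further down; the hostPath location is an assumption
cat > postgres-storage.yaml <<'EOF'
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF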


ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl create -f postgres-storage.yaml 
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pv --all-namespaces
NAME                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                STORAGECLASS   REASON   AGE
nfs-serv-prov-nfs-server-provisioner-0   200Gi      RWO            Retain           Available   kube-system/nfs-serv-prov-nfs-server-provisioner-0                           10m
postgres-pv-volume                       5Gi        RWX            Retain           Bound       default/postgres-pv-claim                            manual                  23s
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     postgres-pv-claim   Bound    postgres-pv-volume   5Gi        RWX            manual         32s




ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl create -f postgres-deployment.yaml 
deployment.extensions/postgres created
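# sketch of the postgres-deployment.yaml applied above, following the severalnines walkthrough -
# the image tag matches the server version seen in psql below; env values (especially the
# password) are assumptions and should be replaced
cat > postgres-deployment.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: postgresdb
            - name: POSTGRES_USER
              value: postgresadmin
            - name: POSTGRES_PASSWORD
              value: admin123
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
EOF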


ubuntu@ip-172-31-27-4:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
default         nfs-serv-prov-nfs-server-provisioner-0    1/1     Running     0          26m
default         postgres-78f78bfbfc-pw4zp                 1/1     Running     0          22s


ubuntu@ip-172-31-27-4:~/charts/stable$ vi postgres-service.yaml
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl create -f postgres-service.yaml 
service/postgres created


ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get svc postgres
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
postgres   NodePort   10.43.57.215   <none>        5432:30170/TCP   25s


ubuntu@ip-172-31-27-4:~/charts/stable$ sudo apt install postgresql-client-common
ubuntu@ip-172-31-27-4:~/charts/stable$ sudo apt-get install postgresql-client

ubuntu@ip-172-31-27-4:~/charts/stable$ psql -h localhost -U postgresadmin --password -p 30170 postgresdb
Password for user postgresadmin: 
psql (10.7 (Ubuntu 10.7-0ubuntu0.18.04.1), server 10.4 (Debian 10.4-2.pgdg90+1))
Type "help" for help.


postgresdb-# \l
                                 List of databases
    Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
------------+----------+----------+------------+------------+-----------------------
 postgres   | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 postgresdb | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
            |          |          |            |            | postgres=CTc/postgres
 template1  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
            |          |          |            |            | postgres=CTc/postgres
(4 rows)


dump
ubuntu@ip-172-31-27-4:~/charts/stable$ pg_dump -h localhost -U postgresadmin -p 30170 -W -F t postgresdb
Password: 
woc.dat0000600 0004000 0002000 00000003034 13461323457 0014447 0ustar00postgrespostgres0000000 0000000 PGDMP
postgresdb10.4 (Debian 10.4-2.pgdg90+1)#10.7 (Ubuntu 10.7-0ubuntu0.18.04.1)
                                                                          0ENCODINENCODINGSET client_encoding = 'UTF8';
false
     0
STDSTRINGS
STDSTRINGS(SET standard_conforming_strings = 'on';
false
     00
SEARCHPATH
SEARCHPATH8SELECT pg_catalog.set_config('search_path', '', false);
false
     126216384
postgresdDATABASEzCREATE DATABASE postgresdb WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.utf8' LC_CTYPE = 'en_US.utf8';
DROP DATABASE postgresdb;
postgresfalse26152200publicSCHEMACREATE SCHEMA public;
DROP SCHEMA public;
postgresfalse
SCHEMA publicCOMMENT6COMMENT ON SCHEMA public IS 'standard public schema';
postgresfalse3307912980plpgsql	EXTENSION?CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
DROP EXTENSION plpgsql;
false
     00EXTENSION plpgsqlCOMMENT@COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
false1restore.sql0000600 0004000 0002000 00000002333 13461323457 0015375 0ustar00postgrespostgres0000000 0000000 --
-- NOTE:
--
-- File paths need to be edited. Search for $$PATH$$ and
-- replace it with the path to the directory containing
-- the extracted data files.
--
--
-- PostgreSQL database dump
--

-- Dumped from database version 10.4 (Debian 10.4-2.pgdg90+1)
-- Dumped by pg_dump version 10.7 (Ubuntu 10.7-0ubuntu0.18.04.1)

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;

DROP EXTENSION plpgsql;
DROP SCHEMA public;
--
-- Name: public; Type: SCHEMA; Schema: -; Owner: postgres
--

CREATE SCHEMA public;


ALTER SCHEMA public OWNER TO postgres;

--
-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
--

COMMENT ON SCHEMA public IS 'standard public schema';


--
-- Name: plpgsql; Type: EXTENSION; Schema: -; Owner: 
--

CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;


--
-- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner: 
--

COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';


--
-- PostgreSQL database dump complete
--







Backup
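A sketch of capturing the dump above to a file over the same NodePort connection (the output filename is an assumption):

Code Block
themeRDark
pg_dump -h localhost -U postgresadmin -p 30170 -W -F t postgresdb > postgresdb_backup.tar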

Restore
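Correspondingly, a sketch of restoring that tar-format dump with pg_restore (the target database and the -c clean flag are assumptions):

Code Block
themeRDark
pg_restore -h localhost -p 30170 -U postgresadmin -W -d postgresdb -c postgresdb_backup.tar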

Gerrit

Config Jobs

login as default admin

create test repo

Gerrit repo url

Code Block
themeRDark
verified
git clone ssh://admin@gerrit2.ons.zone:29418/test.git
remote: Counting objects: 2, done
git clone admin@gerrit2.ons.zone:29418/test.git
admin@gerrit2.ons.zone: Permission denied (publickey).

verified from the gerrit3 host to gerrit2
ubuntu@ip-172-31-31-191:~$ git clone  ssh://admin@gerrit2.ons.zone:29418/replicate.git
Cloning into 'replicate'...

verified admin user
ubuntu@ip-172-31-31-191:~$ ssh -p 29418 admin@gerrit2.ons.zone
Warning: Permanently added '[gerrit2.ons.zone]:29418,[3.17.20.86]:29418' (RSA) to the list of known hosts.
  Hi Administrator, you have successfully connected over SSH.


gerrit source - /var/gerrit/etc/gerrit.config

Code Block
themeRDark
# adjust server name
[gerrit]
        #canonicalWebUrl = http://fcdbe931c71d
        canonicalWebUrl = http://gerrit2.ons.zone

[receive]
        #enableSignedPush = false
        enableSignedPush = true


gerrit source - /var/gerrit/etc/replication.config

Code Block
themeRDark
# ssh version
[remote "gerrit2"]
  url = ssh://admin@gerrit2.ons.zone:29418/${name}.git
  push = +refs/heads/*:refs/heads/*
  push = +refs/tags/*:refs/tags/*


# http version
[gerrit]
  defaultForceUpdate = true
[remote "gerrit2"]
  url = http://admin:NobJjm7wlDFvAObPWo5ZwlnmQEwdt9fyBJlJdIE5WQ@gerrit2.ons.zone:8080/${name}.git
  mirror = true
  threads = 3
  push = +refs/heads/*:refs/heads/*
  push = +refs/changes/*:refs/changes/*
  push = +refs/tags/*:refs/tags/*

Host ~/.ssh/config

Code Block
themeMidnight
Host remote-alias gerrit2.ons.zone
  IdentityFile ~/.ssh/onap_rsa
  Hostname gerrit2.ons.zone
  Protocol 2
  HostKeyAlgorithms ssh-rsa,ssh-dss
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Adjust docker hostname

Compose

Code Block
themeRDark
services:
  gerrit:
    image: gerritcodereview/gerrit
    hostname: gerrit2.ons.zone


Kubernetes

Add ssh admin key
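One way to add the key without the UI is Gerrit's REST endpoint for SSH keys - a sketch, assuming the public key sits next to the onap_rsa private key used elsewhere on this page:

Code Block
themeRDark
# POST the public key as text/plain to the accounts REST API (use the admin HTTP password)
curl -u admin:<http-password> -H "Content-Type: text/plain" \
  --data-binary "@$HOME/.ssh/onap_rsa.pub" \
  http://gerrit.ons.zone:8080/a/accounts/self/sshkeys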


Verification

verify port 

 curl http://gerrit.ons.zone:8080/ssh_info

Replication

https://gerrit.googlesource.com/plugins/replication/+doc/master/src/main/resources/Documentation/config.md

fixed in https://www.gerritcodereview.com/2.14.html

Replication Use Case - commit change

Make change on gerrit, merge, kick off replication job, view change on gerrit2

Code Block
themeRDark
# 3 machines
# obriensystems dev laptop
# gerrit source server
# gerrit2 replication server
# on remote dev host - against gerrit
git clone "ssh://admin@gerrit.ons.zone:29418/test" && scp -p -P 29418 admin@gerrit.ons.zone:hooks/commit-msg "test/.git/hooks/"
cd test/
vi test.sh 
git add test.sh 
git commit -s --amend
git review
# getting merge conflict - needed to remove old commit id
vi test.sh 
git add test.sh 
git rebase --continue
git review

# move to gerrit UI, +2 review, merge
# on gerrit server
ssh ubuntu@gerrit.ons.zone
# tail the logs to the gerrit container

# on dev laptop
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone gerrit plugin reload replication
[2019-03-28 15:25:57,246] [SSH gerrit plugin reload replication (admin)] INFO  com.google.gerrit.server.plugins.PluginLoader : Reloaded plugin replication, version v2.16.6
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication list
Remote: gerrit2
Url: admin@gerrit2.ons.zone:8080/${name}.git

[2019-03-28 15:26:57,963] [WorkQueue-1] INFO  com.google.gerrit.server.plugins.CleanupHandle : Cleaned plugin plugin_replication_190328_0446_6094540689096397413.jar
# debug on
ssh -p 29418 admin@gerrit.ons.zone gerrit logging set DEBUG                                          
# debug off
ssh -p 29418 admin@gerrit.ons.zone gerrit logging set reset
ssh -p 29418 admin@gerrit.ons.zone replication start --wait --all 
# nothing yet - debugging container I only see a recent
var/gerrit/data/replication/ref-updates
-rw-r--r-- 1 gerrit gerrit   45 Mar 28 15:25 9cbb43eb3ce03badc8b3c7dc52ef84d8d6e67066
bash-4.2$ cat 9cbb43eb3ce03badc8b3c7dc52ef84d8d6e67066 
{"project":"test","ref":"refs/heads/master"}


Issue was the key - after changing the url to 
url = admin@gerrit2.ons.zone:29418/${name}.git
I can ssh directly from gerrit to gerrit2 but the key is not available to the container yet
sshd_log
[2019-03-28 15:57:50,164 +0000] b2bd0870 admin a/1000000 replication.start.--all 3ms 1ms 0
replication_log
[2019-03-28 17:34:07,816] [72da30d3] Replication to admin@gerrit2.ons.zone:29418/All-Users.git started...
[2019-03-28 17:34:07,834] [72da30d3] Cannot replicate to admin@gerrit2.ons.zone:29418/All-Users.git
org.eclipse.jgit.errors.TransportException: admin@gerrit2.ons.zone:29418/All-Users.git: reject HostKey: gerrit2.ons.zone
	at org.eclipse.jgit.transport.JschConfigSessionFactory.getSession(JschConfigSessionFactory.java:192)


# I am running hashed
ubuntu@ip-172-31-15-176:~$ grep "HashKnownHosts" /etc/ssh/ssh_config
    HashKnownHosts yes


# tried - it may be my url
Url: ssh://admin@gerrit2.ons.zone:29418/${name}.git
from
Url: admin@gerrit2.ons.zone:29418/${name}.git

[2019-03-28 21:54:04,089] [] Canceled 3 replication events during shutdown
[2019-03-28 21:54:17,738] [] scheduling replication All-Projects:..all.. => ssh://admin@gerrit2.ons.zone:29418/All-Projects.git
[2019-03-28 21:54:17,750] [] scheduled All-Projects:..all.. => [283d568e] push ssh://admin@gerrit2.ons.zone:29418/All-Projects.git to run after 15s
[2019-03-28 21:54:17,750] [] scheduling replication All-Users:..all.. => ssh://admin@gerrit2.ons.zone:29418/All-Users.git
[2019-03-28 21:54:17,751] [] scheduled All-Users:..all.. => [684a6e1d] push ssh://admin@gerrit2.ons.zone:29418/All-Users.git to run after 15s
[2019-03-28 21:54:17,751] [] scheduling replication test:..all.. => ssh://admin@gerrit2.ons.zone:29418/test.git
[2019-03-28 21:54:17,751] [] scheduled test:..all.. => [a84066fe] push ssh://admin@gerrit2.ons.zone:29418/test.git to run after 15s
[2019-03-28 21:54:32,751] [283d568e] Replication to ssh://admin@gerrit2.ons.zone:29418/All-Projects.git started...
[2019-03-28 21:54:32,857] [283d568e] Cannot replicate to ssh://admin@gerrit2.ons.zone:29418/All-Projects.git
org.eclipse.jgit.errors.TransportException: ssh://admin@gerrit2.ons.zone:29418/All-Projects.git: reject HostKey: gerrit2.ons.zone



The hostname is generated by docker - override it
using the full hostname rather than a hostname.domainname pair - gerrit will only pick up the prefix
services:
  gerrit:
    image: gerritcodereview/gerrit
    hostname: gerrit2.ons.zone

ubuntu@ip-172-31-6-115:~$ sudo docker exec -it ubuntu_gerrit_1 bash
bash-4.2$ hostname
gerrit2.ons.zone


#trying protocol 1 (insecure) instead of 2
/home/ubuntu/.ssh/config line 5: Bad protocol spec '1'.


Verifying via https://gerrit.googlesource.com/plugins/replication/+/master/src/main/resources/Documentation/config.md
PreferredAuthentications publickey




Working with Igor - trying http instead of ssh









[2019-03-29 19:54:23,740] [6737289e] Cannot replicate replicate; Remote repository error: http://admin@gerrit2.ons.zone:8080/replicate.git: replicate unavailable

[2019-03-29 19:54:23,741] [a71c401e] Replication to http://admin@gerrit2.ons.zone:8080/test.git started...

[2019-03-29 19:54:23,754] [e7c79889] Replication to http://admin@gerrit2.ons.zone:8080/All-Projects.git completed in 121ms, 15001ms delay, 0 retries

[2019-03-29 19:54:23,756] [273130a7] Replication to http://admin@gerrit2.ons.zone:8080/All-Users.git completed in 116ms, 15000ms delay, 0 retries

[2019-03-29 19:54:23,762] [a71c401e] Push to http://admin@gerrit2.ons.zone:8080/test.git references: [RemoteRefUpdate[remoteName=refs/heads/master, NOT_ATTEMPTED, (null)...e9efbebcb130387cfa65e6f3c47dea5d005f8bbe, srcRef=refs/heads/master, forceUpdate, message=null]]

[2019-03-29 19:54:24,093] [a71c401e] Failed replicate of refs/heads/master to http://admin@gerrit2.ons.zone:8080/test.git, reason: prohibited by Gerrit: not permitted: force update

[2019-03-29 19:54:24,094] [a71c401e] Replication to http://admin@gerrit2.ons.zone:8080/test.git completed in 352ms, 15103ms delay, 0 retries




changed enableSignedPush to true











[2019-03-29 20:11:48,466] [a722008c] Replication to http://admin@gerrit2.ons.zone:8080/test.git started...

[2019-03-29 20:11:48,467] [a722008c] Push to http://admin@gerrit2.ons.zone:8080/test.git references: [RemoteRefUpdate[remoteName=refs/changes/03/1003/meta, NOT_ATTEMPTED, (null)...a9f165b560889e937a10ac45f425c6d727a8fb78, srcRef=refs/changes/03/1003/meta, forceUpdate, message=null], RemoteRefUpdate[remoteName=refs/changes/03/1003/1, NOT_ATTEMPTED, (null)...477762111d6ad43984fd3ee908730267880469c2, srcRef=refs/changes/03/1003/1, forceUpdate, message=null]]

[2019-03-29 20:11:48,662] [a722008c] Failed replicate of refs/changes/03/1003/meta to http://admin@gerrit2.ons.zone:8080/test.git, reason: cannot combine normal pushes and magic pushes

[2019-03-29 20:11:48,662] [a722008c] Failed replicate of refs/changes/03/1003/1 to http://admin@gerrit2.ons.zone:8080/test.git, reason: cannot combine normal pushes and magic pushes

[2019-03-29 20:11:48,663] [a722008c] Replication to http://admin@gerrit2.ons.zone:8080/test.git completed in 197ms, 15000ms delay, 0 retries






review+2/committed







[2019-03-29 20:15:12,674] [a7d4407f] Replication to http://admin@gerrit2.ons.zone:8080/test.git started...
[2019-03-29 20:15:12,676] [a7d4407f] Push to http://admin@gerrit2.ons.zone:8080/test.git references: [RemoteRefUpdate[remoteName=refs/changes/03/1003/meta, NOT_ATTEMPTED, (null)...60d083b4f4b0916a6dc0da694c52a5e7ff08a9b7, srcRef=refs/changes/03/1003/meta, forceUpdate, message=null], RemoteRefUpdate[remoteName=refs/heads/master, NOT_ATTEMPTED, (null)...477762111d6ad43984fd3ee908730267880469c2, srcRef=refs/heads/master, forceUpdate, message=null]]
[2019-03-29 20:15:12,862] [a7d4407f] Failed replicate of refs/changes/03/1003/meta to http://admin@gerrit2.ons.zone:8080/test.git, reason: NoteDb update requires -o notedb=allow
[2019-03-29 20:15:12,862] [a7d4407f] Failed replicate of refs/heads/master to http://admin@gerrit2.ons.zone:8080/test.git, reason: prohibited by Gerrit: not permitted: force update
[2019-03-29 20:15:12,863] [a7d4407f] Replication to http://admin@gerrit2.ons.zone:8080/test.git completed in 188ms, 15000ms delay, 0 retries




gerrit.defaultForceUpdate :	If true, the default push refspec will be set to use forced update to the remote when no refspec is given. By default, false.


[gerrit]
        defaultForceUpdate = true


still the "cannot combine normal pushes and magic pushes" error - added the above config to the target gerrit



https://groups.google.com/forum/#!msg/repo-discuss/m2E72F2oiuo/w-lWg0WUZYIJ







[2019-03-29 21:03:24,045] [] scheduling replication All-Projects:..all.. => http://admin@gerrit2.ons.zone:8080/All-Projects.git

[2019-03-29 21:03:24,068] [] scheduled All-Projects:..all.. => [71c8678f] push http://admin@gerrit2.ons.zone:8080/All-Projects.git to run after 15s

[2019-03-29 21:03:24,071] [] scheduling replication All-Users:..all.. => http://admin@gerrit2.ons.zone:8080/All-Users.git

[2019-03-29 21:03:24,072] [] scheduled All-Users:..all.. => [b1cedf9b] push http://admin@gerrit2.ons.zone:8080/All-Users.git to run after 15s

[2019-03-29 21:03:24,075] [] scheduling replication replicate:..all.. => http://admin@gerrit2.ons.zone:8080/replicate.git

[2019-03-29 21:03:24,076] [] scheduled replicate:..all.. => [f19bf7a7] push http://admin@gerrit2.ons.zone:8080/replicate.git to run after 15s

[2019-03-29 21:03:24,079] [] scheduling replication test:..all.. => http://admin@gerrit2.ons.zone:8080/test.git

[2019-03-29 21:03:24,080] [] scheduled test:..all.. => [3192ef91] push http://admin@gerrit2.ons.zone:8080/test.git to run after 15s

[2019-03-29 21:03:39,070] [71c8678f] Replication to http://admin@gerrit2.ons.zone:8080/All-Projects.git started...

[2019-03-29 21:03:39,072] [b1cedf9b] Replication to http://admin@gerrit2.ons.zone:8080/All-Users.git started...

[2019-03-29 21:03:39,076] [f19bf7a7] Replication to http://admin@gerrit2.ons.zone:8080/replicate.git started...

[2019-03-29 21:03:39,245] [71c8678f] Replication to http://admin@gerrit2.ons.zone:8080/All-Projects.git completed in 173ms, 15004ms delay, 0 retries

[2019-03-29 21:03:39,246] [3192ef91] Replication to http://admin@gerrit2.ons.zone:8080/test.git started...

[2019-03-29 21:03:39,245] [f19bf7a7] Push to http://admin@gerrit2.ons.zone:8080/replicate.git references: [RemoteRefUpdate[remoteName=refs/heads/master, NOT_ATTEMPTED, (null)...410858cdd130ee2d56700d199f4021c246c1d22b, srcRef=refs/heads/master, forceUpdate, message=null]]

[2019-03-29 21:03:39,245] [b1cedf9b] Replication to http://admin@gerrit2.ons.zone:8080/All-Users.git completed in 173ms, 15000ms delay, 0 retries

[2019-03-29 21:03:39,281] [3192ef91] Push to http://admin@gerrit2.ons.zone:8080/test.git references: [RemoteRefUpdate[remoteName=refs/changes/03/1003/1, NOT_ATTEMPTED, (null)...477762111d6ad43984fd3ee908730267880469c2, srcRef=refs/changes/03/1003/1, forceUpdate, message=null], RemoteRefUpdate[remoteName=refs/changes/03/1003/meta, NOT_ATTEMPTED, (null)...60d083b4f4b0916a6dc0da694c52a5e7ff08a9b7, srcRef=refs/changes/03/1003/meta, forceUpdate, message=null], RemoteRefUpdate[remoteName=refs/heads/master, NOT_ATTEMPTED, (null)...477762111d6ad43984fd3ee908730267880469c2, srcRef=refs/heads/master, forceUpdate, message=null]]

[2019-03-29 21:03:39,682] [3192ef91] Failed replicate of refs/changes/03/1003/1 to http://admin@gerrit2.ons.zone:8080/test.git, reason: cannot combine normal pushes and magic pushes

[2019-03-29 21:03:39,682] [3192ef91] Failed replicate of refs/changes/03/1003/meta to http://admin@gerrit2.ons.zone:8080/test.git, reason: cannot combine normal pushes and magic pushes

[2019-03-29 21:03:39,682] [3192ef91] Failed replicate of refs/heads/master to http://admin@gerrit2.ons.zone:8080/test.git, reason: cannot combine normal pushes and magic pushes

[2019-03-29 21:03:39,682] [3192ef91] Replication to http://admin@gerrit2.ons.zone:8080/test.git completed in 435ms, 15166ms delay, 0 retries

[2019-03-29 21:03:39,704] [f19bf7a7] Failed replicate of refs/heads/master to http://admin@gerrit2.ons.zone:8080/replicate.git, reason: prohibited by Gerrit: not permitted: force update

[2019-03-29 21:03:39,705] [f19bf7a7] Replication to http://admin@gerrit2.ons.zone:8080/replicate.git completed in 628ms, 15000ms delay, 0 retries

todo: need adminURL to create repos
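A sketch of the adminUrl option from the replication plugin docs, reusing the hosts above (the exact URL form is an assumption):

Code Block
themeRDark
[remote "gerrit2"]
  url = ssh://admin@gerrit2.ons.zone:29418/${name}.git
  adminUrl = ssh://admin@gerrit2.ons.zone:29418
  push = +refs/heads/*:refs/heads/*
  push = +refs/tags/*:refs/tags/*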

JSCH - Java Secure Channel - issue with sha2 known_hosts entries 

A video of what is being fixed

View file
name20190329_gerrit_replication_start_with-jsch-issue.mp4
height250

View file
name20190412_gerrit3_replication_partial_zoom_0.mp4
height250

According to https://groups.google.com/forum/#!topic/repo-discuss/9PTfVG8vdAU for https://github.com/eclipse/jgit/blob/master/org.eclipse.jgit/src/org/eclipse/jgit/transport/JschConfigSessionFactory.java#L191

the known_hosts entry type is the issue - it needs to be ssh-rsa, not ecdsa-sha2-nistp256, which jgit rejects.

Code Block
themeRDark
ubuntu@ip-172-31-15-176:~$ cat ~/.ssh/known_hosts 
|1|RFSqL1D1fCROw=|fcc8BqvMOekw0RLOz7Ts= ecdsa-sha2-nistp256 AAAAE...akI=

fix
ubuntu@ip-172-31-15-176:~$ ssh -v ubuntu@gerrit2.ons.zone 2>&1 | grep ~/.ssh/known_hosts 

debug1: Found key in /home/ubuntu/.ssh/known_hosts:2
ubuntu@ip-172-31-15-176:~$ sudo vi ~/.ssh/config
Host gerrit2.ons.zone
    IdentityFile ~/.ssh/onap_rsa

to set the algorithm

Host remote-alias gerrit2.ons.zone
  IdentityFile ~/.ssh/onap_rsa
  Hostname gerrit2.ons.zone
  Protocol 2
  HostKeyAlgorithms ssh-rsa,ssh-dss


# however with the fix - we see the correct known_hosts format but still rejected
ssh -p 29418 admin@gerrit.ons.zone replication start --all
2019-03-28 20:21:22,239] [] scheduling replication All-Projects:..all.. => admin@gerrit2.ons.zone:29418/All-Projects.git
[2019-03-28 20:21:22,240] [] scheduled All-Projects:..all.. => [4e4e425c] push admin@gerrit2.ons.zone:29418/All-Projects.git to run after 15s
[2019-03-28 20:21:22,240] [] scheduling replication All-Users:..all.. => admin@gerrit2.ons.zone:29418/All-Users.git
[2019-03-28 20:21:22,241] [] scheduled All-Users:..all.. => [8e58ba23] push admin@gerrit2.ons.zone:29418/All-Users.git to run after 15s
[2019-03-28 20:21:22,241] [] scheduling replication test:..all.. => admin@gerrit2.ons.zone:29418/test.git
[2019-03-28 20:21:22,241] [] scheduled test:..all.. => (retry 1) [ae725e99] push admin@gerrit2.ons.zone:29418/test.git to run after 15s
[2019-03-28 20:21:31,880] [ae725e99] Replication to admin@gerrit2.ons.zone:29418/test.git started...
[2019-03-28 20:21:31,939] [ae725e99] Cannot replicate to admin@gerrit2.ons.zone:29418/test.git
org.eclipse.jgit.errors.TransportException: admin@gerrit2.ons.zone:29418/test.git: reject HostKey: gerrit2.ons.zone

	at org.eclipse.jgit.transport.JschConfigSessionFactory.getSession(JschConfigSessionFactory.java:192)

same as https://stackoverflow.com/questions/45462161/gerrit-replicating-to-gitolite-fails
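A sketch of forcing an ssh-rsa entry into the known_hosts that the plugin's JSch session reads (the in-container path /var/gerrit/.ssh/known_hosts is an assumption for this image):

Code Block
themeRDark
# capture the RSA host key of the replica on the gerrit ssh port
ssh-keyscan -t rsa -p 29418 gerrit2.ons.zone
# append it to the gerrit user's known_hosts inside the container
ssh-keyscan -t rsa -p 29418 gerrit2.ons.zone | \
  sudo docker exec -i ubuntu_gerrit_1 tee -a /var/gerrit/.ssh/known_hosts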

Replication Use Case - new Repo

This should replicate to the slave according to https://gerrit-review.googlesource.com/c/plugins/replication/+/49728/5/src/main/resources/Documentation/config.md

via createMissingRepositories which is default true

Code Block
themeRDark
# action create in gui new
http://gerrit.ons.zone:8080/admin/repos/test2


# in container on gerrit1
bash-4.2$ ls -la /var/gerrit/data/replication/ref-updates/
-rw-r--r-- 1 gerrit gerrit   46 Mar 28 15:45 608db0817a4694dc10ee1e0811c2f76b27d3d03f
bash-4.2$ cat 608db0817a4694dc10ee1e0811c2f76b27d3d03f 
{"project":"test2","ref":"refs/heads/master"}

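For reference, the behaviour can also be pinned explicitly in replication.config (a sketch; true is already the default per the plugin docs linked above):

Code Block
themeRDark
[remote "gerrit2"]
  url = ssh://admin@gerrit2.ons.zone:29418/${name}.git
  createMissingRepositories = true
  push = +refs/heads/*:refs/heads/*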

Helm Charts

Or get the yamls via 

https://gerrit.googlesource.com/k8s-gerrit/

not

https://github.com/helm/charts/tree/master/stable

Triage

following https://gerrit.googlesource.com/k8s-gerrit/+/master/helm-charts/gerrit-master/

https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner

https://github.com/helm/charts/blob/master/stable/nfs-server-provisioner/values.yaml

(look at https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client)

Code Block
themeMidnight
on vm2 in ~/google

sudo cp gerrit-master/values.yaml .
sudo vi values.yaml 
# added hostname, pub key, cert

sudo helm install ./gerrit-master -n gerrit-master -f values.yaml 
NAME:   gerrit-master
LAST DEPLOYED: Wed Mar 20 19:03:40 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME                                       TYPE    DATA  AGE
gerrit-master-gerrit-master-secure-config  Opaque  1     0s
==> v1/ConfigMap
NAME                                   DATA  AGE
gerrit-master-gerrit-master-configmap  2     0s
==> v1/PersistentVolumeClaim
NAME                                  STATUS   VOLUME          CAPACITY  ACCESS MODES  STORAGECLASS  AGE
gerrit-master-gerrit-master-logs-pvc  Pending  default         0s
gerrit-master-gerrit-master-db-pvc    Pending  default         0s
gerrit-master-git-gc-logs-pvc         Pending  default         0s
gerrit-master-git-filesystem-pvc      Pending  shared-storage  0s
==> v1/Service
NAME                                 TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
gerrit-master-gerrit-master-service  NodePort  10.43.111.61  <none>       80:31329/TCP  0s
==> v1/Deployment
NAME                                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
gerrit-master-gerrit-master-deployment  1        1        1           0          0s
==> v1beta1/CronJob
NAME                  SCHEDULE      SUSPEND  ACTIVE  LAST SCHEDULE  AGE
gerrit-master-git-gc  0 6,18 * * *  False    0       <none>         0s
==> v1beta1/Ingress
NAME                                 HOSTS         ADDRESS  PORTS  AGE
gerrit-master-gerrit-master-ingress  s2.onap.info           80     0s
==> v1/Pod(related)
NAME                                                     READY  STATUS   RESTARTS  AGE
gerrit-master-gerrit-master-deployment-7cb7f96767-xz45w  0/1    Pending  0         0s
NOTES:
A Gerrit master has been deployed.
==================================
Gerrit may be accessed under: s2.onap.info


kubectl get pvc --all-namespaces
NAMESPACE   NAME                                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE
default     gerrit-master-gerrit-master-db-pvc     Pending                                      default          4m
default     gerrit-master-gerrit-master-logs-pvc   Pending                                      default          4m
default     gerrit-master-git-filesystem-pvc       Pending                                      shared-storage   4m
default     gerrit-master-git-gc-logs-pvc          Pending                                      default          4m

kubectl describe pod gerrit-master-gerrit-master-deployment-7cb7f96767-xz45w -n default
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  4s (x17 over 2m)  default-scheduler  pod has unbound PersistentVolumeClaims

# evidently missing nfs dirs
ubuntu@bell2:~/google$ sudo helm list
NAME         	REVISION	UPDATED                 	STATUS  	CHART              	NAMESPACE
gerrit-master	1       	Wed Mar 20 19:03:40 2019	DEPLOYED	gerrit-master-0.1.0	default  

ubuntu@bell2:~/google$ sudo helm delete gerrit-master --purge
release "gerrit-master" deleted



# via https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner
sudo helm install stable/nfs-server-provisioner --name nfs-server-prov
NAME:   nfs-server-prov
LAST DEPLOYED: Wed Mar 20 19:31:04 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME                                      READY  STATUS             RESTARTS  AGE
nfs-server-prov-nfs-server-provisioner-0  0/1    ContainerCreating  0         0s
==> v1/StorageClass
NAME  PROVISIONER                                            AGE
nfs   cluster.local/nfs-server-prov-nfs-server-provisioner   0s
==> v1/ServiceAccount
NAME                                     SECRETS  AGE
nfs-server-prov-nfs-server-provisioner   1        0s
==> v1/ClusterRole
NAME                                     AGE
nfs-server-prov-nfs-server-provisioner   0s
==> v1/ClusterRoleBinding
NAME                                     AGE
nfs-server-prov-nfs-server-provisioner   0s
==> v1/Service
NAME                                     TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)                                 AGE
nfs-server-prov-nfs-server-provisioner   ClusterIP  10.43.249.72  <none>       2049/TCP,20048/TCP,51413/TCP,51413/UDP  0s
==> v1beta2/StatefulSet
NAME                                     DESIRED  CURRENT  AGE
nfs-server-prov-nfs-server-provisioner   1        1        0s
NOTES:
The NFS Provisioner service has now been installed.
A storage class named 'nfs' has now been created
and is available to provision dynamic volumes.
You can use this storageclass by creating a `PersistentVolumeClaim` with the
correct storageClassName attribute. For example:
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-dynamic-volume-claim
    spec:
      storageClassName: "nfs"
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi


default         nfs-server-prov-nfs-server-provisioner-0   1/1       Running     0          1m

kubectl describe pvc gerrit-master-gerrit-master-db-pvc 
Events:
  Warning  ProvisioningFailed  13s (x6 over 1m)  persistentvolume-controller  storageclass.storage.k8s.io "default" not found


# set creation to true under storageClass (default and shared)
create: true


# further
 ExternalProvisioning  3s (x2 over 18s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "nfs" or manually created by system administrator


# need to do a detailed dive into SC provisioners


# i have this unbound PVC - because I have not created the NFS share yet via the prov
default     gerrit-master-git-filesystem-pvc       Pending                                      shared-storage   2m
# want this bound PVC+PV
inf         gerrit-var-gerrit-review-site      Bound     pvc-6d2c642b-c278-11e8-8679-f4034344e778   6Gi        RWX           nfs-sc   174d
pvc-6d2c642b-c278-11e8-8679-f4034344e778   6Gi        RWX           Delete          Bound     inf/gerrit-var-gerrit-review-site      nfs-sc             174d

Rest API

Code Block
themeRDark
curl -i -H "Accept: application/json" http://server:8080/config/server/info
curl -i -H "Accept: application/json" http://server:8080/config/server/version
# reload config
# don't use --digest and add /a for authenticated posts
curl --user admin:myWWv -X POST http://server:8080/a/config/server/reload


[2019-03-27 03:56:21,778] [HTTP-113] INFO  com.google.gerrit.server.config.GerritServerConfigReloader : Starting server configuration reload
[2019-03-27 03:56:21,781] [HTTP-113] INFO  com.google.gerrit.server.config.GerritServerConfigReloader : Server configuration reload completed succesfully

Jenkins

Nexus

GoCD

GitLab

Links

https://kubernetes.io/docs/concepts/storage/storage-classes/

Baseline Testing

Verify your environment by installing the default mysql chart

Code Block
languagebash
themeMidnight
ubuntu@ip-172-31-3-87:~$ sudo helm install --name mysqldb --set mysqlRootPassword=myrootpass,mysqlUser=myuser,mysqlPassword=mypass,mysqlDatabase=mydb stable/mysql
NAME:   mysqldb
LAST DEPLOYED: Thu Mar 21 16:06:02 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME          DATA  AGE
mysqldb-test  1     0s
==> v1/PersistentVolumeClaim
NAME     STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mysqldb  Pending  0s
==> v1/Service
NAME     TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
mysqldb  ClusterIP  10.43.186.39  <none>       3306/TCP  0s
==> v1beta1/Deployment
NAME     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mysqldb  1        1        1           0          0s
==> v1/Pod(related)
NAME                     READY  STATUS   RESTARTS  AGE
mysqldb-979887bcf-4hf59  0/1    Pending  0         0s
==> v1/Secret
NAME     TYPE    DATA  AGE
mysqldb  Opaque  2     0s
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysqldb.default.svc.cluster.local
To get your root password run:
    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysqldb -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
1. Run an Ubuntu pod that you can use as a client:
    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
2. Install the mysql client:
    $ apt-get update && apt-get install mysql-client -y
3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysqldb -p
To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306
    # Execute the following command to route the connection:
    kubectl port-forward svc/mysqldb 3306
    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
   

DevOps

Kubernetes Cluster Install

Follow RKE setup OOM RKE Kubernetes Deployment#Quickstart

Kubernetes Services

Namespaces

Create a specific namespace

https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/

Code Block
themeMidnight
vi namespace-dev.json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "dev",
    "labels": {
      "name": "dev"
    }
  }
}
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f namespace-dev.json 
namespace/dev created
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get namespaces --show-labels
NAME            STATUS    AGE       LABELS
default         Active    5d        <none>
dev             Active    44s       name=dev

Contexts

Code Block
themeMidnight
ubuntu@ip-172-31-30-234:~/helm/book$ sudo kubectl config set-context dev --namespace=dev --cluster=local --user=local
Context "dev" created.
ubuntu@ip-172-31-30-234:~/helm/book$ sudo kubectl config use-context dev
Switched to context "dev".


Storage

Volumes

https://kubernetes.io/docs/concepts/storage/volumes/

hostPath

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

Persistent Volumes

Code Block
themeMidnight
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/ubuntu/tools-data1"

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f hostpath-volume.yaml -n dev
persistentvolume/task-pv-volume created
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pv 
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
task-pv-volume   5Gi        RWO            Retain           Available             manual                   2m

Persistent Volume Claims

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim

Code Block
themeMidnight
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f hostpath-pvc.yaml -n dev
persistentvolumeclaim/task-pv-claim created


# check bound status
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pv 
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM               STORAGECLASS   REASON    AGE
task-pv-volume   5Gi        RWO            Retain           Bound     dev/task-pv-claim   manual                   7m

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pvc -n dev
NAME            STATUS    VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim   Bound     task-pv-volume   5Gi        RWO            manual         1m


vi pv-pod.yaml


kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
       claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f pv-pod.yaml -n dev
pod/task-pv-pod created


ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pods -n dev
NAME          READY     STATUS    RESTARTS   AGE
task-pv-pod   1/1       Running   0          53s


# test
ubuntu@ip-172-31-30-234:~$ vi tools-data1/index.html
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl exec -it task-pv-pod -n dev bash
root@task-pv-pod:/# apt-get update; 
apt-get install curl
root@task-pv-pod:/# curl localhost
hello world

Storage Classes

https://kubernetes.io/docs/concepts/storage/storage-classes/
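Related to the provisioner triage above, a sketch of a StorageClass wired to the nfs-server-provisioner installed earlier (the class name shared-storage is an assumption chosen to match the gerrit chart's PVC; the provisioner string comes from the helm output above):

Code Block
themeMidnight
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: shared-storage
provisioner: cluster.local/nfs-server-prov-nfs-server-provisioner
reclaimPolicy: Delete
EOF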

Design Issues

DI 0: Raw Docker Gerrit Container for reference - default H2

https://gerrit.googlesource.com/docker-gerrit/

Code Block
themeMidnight

sudo docker run --name gerrit -ti -d -p 8080:8080 -p 29418:29418 gerritcodereview/gerrit
ubuntu@ip-172-31-15-176:~$ sudo docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                              NAMES
83cfd4a6492e        gerritcodereview/gerrit   "/bin/sh -c 'git con…"   3 minutes ago       Up 3 minutes        0.0.0.0:8080->8080/tcp, 0.0.0.0:29418->29418/tcp   nifty_einstein
# check http://localhost:8080


#

Image Added

Code Block
themeRDark
# copy key
sudo scp ~/wse_onap/onap_rsa ubuntu@gerrit2.ons.zone:~/
ubuntu@ip-172-31-31-191:~$ sudo chmod 400 onap_rsa
ubuntu@ip-172-31-31-191:~$ sudo cp onap_rsa ~/.ssh/
ubuntu@ip-172-31-31-191:~$ sudo chown ubuntu:ubuntu ~/.ssh/onap_rsa

# cat your key
ssh-keyscan -t rsa gerrit2.ons.zone
in the format gerrit2.ons.zone ssh-rsa key



# add ~/.ssh/config
Host remote-alias gerrit2.ons.zone
  IdentityFile ~/.ssh/onap_rsa
  Hostname gerrit2.ons.zone
  Protocol 2
  HostKeyAlgorithms ssh-rsa,ssh-dss
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null


# add pub key to gerrit

#create user, repo,pw
s0 admin
4zZvLiKKHWOvMBeRWZwUR5ls0SpPbgphEpyT1K3KLQ
gerrit
 eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA

s2
admin
myWWvmVLQfEpIzhGtcXWHKqxtHsSr31DXM4VXmcy1g

s4

test clone using admin user
git clone "ssh://admin@gerrit3.ons.zone:29418/test" && scp -p -P 29418 admin@gerrit3.ons.zone:hooks/commit-msg "test/.git/hooks/"


Docker compose

The template on https://hub.docker.com/r/gerritcodereview/gerrit has its indentation off for services.gerrit.volumes

Code Block
themeRDark
sudo curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

docker-compose.yaml
version: '3'
services:
  gerrit:
    image: gerritcodereview/gerrit
    volumes:
     - git-volume:/var/gerrit/git
     - db-volume:/var/gerrit/db
     - index-volume:/var/gerrit/index
     - cache-volume:/var/gerrit/cache
     # added
     - config-volume:/var/gerrit/etc
    ports:
     - "29418:29418"
     - "8080:8080"
volumes:
  git-volume:
  db-volume:
  index-volume:
  cache-volume:
  config-volume:

ubuntu@ip-172-31-31-191:~$ docker-compose up -d gerrit
Starting ubuntu_gerrit_1 ... done


todo: missing for replication.config
config-volume:/var/gerrit/etc

ubuntu@ip-172-31-31-191:~$ docker-compose up -d gerrit
Creating network "ubuntu_default" with the default driver
Creating volume "ubuntu_config-volume" with default driver
Creating ubuntu_gerrit_1 ... done

DI 1: Kubernetes Gerrit Deployment - no HELM

...