
Gerrit

Replication

https://gerrit.googlesource.com/plugins/replication/+doc/master/src/main/resources/Documentation/config.md

Helm Charts

Get the yamls via

https://gerrit.googlesource.com/k8s-gerrit/

(not the stable helm/charts repo)


PostgreSQL

Default Helm Chart

https://github.com/helm/charts/tree/master/stable

Triage

following https://gerrit.googlesource.com/k8s-gerrit/+/master/helm-charts/gerrit-master/


Code Block
themeMidnight
ubuntu@ip-172-31-27-4:~$ sudo helm install --name pgstg stable/postgresql
NAME:   pgstg
LAST DEPLOYED: Sat Apr 27 21:51:58 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME              TYPE    DATA  AGE
pgstg-postgresql  Opaque  1     0s
==> v1/Service
NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
pgstg-postgresql-headless  ClusterIP  None           <none>       5432/TCP  0s
pgstg-postgresql           ClusterIP  10.43.163.107  <none>       5432/TCP  0s
==> v1beta2/StatefulSet
NAME              DESIRED  CURRENT  AGE
pgstg-postgresql  1        1        0s
==> v1/Pod(related)
NAME                READY  STATUS   RESTARTS  AGE
pgstg-postgresql-0  0/1    Pending  0         0s


NOTES:
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:
    pgstg-postgresql.default.svc.cluster.local - Read/Write connection
To get the password for "postgres" run:
    export POSTGRES_PASSWORD=$(kubectl get secret --namespace default pgstg-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database run the following command:
    kubectl run pgstg-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:10.7.0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host pgstg-postgresql -U postgres


To connect to your database from outside the cluster execute the following commands:
    kubectl port-forward --namespace default svc/pgstg-postgresql 5432:5432 &
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres
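The POSTGRES_PASSWORD export above pulls the base64-encoded value out of the Secret via jsonpath and decodes it; the decode step can be sanity-checked locally without a cluster (the password value here is made up for illustration):

```shell
# Kubernetes Secrets store values base64-encoded, so kubectl's jsonpath
# output must be piped through base64 --decode.
# 'example-password' is a hypothetical value for illustration only.
ENCODED=$(printf '%s' 'example-password' | base64)
POSTGRES_PASSWORD=$(printf '%s' "$ENCODED" | base64 --decode)
echo "$POSTGRES_PASSWORD"   # prints example-password
```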

kubectl describe pod shows:
  Warning  FailedScheduling  21s (x6 over 3m42s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims
Workaround:
Modify underlying yaml files to use a persistent volume with ReadWriteMany access
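A minimal sketch of what the workaround amounts to, assuming a hostPath-backed volume: declare a PersistentVolume with ReadWriteMany so the chart's claim can bind (the name, size, and path below are illustrative, not taken from the chart):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pgstg-pv               # illustrative name
spec:
  capacity:
    storage: 8Gi               # illustrative size
  accessModes:
    - ReadWriteMany            # the access mode the workaround calls for
  hostPath:
    path: /srv/volumes/pgstg   # illustrative host directory
```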



K8s only

git clone https://github.com/helm/charts.git

sudo helm install postgresql --name pg

ubuntu@ip-172-31-27-4:~/charts/stable$ helm delete pg
release "pg" deleted
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pv --all-namespaces
No resources found.
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     data-pg-postgresql-0              Pending                                                     4m48s
default     data-pgstg-postgresql-0           Pending                                                     14h
default     data-wishful-skunk-postgresql-0   Pending                                                     13m

ubuntu@ip-172-31-27-4:~/charts/stable$ vi pg-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-serv-prov-nfs-server-provisioner-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /srv/volumes/nfs-serv-prov-nfs-server-provisioner-0
  claimRef:
    namespace: kube-system
    name: nfs-serv-prov-nfs-server-provisioner-0


ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl apply -f pg-pv.yaml 
persistentvolume/nfs-serv-prov-nfs-server-provisioner-0 created


ubuntu@ip-172-31-27-4:~/charts/stable$ helm delete --purge pg
release "pg" deleted

sudo helm install postgresql --name pg

ubuntu@ip-172-31-27-4:~/charts/stable$ helm delete --purge pg
release "pg" deleted
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     data-pg-postgresql-0   Pending                                                     7m23s
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl delete pvc data-pg-postgresql-0
persistentvolumeclaim "data-pg-postgresql-0" deleted


changed the PVC's storage class from "-" (none) to nfs-provisioner
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
default     data-pg-postgresql-0   Pending                                      nfs-provisioner   7s
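The edit boils down to setting storageClassName on the claim; a sketch of the resulting PVC, using the claim name from the output above (access mode and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pg-postgresql-0
spec:
  storageClassName: nfs-provisioner   # was "-" (no class)
  accessModes:
    - ReadWriteOnce                   # illustrative
  resources:
    requests:
      storage: 8Gi                    # illustrative
```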


follow
https://severalnines.com/blog/using-kubernetes-deploy-postgresql


ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl create -f postgres-storage.yaml 
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pv --all-namespaces
NAME                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                STORAGECLASS   REASON   AGE
nfs-serv-prov-nfs-server-provisioner-0   200Gi      RWO            Retain           Available   kube-system/nfs-serv-prov-nfs-server-provisioner-0                           10m
postgres-pv-volume                       5Gi        RWX            Retain           Bound       default/postgres-pv-claim                            manual                  23s
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get pvc --all-namespaces
NAMESPACE   NAME                STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     postgres-pv-claim   Bound    postgres-pv-volume   5Gi        RWX            manual         32s
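postgres-storage.yaml from the severalnines tutorial pairs a hostPath PV with a PVC under the manual storage class; a sketch consistent with the kubectl output above (5Gi, RWX, manual), where the hostPath path is illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /mnt/data            # illustrative host directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual     # matching class binds the claim to the PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```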




ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl create -f postgres-deployment.yaml 
deployment.extensions/postgres created


ubuntu@ip-172-31-27-4:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
default         nfs-serv-prov-nfs-server-provisioner-0    1/1     Running     0          26m
default         postgres-78f78bfbfc-pw4zp                 1/1     Running     0          22s


ubuntu@ip-172-31-27-4:~/charts/stable$ vi postgres-service.yaml
ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl create -f postgres-service.yaml 
service/postgres created


ubuntu@ip-172-31-27-4:~/charts/stable$ kubectl get svc postgres
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
postgres   NodePort   10.43.57.215   <none>        5432:30170/TCP   25s
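A sketch of what postgres-service.yaml likely contains to produce the NodePort service above (the selector label is an assumption about how the deployment tags its pods; the node port 30170 is assigned by the cluster, not declared here):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort        # exposes 5432 on a cluster-assigned node port
  ports:
    - port: 5432
  selector:
    app: postgres       # assumes the deployment labels its pods app=postgres
```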


ubuntu@ip-172-31-27-4:~/charts/stable$ sudo apt install postgresql-client-common
ubuntu@ip-172-31-27-4:~/charts/stable$ sudo apt-get install postgresql-client

ubuntu@ip-172-31-27-4:~/charts/stable$ psql -h localhost -U postgresadmin --password -p 30170 postgresdb
Password for user postgresadmin: 
psql (10.7 (Ubuntu 10.7-0ubuntu0.18.04.1), server 10.4 (Debian 10.4-2.pgdg90+1))
Type "help" for help.


postgresdb-# \l
                                 List of databases
    Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
------------+----------+----------+------------+------------+-----------------------
 postgres   | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 postgresdb | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
            |          |          |            |            | postgres=CTc/postgres
 template1  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
            |          |          |            |            | postgres=CTc/postgres
(4 rows)


dump
ubuntu@ip-172-31-27-4:~/charts/stable$ pg_dump -h localhost -U postgresadmin -p 30170 -W -F t postgresdb
Password: 
(binary tar-format archive printed straight to the terminal - pg_dump writes to stdout when no -f/--file is given; the embedded restore.sql member is reproduced below)
--
-- NOTE:
--
-- File paths need to be edited. Search for $$PATH$$ and
-- replace it with the path to the directory containing
-- the extracted data files.
--
--
-- PostgreSQL database dump
--

-- Dumped from database version 10.4 (Debian 10.4-2.pgdg90+1)
-- Dumped by pg_dump version 10.7 (Ubuntu 10.7-0ubuntu0.18.04.1)

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;

DROP EXTENSION plpgsql;
DROP SCHEMA public;
--
-- Name: public; Type: SCHEMA; Schema: -; Owner: postgres
--

CREATE SCHEMA public;


ALTER SCHEMA public OWNER TO postgres;

--
-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
--

COMMENT ON SCHEMA public IS 'standard public schema';


--
-- Name: plpgsql; Type: EXTENSION; Schema: -; Owner: 
--

CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;


--
-- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner: 
--

COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';


--
-- PostgreSQL database dump complete
--
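Because pg_dump defaults to stdout, the tar archive above ended up on the terminal; adding -f gives a file that pg_restore can consume. A sketch using the host/port/user/database from this setup (the output filename is illustrative):

```shell
# Build the dump command with an explicit output file (-f) instead of stdout;
# run it against the live server from the transcript above.
DUMP_CMD='pg_dump -h localhost -p 30170 -U postgresadmin -W -F t -f postgresdb.tar postgresdb'
echo "$DUMP_CMD"
# The resulting tar archive can then be restored with pg_restore, e.g.:
#   pg_restore -h localhost -p 30170 -U postgresadmin -W -d postgresdb -c postgresdb.tar
```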







Backup

Restore

Gerrit

Config Jobs

login as default admin

create test repo

Gerrit repo url

Code Block
themeRDark
verified
git clone ssh://admin@gerrit2.ons.zone:29418/test.git
remote: Counting objects: 2, done
git clone admin@gerrit2.ons.zone:29418/test.git
admin@gerrit2.ons.zone: Permission denied (publickey).

verified from the gerrit3 host to gerrit2
ubuntu@ip-172-31-31-191:~$ git clone  ssh://admin@gerrit2.ons.zone:29418/replicate.git
Cloning into 'replicate'...

verified admin user
ubuntu@ip-172-31-31-191:~$ ssh -p 29418 admin@gerrit2.ons.zone
Warning: Permanently added '[gerrit2.ons.zone]:29418,[3.17.20.86]:29418' (RSA) to the list of known hosts.
  Hi Administrator, you have successfully connected over SSH.


gerrit source - /var/gerrit/etc/gerrit.config

Code Block
themeRDark
# adjust server name
[gerrit]
        #canonicalWebUrl = http://fcdbe931c71d
        canonicalWebUrl = http://gerrit2.ons.zone

[receive]
        #enableSignedPush = false
        enableSignedPush = true


gerrit source - /var/gerrit/etc/replication.config

Code Block
themeRDark
# ssh version
[remote "gerrit2"]
  url = ssh://admin@gerrit2.ons.zone:29418/${name}.git
  push = +refs/heads/*:refs/heads/*
  push = +refs/tags/*:refs/tags/*


# http version
[gerrit]
  defaultForceUpdate = true
[remote "gerrit2"]
  url = http://admin:NobJjm7wlDFvAObPWo5ZwlnmQEwdt9fyBJlJdIE5WQ@gerrit2.ons.zone:8080/${name}.git
  mirror = true
  threads = 3
  push = +refs/heads/*:refs/heads/*
  push = +refs/changes/*:refs/changes/*
  push = +refs/tags/*:refs/tags/*
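The push lines above are forced refspecs (the leading + permits non-fast-forward updates). The same refspec the plugin issues can be exercised with plain git against throwaway local repos, sketched here:

```shell
# Demonstrate the +refs/heads/*:refs/heads/* refspec with two local repos.
WORK=$(mktemp -d)
git init --bare -q "$WORK/replica.git"               # stands in for gerrit2
git -c init.defaultBranch=master init -q "$WORK/src" # stands in for gerrit
cd "$WORK/src"
git -c user.email=a@b -c user.name=test commit --allow-empty -m init -q
# forced mirror-style push, as the replication plugin issues it
git push -q "$WORK/replica.git" '+refs/heads/*:refs/heads/*'
git ls-remote "$WORK/replica.git" 'refs/heads/*'
# should list refs/heads/master on the replica
```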

Host entry in ~/.ssh/config

Code Block
themeMidnight
Host remote-alias gerrit2.ons.zone
  IdentityFile ~/.ssh/onap_rsa
  Hostname gerrit2.ons.zone
  Protocol 2
  HostKeyAlgorithms ssh-rsa,ssh-dss
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Adjust docker hostname

Compose

Code Block
themeRDark
services:
  gerrit:
    image: gerritcodereview/gerrit
    hostname: gerrit2.ons.zone


Kubernetes

Add ssh admin key


Verification

verify the ssh port Gerrit advertises

 curl http://gerrit.ons.zone:8080/ssh_info

Replication

https://gerrit.googlesource.com/plugins/replication/+doc/master/src/main/resources/Documentation/config.md

fixed in https://www.gerritcodereview.com/2.14.html

Replication Use Case - commit change

Make change on gerrit, merge, kick off replication job, view change on gerrit2

Code Block
themeRDark
# 3 machines
# obriensystems dev laptop
# gerrit source server
# gerrit2 replication server
# on remote dev host - against gerrit
git clone "ssh://admin@gerrit.ons.zone:29418/test" && scp -p -P 29418 admin@gerrit.ons.zone:hooks/commit-msg "test/.git/hooks/"
cd test/
vi test.sh 
git add test.sh 
git commit -s --amend
git review
# getting merge conflict - needed to remove old commit id
vi test.sh 
git add test.sh 
git rebase --continue
git review

# move to gerrit UI, +2 review, merge
# on gerrit server
ssh ubuntu@gerrit.ons.zone
# tail the logs to the gerrit container

# on dev laptop
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone gerrit plugin reload replication
[2019-03-28 15:25:57,246] [SSH gerrit plugin reload replication (admin)] INFO  com.google.gerrit.server.plugins.PluginLoader : Reloaded plugin replication, version v2.16.6
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication list
Remote: gerrit2
Url: admin@gerrit2.ons.zone:8080/${name}.git

[2019-03-28 15:26:57,963] [WorkQueue-1] INFO  com.google.gerrit.server.plugins.CleanupHandle : Cleaned plugin plugin_replication_190328_0446_6094540689096397413.jar
# debug on
ssh -p 29418 admin@gerrit.ons.zone gerrit logging set DEBUG                                          
# debug off
ssh -p 29418 admin@gerrit.ons.zone gerrit logging set reset
ssh -p 29418 admin@gerrit.ons.zone replication start --wait --all 
# nothing yet - debugging container I only see a recent
var/gerrit/data/replication/ref-updates
-rw-r--r-- 1 gerrit gerrit   45 Mar 28 15:25 9cbb43eb3ce03badc8b3c7dc52ef84d8d6e67066
bash-4.2$ cat 9cbb43eb3ce03badc8b3c7dc52ef84d8d6e67066 
{"project":"test","ref":"refs/heads/master"}


Issue was the key - after changing the url to 
url = admin@gerrit2.ons.zone:29418/${name}.git
I can ssh directly from gerrit to gerrit2, but the key is not yet available to the container
sshd_log
[2019-03-28 15:57:50,164 +0000] b2bd0870 admin a/1000000 replication.start.--all 3ms 1ms 0
replication_log
[2019-03-28 17:34:07,816] [72da30d3] Replication to admin@gerrit2.ons.zone:29418/All-Users.git started...
[2019-03-28 17:34:07,834] [72da30d3] Cannot replicate to admin@gerrit2.ons.zone:29418/All-Users.git
org.eclipse.jgit.errors.TransportException: admin@gerrit2.ons.zone:29418/All-Users.git: reject HostKey: gerrit2.ons.zone
	at org.eclipse.jgit.transport.JschConfigSessionFactory.getSession(JschConfigSessionFactory.java:192)


# I am running hashed
ubuntu@ip-172-31-15-176:~$ grep "HashKnownHosts" /etc/ssh/ssh_config
    HashKnownHosts yes


# tried - it may be my url
Url: ssh://admin@gerrit2.ons.zone:29418/${name}.git
from
Url: admin@gerrit2.ons.zone:29418/${name}.git

[2019-03-28 21:54:04,089] [] Canceled 3 replication events during shutdown
[2019-03-28 21:54:17,738] [] scheduling replication All-Projects:..all.. => ssh://admin@gerrit2.ons.zone:29418/All-Projects.git
[2019-03-28 21:54:17,750] [] scheduled All-Projects:..all.. => [283d568e] push ssh://admin@gerrit2.ons.zone:29418/All-Projects.git to run after 15s
[2019-03-28 21:54:17,750] [] scheduling replication All-Users:..all.. => ssh://admin@gerrit2.ons.zone:29418/All-Users.git
[2019-03-28 21:54:17,751] [] scheduled All-Users:..all.. => [684a6e1d] push ssh://admin@gerrit2.ons.zone:29418/All-Users.git to run after 15s
[2019-03-28 21:54:17,751] [] scheduling replication test:..all.. => ssh://admin@gerrit2.ons.zone:29418/test.git
[2019-03-28 21:54:17,751] [] scheduled test:..all.. => [a84066fe] push ssh://admin@gerrit2.ons.zone:29418/test.git to run after 15s
[2019-03-28 21:54:32,751] [283d568e] Replication to ssh://admin@gerrit2.ons.zone:29418/All-Projects.git started...
[2019-03-28 21:54:32,857] [283d568e] Cannot replicate to ssh://admin@gerrit2.ons.zone:29418/All-Projects.git
org.eclipse.jgit.errors.TransportException: ssh://admin@gerrit2.ons.zone:29418/All-Projects.git: reject HostKey: gerrit2.ons.zone



Hostname is generated by docker - overriding
set the full name as hostname rather than a hostname/domainname pair - gerrit will only pick up the hostname prefix
services:
  gerrit:
    image: gerritcodereview/gerrit
    hostname: gerrit2.ons.zone

ubuntu@ip-172-31-6-115:~$ sudo docker exec -it ubuntu_gerrit_1 bash
bash-4.2$ hostname
gerrit2.ons.zone


#trying protocol 1 (insecure) instead of 2
/home/ubuntu/.ssh/config line 5: Bad protocol spec '1'.


Verifying via https://gerrit.googlesource.com/plugins/replication/+/master/src/main/resources/Documentation/config.md
PreferredAuthentications publickey




Working with Igor - trying http instead of ssh









[2019-03-29 19:54:23,740] [6737289e] Cannot replicate replicate; Remote repository error: http://admin@gerrit2.ons.zone:8080/replicate.git: replicate unavailable

[2019-03-29 19:54:23,741] [a71c401e] Replication to http://admin@gerrit2.ons.zone:8080/test.git started...

[2019-03-29 19:54:23,754] [e7c79889] Replication to http://admin@gerrit2.ons.zone:8080/All-Projects.git completed in 121ms, 15001ms delay, 0 retries

[2019-03-29 19:54:23,756] [273130a7] Replication to http://admin@gerrit2.ons.zone:8080/All-Users.git completed in 116ms, 15000ms delay, 0 retries

[2019-03-29 19:54:23,762] [a71c401e] Push to http://admin@gerrit2.ons.zone:8080/test.git references: [RemoteRefUpdate[remoteName=refs/heads/master, NOT_ATTEMPTED, (null)...e9efbebcb130387cfa65e6f3c47dea5d005f8bbe, srcRef=refs/heads/master, forceUpdate, message=null]]

[2019-03-29 19:54:24,093] [a71c401e] Failed replicate of refs/heads/master to http://admin@gerrit2.ons.zone:8080/test.git, reason: prohibited by Gerrit: not permitted: force update

[2019-03-29 19:54:24,094] [a71c401e] Replication to http://admin@gerrit2.ons.zone:8080/test.git completed in 352ms, 15103ms delay, 0 retries




changed enableSignedPush to true











[2019-03-29 20:11:48,466] [a722008c] Replication to http://admin@gerrit2.ons.zone:8080/test.git started...

[2019-03-29 20:11:48,467] [a722008c] Push to http://admin@gerrit2.ons.zone:8080/test.git references: [RemoteRefUpdate[remoteName=refs/changes/03/1003/meta, NOT_ATTEMPTED, (null)...a9f165b560889e937a10ac45f425c6d727a8fb78, srcRef=refs/changes/03/1003/meta, forceUpdate, message=null], RemoteRefUpdate[remoteName=refs/changes/03/1003/1, NOT_ATTEMPTED, (null)...477762111d6ad43984fd3ee908730267880469c2, srcRef=refs/changes/03/1003/1, forceUpdate, message=null]]

[2019-03-29 20:11:48,662] [a722008c] Failed replicate of refs/changes/03/1003/meta to http://admin@gerrit2.ons.zone:8080/test.git, reason: cannot combine normal pushes and magic pushes

[2019-03-29 20:11:48,662] [a722008c] Failed replicate of refs/changes/03/1003/1 to http://admin@gerrit2.ons.zone:8080/test.git, reason: cannot combine normal pushes and magic pushes

[2019-03-29 20:11:48,663] [a722008c] Replication to http://admin@gerrit2.ons.zone:8080/test.git completed in 197ms, 15000ms delay, 0 retries






review+2/committed







[2019-03-29 20:15:12,674] [a7d4407f] Replication to http://admin@gerrit2.ons.zone:8080/test.git started...
[2019-03-29 20:15:12,676] [a7d4407f] Push to http://admin@gerrit2.ons.zone:8080/test.git references: [RemoteRefUpdate[remoteName=refs/changes/03/1003/meta, NOT_ATTEMPTED, (null)...60d083b4f4b0916a6dc0da694c52a5e7ff08a9b7, srcRef=refs/changes/03/1003/meta, forceUpdate, message=null], RemoteRefUpdate[remoteName=refs/heads/master, NOT_ATTEMPTED, (null)...477762111d6ad43984fd3ee908730267880469c2, srcRef=refs/heads/master, forceUpdate, message=null]]
[2019-03-29 20:15:12,862] [a7d4407f] Failed replicate of refs/changes/03/1003/meta to http://admin@gerrit2.ons.zone:8080/test.git, reason: NoteDb update requires -o notedb=allow
[2019-03-29 20:15:12,862] [a7d4407f] Failed replicate of refs/heads/master to http://admin@gerrit2.ons.zone:8080/test.git, reason: prohibited by Gerrit: not permitted: force update
[2019-03-29 20:15:12,863] [a7d4407f] Replication to http://admin@gerrit2.ons.zone:8080/test.git completed in 188ms, 15000ms delay, 0 retries




gerrit.defaultForceUpdate :	If true, the default push refspec will be set to use forced update to the remote when no refspec is given. By default, false.


[gerrit]
        defaultForceUpdate = true


still the "cannot combine normal pushes and magic pushes" error - added the defaultForceUpdate setting above to the target gerrit as well



https://groups.google.com/forum/#!msg/repo-discuss/m2E72F2oiuo/w-lWg0WUZYIJ







[2019-03-29 21:03:24,045] [] scheduling replication All-Projects:..all.. => http://admin@gerrit2.ons.zone:8080/All-Projects.git

[2019-03-29 21:03:24,068] [] scheduled All-Projects:..all.. => [71c8678f] push http://admin@gerrit2.ons.zone:8080/All-Projects.git to run after 15s

[2019-03-29 21:03:24,071] [] scheduling replication All-Users:..all.. => http://admin@gerrit2.ons.zone:8080/All-Users.git

[2019-03-29 21:03:24,072] [] scheduled All-Users:..all.. => [b1cedf9b] push http://admin@gerrit2.ons.zone:8080/All-Users.git to run after 15s

[2019-03-29 21:03:24,075] [] scheduling replication replicate:..all.. => http://admin@gerrit2.ons.zone:8080/replicate.git

[2019-03-29 21:03:24,076] [] scheduled replicate:..all.. => [f19bf7a7] push http://admin@gerrit2.ons.zone:8080/replicate.git to run after 15s

[2019-03-29 21:03:24,079] [] scheduling replication test:..all.. => http://admin@gerrit2.ons.zone:8080/test.git

[2019-03-29 21:03:24,080] [] scheduled test:..all.. => [3192ef91] push http://admin@gerrit2.ons.zone:8080/test.git to run after 15s

[2019-03-29 21:03:39,070] [71c8678f] Replication to http://admin@gerrit2.ons.zone:8080/All-Projects.git started...

[2019-03-29 21:03:39,072] [b1cedf9b] Replication to http://admin@gerrit2.ons.zone:8080/All-Users.git started...

[2019-03-29 21:03:39,076] [f19bf7a7] Replication to http://admin@gerrit2.ons.zone:8080/replicate.git started...

[2019-03-29 21:03:39,245] [71c8678f] Replication to http://admin@gerrit2.ons.zone:8080/All-Projects.git completed in 173ms, 15004ms delay, 0 retries

[2019-03-29 21:03:39,246] [3192ef91] Replication to http://admin@gerrit2.ons.zone:8080/test.git started...

[2019-03-29 21:03:39,245] [f19bf7a7] Push to http://admin@gerrit2.ons.zone:8080/replicate.git references: [RemoteRefUpdate[remoteName=refs/heads/master, NOT_ATTEMPTED, (null)...410858cdd130ee2d56700d199f4021c246c1d22b, srcRef=refs/heads/master, forceUpdate, message=null]]

[2019-03-29 21:03:39,245] [b1cedf9b] Replication to http://admin@gerrit2.ons.zone:8080/All-Users.git completed in 173ms, 15000ms delay, 0 retries

[2019-03-29 21:03:39,281] [3192ef91] Push to http://admin@gerrit2.ons.zone:8080/test.git references: [RemoteRefUpdate[remoteName=refs/changes/03/1003/1, NOT_ATTEMPTED, (null)...477762111d6ad43984fd3ee908730267880469c2, srcRef=refs/changes/03/1003/1, forceUpdate, message=null], RemoteRefUpdate[remoteName=refs/changes/03/1003/meta, NOT_ATTEMPTED, (null)...60d083b4f4b0916a6dc0da694c52a5e7ff08a9b7, srcRef=refs/changes/03/1003/meta, forceUpdate, message=null], RemoteRefUpdate[remoteName=refs/heads/master, NOT_ATTEMPTED, (null)...477762111d6ad43984fd3ee908730267880469c2, srcRef=refs/heads/master, forceUpdate, message=null]]

[2019-03-29 21:03:39,682] [3192ef91] Failed replicate of refs/changes/03/1003/1 to http://admin@gerrit2.ons.zone:8080/test.git, reason: cannot combine normal pushes and magic pushes

[2019-03-29 21:03:39,682] [3192ef91] Failed replicate of refs/changes/03/1003/meta to http://admin@gerrit2.ons.zone:8080/test.git, reason: cannot combine normal pushes and magic pushes

[2019-03-29 21:03:39,682] [3192ef91] Failed replicate of refs/heads/master to http://admin@gerrit2.ons.zone:8080/test.git, reason: cannot combine normal pushes and magic pushes

[2019-03-29 21:03:39,682] [3192ef91] Replication to http://admin@gerrit2.ons.zone:8080/test.git completed in 435ms, 15166ms delay, 0 retries

[2019-03-29 21:03:39,704] [f19bf7a7] Failed replicate of refs/heads/master to http://admin@gerrit2.ons.zone:8080/replicate.git, reason: prohibited by Gerrit: not permitted: force update

[2019-03-29 21:03:39,705] [f19bf7a7] Replication to http://admin@gerrit2.ons.zone:8080/replicate.git completed in 628ms, 15000ms delay, 0 retries

todo: need adminURL to create repos

JSCH - Java Secure Channel - issue with sha2 known_hosts entries 

A video of what is being fixed

View file
name20190329_gerrit_replication_start_with-jsch-issue.mp4
height250

View file
name20190412_gerrit3_replication_partial_zoom_0.mp4
height250

According to https://groups.google.com/forum/#!topic/repo-discuss/9PTfVG8vdAU for https://github.com/eclipse/jgit/blob/master/org.eclipse.jgit/src/org/eclipse/jgit/transport/JschConfigSessionFactory.java#L191

the known_hosts key type is the issue - the entry needs to be ssh-rsa, not ecdsa-sha2-nistp256, which jgit (via JSch) rejects.

Code Block
themeRDark
ubuntu@ip-172-31-15-176:~$ cat ~/.ssh/known_hosts 
|1|RFSqL1D1fCROw=|fcc8BqvMOekw0RLOz7Ts= ecdsa-sha2-nistp256 AAAAE...akI=

fix
ubuntu@ip-172-31-15-176:~$ ssh -v ubuntu@gerrit2.ons.zone 2>&1 | grep ~/.ssh/known_hosts 

debug1: Found key in /home/ubuntu/.ssh/known_hosts:2
ubuntu@ip-172-31-15-176:~$ sudo vi ~/.ssh/config
Host gerrit2.ons.zone
    IdentityFile ~/.ssh/onap_rsa

to set the algorithm

Host remote-alias gerrit2.ons.zone
  IdentityFile ~/.ssh/onap_rsa
  Hostname gerrit2.ons.zone
  Protocol 2
  HostKeyAlgorithms ssh-rsa,ssh-dss


# however with the fix - we see the correct known_hosts format but still rejected
ssh -p 29418 admin@gerrit.ons.zone replication start --all
[2019-03-28 20:21:22,239] [] scheduling replication All-Projects:..all.. => admin@gerrit2.ons.zone:29418/All-Projects.git
[2019-03-28 20:21:22,240] [] scheduled All-Projects:..all.. => [4e4e425c] push admin@gerrit2.ons.zone:29418/All-Projects.git to run after 15s
[2019-03-28 20:21:22,240] [] scheduling replication All-Users:..all.. => admin@gerrit2.ons.zone:29418/All-Users.git
[2019-03-28 20:21:22,241] [] scheduled All-Users:..all.. => [8e58ba23] push admin@gerrit2.ons.zone:29418/All-Users.git to run after 15s
[2019-03-28 20:21:22,241] [] scheduling replication test:..all.. => admin@gerrit2.ons.zone:29418/test.git
[2019-03-28 20:21:22,241] [] scheduled test:..all.. => (retry 1) [ae725e99] push admin@gerrit2.ons.zone:29418/test.git to run after 15s
[2019-03-28 20:21:31,880] [ae725e99] Replication to admin@gerrit2.ons.zone:29418/test.git started...
[2019-03-28 20:21:31,939] [ae725e99] Cannot replicate to admin@gerrit2.ons.zone:29418/test.git
org.eclipse.jgit.errors.TransportException: admin@gerrit2.ons.zone:29418/test.git: reject HostKey: gerrit2.ons.zone

	at org.eclipse.jgit.transport.JschConfigSessionFactory.getSession(JschConfigSessionFactory.java:192)

same as https://stackoverflow.com/questions/45462161/gerrit-replicating-to-gitolite-fails
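A quick way to check which host-key algorithms a known_hosts file actually contains (the key_types helper is hypothetical, and the sample entry is made up; for hashed and unhashed entries alike, the key type is the second field):

```shell
# Hypothetical helper: report the key algorithms present in a known_hosts file.
key_types() { awk '{print $2}' "$1" | sort -u; }

KH=$(mktemp)
# made-up hashed entry of the problematic kind
printf '%s\n' '|1|abc=|def= ecdsa-sha2-nistp256 AAAAE...' > "$KH"
key_types "$KH"   # prints ecdsa-sha2-nistp256
# To record an ssh-rsa entry instead (ssh-keyscan ships with OpenSSH):
#   ssh-keyscan -t rsa gerrit2.ons.zone >> ~/.ssh/known_hosts
```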

Replication Use Case - new Repo

This should replicate to the slave according to https://gerrit-review.googlesource.com/c/plugins/replication/+/49728/5/src/main/resources/Documentation/config.md

via createMissingRepositories which is default true

Code Block
themeRDark
# action create in gui new
http://gerrit.ons.zone:8080/admin/repos/test2


# in container on gerrit1
bash-4.2$ ls -la /var/gerrit/data/replication/ref-updates/
-rw-r--r-- 1 gerrit gerrit   46 Mar 28 15:45 608db0817a4694dc10ee1e0811c2f76b27d3d03f
bash-4.2$ cat 608db0817a4694dc10ee1e0811c2f76b27d3d03f 
{"project":"test2","ref":"refs/heads/master"}
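The ref-updates files above are one-line JSON event records; pending projects can be listed from a shell without jq (the temp directory and sed extraction below are illustrative):

```shell
# Recreate a queued replication event like the one shown above.
EVENT_DIR=$(mktemp -d)   # stands in for /var/gerrit/data/replication/ref-updates
printf '%s' '{"project":"test2","ref":"refs/heads/master"}' > "$EVENT_DIR/608db081"
# Crude extraction of the "project" value from each event file.
for f in "$EVENT_DIR"/*; do
  sed -n 's/.*"project":"\([^"]*\)".*/\1/p' "$f"   # prints test2
done
```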


Helm Charts

Get the yamls via

https://gerrit.googlesource.com/k8s-gerrit/

not

https://github.com/helm/charts/tree/master/stable

Triage

following https://gerrit.googlesource.com/k8s-gerrit/+/master/helm-charts/gerrit-master/

https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner

https://github.com/helm/charts/blob/master/stable/nfs-server-provisioner/values.yaml

(look at https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client)

Code Block
themeMidnight
on vm2 in ~/google

sudo cp gerrit-master/values.yaml .
sudo vi values.yaml 
# added hostname, pub key, cert

sudo helm install ./gerrit-master -n gerrit-master -f values.yaml 
NAME:   gerrit-master
LAST DEPLOYED: Wed Mar 20 19:03:40 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME                                       TYPE    DATA  AGE
gerrit-master-gerrit-master-secure-config  Opaque  1     0s
==> v1/ConfigMap
NAME                                   DATA  AGE
gerrit-master-gerrit-master-configmap  2     0s
==> v1/PersistentVolumeClaim
NAME                                  STATUS   VOLUME          CAPACITY  ACCESS MODES  STORAGECLASS  AGE
gerrit-master-gerrit-master-logs-pvc  Pending  default         0s
gerrit-master-gerrit-master-db-pvc    Pending  default         0s
gerrit-master-git-gc-logs-pvc         Pending  default         0s
gerrit-master-git-filesystem-pvc      Pending  shared-storage  0s
==> v1/Service
NAME                                 TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
gerrit-master-gerrit-master-service  NodePort  10.43.111.61  <none>       80:31329/TCP  0s
==> v1/Deployment
NAME                                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
gerrit-master-gerrit-master-deployment  1        1        1           0          0s
==> v1beta1/CronJob
NAME                  SCHEDULE      SUSPEND  ACTIVE  LAST SCHEDULE  AGE
gerrit-master-git-gc  0 6,18 * * *  False    0       <none>         0s
==> v1beta1/Ingress
NAME                                 HOSTS            ADDRESS  PORTS  AGE
gerrit-master-gerrit-master-ingress  s2.onap.info  80       0s
==> v1/Pod(related)
NAME                                                     READY  STATUS   RESTARTS  AGE
gerrit-master-gerrit-master-deployment-7cb7f96767-xz45w  0/1    Pending  0         0s
NOTES:
A Gerrit master has been deployed.
==================================
Gerrit may be accessed under: s2.onap.info


kubectl get pvc --all-namespaces
NAMESPACE   NAME                                   STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS     AGE
default     gerrit-master-gerrit-master-db-pvc     Pending                                       default          4m
default     gerrit-master-gerrit-master-logs-pvc   Pending                                       default          4m
default     gerrit-master-git-filesystem-pvc       Pending                                       shared-storage   4m
default     gerrit-master-git-gc-logs-pvc          Pending                                       default          4m

kubectl describe pod gerrit-master-gerrit-master-deployment-7cb7f96767-xz45w -n default
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  4s (x17 over 2m)  default-scheduler  pod has unbound PersistentVolumeClaims

# evidently missing nfs dirs
ubuntu@bell2:~/google$ sudo helm list
NAME         	REVISION	UPDATED                 	STATUS  	CHART              	NAMESPACE
gerrit-master	1       	Wed Mar 20 19:03:40 2019	DEPLOYED	gerrit-master-0.1.0	default  

ubuntu@bell2:~/google$ sudo helm delete gerrit-master --purge
release "gerrit-master" deleted



# via https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner

...

https://github.com/helm/charts/blob/master/stable/nfs-server-provisioner/values.yaml

(look at https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client)

Code Block
themeMidnight
on vm2 in ~/google

sudo cp gerrit-master/values.yaml .
sudo vi values.yaml 
# added hostname, key, cert

sudo helm install ./gerrit-master -n gerrit-master -f values.yaml 
NAME:   gerrit-master
LAST DEPLOYED: Wed Mar 20 19:03:40 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
sudo helm install stable/nfs-server-provisioner --name nfs-server-prov
NAME:   nfs-server-prov
LAST DEPLOYED: Wed Mar 20 19:31:04 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME                                      READY  STATUS             RESTARTS  AGE
nfs-server-prov-nfs-server-provisioner-0  0/1    ContainerCreating  0         0s
==> v1/StorageClass
NAME  PROVISIONER                                           AGE
nfs   cluster.local/nfs-server-prov-nfs-server-provisioner  0s
==> v1/ServiceAccount
NAME                                    SECRETS  AGE
nfs-server-prov-nfs-server-provisioner  1        0s
==> v1/ClusterRole
NAME                                    AGE
nfs-server-prov-nfs-server-provisioner  0s
==> v1/ClusterRoleBinding
NAME                                    AGE
nfs-server-prov-nfs-server-provisioner  0s
==> v1/Service
NAME                                    TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)                                 AGE
nfs-server-prov-nfs-server-provisioner  ClusterIP  10.43.249.72  <none>       2049/TCP,20048/TCP,51413/TCP,51413/UDP  0s
==> v1beta2/StatefulSet
NAME                                    DESIRED  CURRENT  AGE
nfs-server-prov-nfs-server-provisioner  1        1        0s
NOTES:
The NFS Provisioner service has now been installed.
A storage class named 'nfs' has now been created
and is available to provision dynamic volumes.
You can use this storageclass by creating a `PersistentVolumeClaim` with the
correct storageClassName attribute. For example:
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-dynamic-volume-claim
    spec:
      storageClassName: "nfs"
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

nfs-server-prov-nfs-server-provisioner-0   1/1       Running     0          1m


kubectl describe pvc gerrit-master-gerrit-master-db-pvc
Events:
  Warning  ProvisioningFailed  13s (x6 over 1m)  persistentvolume-controller  storageclass.storage.k8s.io "default" not found


# set creation to true under storageClass (default and shared)
create: true


# further
 ExternalProvisioning  3s (x2 over 18s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "nfs" or manually created by system administrator


# need to do a detailed dive into SC provisioners


# i have this unbound PVC - because I have not created the NFS share yet via the prov
default     gerrit-master-git-filesystem-pvc       Pending                                       shared-storage   2m
# want this bound PVC+PV
inf         gerrit-var-gerrit-review-site          Bound     pvc-6d2c642b-c278-11e8-8679-f4034344e778   6Gi   RWX   nfs-sc   174d
pvc-6d2c642b-c278-11e8-8679-f4034344e778   6Gi   RWX   Delete   Bound   inf/gerrit-var-gerrit-review-site   nfs-sc   174d
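To see which StorageClass each Pending claim is asking for (and therefore which class the provisioner must serve), filter the `kubectl get pvc` listing. A minimal sketch against a sample of the rows above; the awk filter is illustrative, not part of the original run:

```shell
# Simulated `kubectl get pvc --all-namespaces` rows from the run above;
# print each Pending claim and the StorageClass it requests
pvcs='default gerrit-master-gerrit-master-db-pvc Pending default
default gerrit-master-git-filesystem-pvc Pending shared-storage'
printf '%s\n' "$pvcs" | awk '$3 == "Pending" {print $2, "->", $4}'
```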

Rest API

Code Block
themeRDark
curl -i -H "Accept: application/json" http://server:8080/config/server/info
curl -i -H "Accept: application/json" http://server:8080/config/server/version
# reload config
# don't use --digest and add /a for authenticated posts
curl --user admin:myWWv -X POST http://server:8080/a/config/server/reload


[2019-03-27 03:56:21,778] [HTTP-113] INFO  com.google.gerrit.server.config.GerritServerConfigReloader : Starting server configuration reload
[2019-03-27 03:56:21,781] [HTTP-113] INFO  com.google.gerrit.server.config.GerritServerConfigReloader : Server configuration reload completed succesfully
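Gerrit prepends the magic line `)]}'` to every JSON REST response to defeat XSSI, so the first line must be stripped before the body reaches a JSON parser. A minimal sketch with a simulated response body (the version string is illustrative; a live call would be the `/config/server/version` curl above):

```shell
# Simulated Gerrit REST response: XSSI prefix line, then the JSON body
response=$(printf ")]}'\n\"2.16.6\"\n")
# Drop the first line to recover parseable JSON
version=$(printf '%s\n' "$response" | tail -n +2)
echo "$version"
```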


Jenkins

Nexus

GoCD

GitLab


Links

https://kubernetes.io/docs/concepts/storage/storage-classes/



Baseline Testing

Verify your environment by installing the default mysql chart

Code Block
languagebash
themeMidnight
ubuntu@ip-172-31-3-87:~$ sudo helm install --name mysqldb --set mysqlRootPassword=myrootpass,mysqlUser=myuser,mysqlPassword=mypass,mysqlDatabase=mydb stable/mysql
NAME:   mysqldb
LAST DEPLOYED: Thu Mar 21 16:06:02 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME          DATA  AGE
mysqldb-test  1     0s
==> v1/PersistentVolumeClaim
NAME     STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mysqldb  Pending  0s
==> v1/Service
NAME     TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
mysqldb  ClusterIP  10.43.186.39  <none>       3306/TCP  0s
==> v1beta1/Deployment
NAME     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mysqldb  1        1        1           0          0s
==> v1/Pod(related)
NAME                     READY  STATUS   RESTARTS  AGE
mysqldb-979887bcf-4hf59  0/1    Pending  0         0s
==> v1/Secret
NAME     TYPE    DATA  AGE
mysqldb  Opaque  2     0s
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysqldb.default.svc.cluster.local
To get your root password run:
    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysqldb -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
1. Run an Ubuntu pod that you can use as a client:
    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
2. Install the mysql client:
    $ apt-get update && apt-get install mysql-client -y
3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysqldb -p
To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306
    # Execute the following command to route the connection:
    kubectl port-forward svc/mysqldb 3306
    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
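The password retrieval in the NOTES works because the chart stores credentials base64-encoded in a Secret, and `kubectl`'s jsonpath returns the raw encoded value. A minimal sketch of just the decode step, with a fixed sample value so no cluster is needed:

```shell
# Simulate what `kubectl get secret ... -o jsonpath=...` returns:
# Secret values are stored base64-encoded
encoded=$(printf 'myrootpass' | base64)
# The decode step the chart NOTES perform
MYSQL_ROOT_PASSWORD=$(printf '%s' "$encoded" | base64 --decode)
echo "$MYSQL_ROOT_PASSWORD"
```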

DevOps

Kubernetes Cluster Install

Follow RKE setup OOM RKE Kubernetes Deployment#Quickstart

Kubernetes Services

Namespaces

Create a specific namespace

https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/

Code Block
themeMidnight
vi namespace-dev.json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "dev",
    "labels": {
      "name": "dev"
    }
  }
}
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f namespace-dev.json 
namespace/dev created
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get namespaces --show-labels
NAME            STATUS    AGE       LABELS
default         Active    5d        <none>
dev             Active    44s       name=dev

Contexts

Code Block
themeMidnight
ubuntu@ip-172-31-30-234:~/helm/book$ sudo kubectl config set-context dev --namespace=dev --cluster=local --user=local
Context "dev" created.
ubuntu@ip-172-31-30-234:~/helm/book$ sudo kubectl config use-context dev
Switched to context "dev".


Storage

Volumes

https://kubernetes.io/docs/concepts/storage/volumes/

hostPath

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

Persistent Volumes

Code Block
themeMidnight
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/ubuntu/tools-data1"

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f hostpath-volume.yaml -n dev
persistentvolume/task-pv-volume created
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pv 
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
task-pv-volume   5Gi        RWO            Retain           Available             manual                   2m

Persistent Volume Claims

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim

Code Block
themeMidnight
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f hostpath-pvc.yaml -n dev
persistentvolumeclaim/task-pv-claim created


# check bound status
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pv 
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM               STORAGECLASS   REASON    AGE
task-pv-volume   5Gi        RWO            Retain           Bound     dev/task-pv-claim   manual                   7m
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pvc -n dev
NAME            STATUS    VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim   Bound     task-pv-volume   5Gi        RWO            manual         1m
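The claim above binds because the PV and PVC agree on `storageClassName` and the PV's capacity covers the request (with compatible access modes). A minimal sketch of those two checks using the sizes from this transcript; the variable names are illustrative:

```shell
# Binding preconditions for this example: matching class, capacity >= request
pv_class=manual; pvc_class=manual
pv_gi=5; req_gi=3
if [ "$pv_class" = "$pvc_class" ] && [ "$pv_gi" -ge "$req_gi" ]; then
  echo "claim can bind"
fi
```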


vi pv-pod.yaml


kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f pv-pod.yaml -n dev
pod/task-pv-pod created


ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pods -n dev
NAME          READY     STATUS    RESTARTS   AGE
task-pv-pod   1/1       Running   0          53s



# test
ubuntu@ip-172-31-30-234:~$ vi tools-data1/index.html
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl exec -it task-pv-pod -n dev bash
root@task-pv-pod:/# apt-get update; 
apt-get install curl
root@task-pv-pod:/# curl localhost
hello world


Storage Classes

https://kubernetes.io/docs/concepts/storage/storage-classes/

Design Issues

DI 0: Raw Docker Gerrit Container for reference - default H2

https://gerrit.googlesource.com/docker-gerrit/

Code Block
themeMidnight

sudo docker run --name gerrit -ti -d -p 8080:8080 -p 29418:29418 gerritcodereview/gerrit
ubuntu@ip-172-31-15-176:~$ sudo docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                              NAMES
83cfd4a6492e        gerritcodereview/gerrit   "/bin/sh -c 'git con…"   3 minutes ago       Up 3 minutes        0.0.0.0:8080->8080/tcp, 0.0.0.0:29418->29418/tcp   nifty_einstein
# check http://localhost:8080





Code Block
themeRDark
# copy key
sudo scp ~/wse_onap/onap_rsa ubuntu@gerrit2.ons.zone:~/
ubuntu@ip-172-31-31-191:~$ sudo chmod 400 onap_rsa
ubuntu@ip-172-31-31-191:~$ sudo cp onap_rsa ~/.ssh/
ubuntu@ip-172-31-31-191:~$ sudo chown ubuntu:ubuntu ~/.ssh/onap_rsa

# get the remote host key
ssh-keyscan -t rsa gerrit2.ons.zone
# output is in the format: gerrit2.ons.zone ssh-rsa <key>



# add ~/.ssh/config
Host remote-alias gerrit2.ons.zone
  IdentityFile ~/.ssh/onap_rsa
  Hostname gerrit2.ons.zone
  Protocol 2
  HostKeyAlgorithms ssh-rsa,ssh-dss
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null


# add pub key to gerrit

#create user, repo,pw
s0 admin
4zZvLiKKHWOvMBeRWZwUR5ls0SpPbgphEpyT1K3KLQ
gerrit
 eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA

s2
admin
myWWvmVLQfEpIzhGtcXWHKqxtHsSr31DXM4VXmcy1g

s4

test clone using admin user
git clone "ssh://admin@gerrit3.ons.zone:29418/test" && scp -p -P 29418 admin@gerrit3.ons.zone:hooks/commit-msg "test/.git/hooks/"


Docker compose

The template on https://hub.docker.com/r/gerritcodereview/gerrit has its indentation off for services.gerrit.volumes

Code Block
themeRDark
sudo curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

docker-compose.yaml
version: '3'
services:
  gerrit:
    image: gerritcodereview/gerrit
    volumes:
     - git-volume:/var/gerrit/git
     - db-volume:/var/gerrit/db
     - index-volume:/var/gerrit/index
     - cache-volume:/var/gerrit/cache
     # added
     - config-volume:/var/gerrit/etc
    ports:
     - "29418:29418"
     - "8080:8080"
volumes:
  git-volume:
  db-volume:
  index-volume:
  cache-volume:
  config-volume:

ubuntu@ip-172-31-31-191:~$ docker-compose up -d gerrit
Starting ubuntu_gerrit_1 ... done


todo: mount config-volume:/var/gerrit/etc so replication.config persists

ubuntu@ip-172-31-31-191:~$ docker-compose up -d gerrit
Creating network "ubuntu_default" with the default driver
Creating volume "ubuntu_config-volume" with default driver
Creating ubuntu_gerrit_1 ... done


DI 1: Kubernetes Gerrit Deployment - no HELM


DI 2: Helm Gerrit Deployment

DI 3: Gerrit Replication

https://gerrit.googlesource.com/plugins/replication/+doc/master/src/main/resources/Documentation/config.md

Code Block
themeRDark
# add the remote key to known_hosts
ubuntu@ip-172-31-15-176:~$ sudo ssh -i ~/.ssh/onap_rsa ubuntu@gerrit2.ons.zone

bash-4.2$ cat /var/gerrit/etc/gerrit.config
[gerrit]
	basePath = git
	serverId = 872dafaa-3220-4d2c-8f14-a191eec43a56
	canonicalWebUrl = http://487707f31650
[database]
	type = h2
	database = db/ReviewDB
[index]
	type = LUCENE
[auth]
	type = DEVELOPMENT_BECOME_ANY_ACCOUNT
[sendemail]
	smtpServer = localhost
[sshd]
	listenAddress = *:29418
[httpd]
	listenUrl = http://*:8080/
	filterClass = com.googlesource.gerrit.plugins.ootb.FirstTimeRedirect
	firstTimeRedirectUrl = /login/%23%2F?account_id=1000000
[cache]
	directory = cache
[plugins]
	allowRemoteAdmin = true
[container]
	javaOptions = "-Dflogger.backend_factory=com.google.common.flogger.backend.log4j.Log4jBackendFactory#getInstance"
	javaOptions = "-Dflogger.logging_context=com.google.gerrit.server.logging.LoggingContext#getInstance"
	user = gerrit
	javaHome = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre
	javaOptions = -Djava.security.egd=file:/dev/./urandom
[receive]
	enableSignedPush = false
[noteDb "changes"]
	autoMigrate = true


added
[remote "gerrit.ons.zone"]
    url = admin@gerrit.ons.zone:/some/path/test.git
[remote "pubmirror"]
    url = gerrit.ons.zone:/pub/git/test.git
    push = +refs/heads/*:refs/heads/*
    push = +refs/tags/*:refs/tags/*
    threads = 3
    authGroup = Public Mirror Group
    authGroup = Second Public Mirror Group
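The replication plugin substitutes `${name}` with the project name when it pushes to each remote; a url without the placeholder is rejected with a ConfigInvalidException (seen later in this triage). A minimal sketch of the expansion for the `test` project; the sed substitution is illustrative of what the plugin does internally:

```shell
# url as written in replication.config; ${name} is replaced per project
url='gerrit2.ons.zone:8080/${name}.git'
name=test
printf '%s\n' "$url" | sed "s/\${name}/$name/"
```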


20190327
obrienbiometrics:radar michaelobrien$ curl --user gerrit:JfJHDjTgZTT59FWY4KUza6MOvVChtO7dheffqbpLzQ -X POST http://gerrit.ons.zone:8080/config/server/reload
Authentication required
obrienbiometrics:radar michaelobrien$ curl --digest --user gerrit:JfJHDjTgZTT59FWY4KUza6MOvVChtO7dheffqbpLzQ -X POST http://gerrit.ons.zone:8080/config/server/reload
Authentication required
obrienbiometrics:radar michaelobrien$ curl --digest --user gerrit:eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA -X POST http://gerrit.ons.zone:8080/config/server/reload
Authentication required
obrienbiometrics:radar michaelobrien$ curl --digest --user gerrit:eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA -X POST http://gerrit.ons.zone:8080/a/config/server/reload
Unauthorized
obrienbiometrics:radar michaelobrien$ curl  --user gerrit:eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA -X POST http://gerrit.ons.zone:8080/a/config/server/reload
curl: option --uit:eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA: is unknown
curl: try 'curl --help' or 'curl --manual' for more information
obrienbiometrics:radar michaelobrien$ curl --user gerrit:eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA -X POST http://gerrit.ons.zone:8080/a/config/server/reload
administrate server not permitted

obrienbiometrics:radar michaelobrien$ curl --user admin:4zZvLiKKHWOvMBeRWZwUR5ls0SpPbgphEpyT1K3KLQ -X POST http://gerrit.ons.zone:8080/a/config/server/reload
)]}'
{}


curl --user admin:myWWvmVLQfEpIzhGtcXWHKqxtHsSr31DXM4VXmcy1g -X POST http://gerrit2.ons.zone:8080/a/config/server/reload
[2019-03-27 03:56:21,778] [HTTP-113] INFO  com.google.gerrit.server.config.GerritServerConfigReloader : Starting server configuration reload
[2019-03-27 03:56:21,781] [HTTP-113] INFO  com.google.gerrit.server.config.GerritServerConfigReloader : Server configuration reload completed succesfully

 curl --user admin:4zZvLiKKHWOvMBeRWZwUR5ls0SpPbgphEpyT1K3KLQ -X POST http://gerrit.ons.zone:8080/a/config/server/reload

no effect
obrienbiometrics:gerrit michaelobrien$ sudo ssh -p 29418 admin@gerrit.ons.zone replication list
obrienbiometrics:gerrit michaelobrien$ sudo ssh -p 29418 admin@gerrit.ons.zone replication start --all

obrienbiometrics:gerrit michaelobrien$ sudo ssh -p 29418 gerrit@gerrit.ons.zone replication start --all
startReplication for plugin replication not permitted

further
obrienbiometrics:gerrit michaelobrien$ sudo ssh -p 29418 admin@gerrit.ons.zone gerrit plugin reload replication
fatal: Unable to provision, see the following errors:


1) Error injecting constructor, org.eclipse.jgit.errors.ConfigInvalidException: remote.gerrit2.url "gerrit2.ons.zone:8080/test.git" lacks ${name} placeholder in /var/gerrit/etc/replication.config
fix
sudo docker exec -it nifty_einstein bash
bash-4.2$ vi /var/gerrit/etc/replication.config


  url = gerrit2.ons.zone:8080/${name}.git

ssh -p 29418 admin@gerrit.ons.zone gerrit plugin reload replication
[2019-03-28 03:27:02,329] [SSH gerrit plugin reload replication (admin)] INFO  com.google.gerrit.server.plugins.PluginLoader : Reloading plugin replication
[2019-03-28 03:27:02,507] [SSH gerrit plugin reload replication (admin)] INFO  com.google.gerrit.server.plugins.PluginLoader : Unloading plugin replication, version v2.16.6
[2019-03-28 03:27:02,513] [SSH gerrit plugin reload replication (admin)] INFO  com.google.gerrit.server.plugins.PluginLoader : Reloaded plugin replication, version v2.16.6


obrienbiometrics:gerrit michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication start --all

# need to create the mirror repo first - before replication
git clone "ssh://admin@gerrit.ons.zone:29418/test"


obrienbiometrics:test michaelobrien$ vi test.sh
obrienbiometrics:test michaelobrien$ git add test.sh 
obrienbiometrics:test michaelobrien$ ls
test.sh
obrienbiometrics:test michaelobrien$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
	new file:   test.sh

obrienbiometrics:test michaelobrien$ git commit -m "replication test 1"
[master ff27d21] replication test 1
 1 file changed, 1 insertion(+)
 create mode 100644 test.sh
obrienbiometrics:test michaelobrien$ git commit -s --amend
[master 609a5d5] replication test 1
 Date: Wed Mar 27 23:54:38 2019 -0400
 1 file changed, 1 insertion(+)
 create mode 100644 test.sh
obrienbiometrics:test michaelobrien$ git review
Your change was committed before the commit hook was installed.
Amending the commit to add a gerrit change id.
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.
remote: Processing changes: refs: 1, new: 1, done            
remote: 
remote: SUCCESS        
remote: 
remote: New Changes:        
remote:   http://83cfd4a6492e/c/test/+/1001 replication test 1        
remote: Pushing to refs/publish/* is deprecated, use refs/for/* instead.        
To ssh://gerrit.ons.zone:29418/test
 * [new branch]      HEAD -> refs/publish/master


[2019-03-28 03:55:17,606] [ReceiveCommits-1] INFO  com.googlesource.gerrit.plugins.hooks.HookFactory : hooks.commitReceivedHook resolved to /var/gerrit/hooks/commit-received [CONTEXT RECEIVE_ID="test-1553745317588-f20ce7db" ]
[2019-03-28 03:55:17,985] [ReceiveCommits-1] INFO  com.googlesource.gerrit.plugins.hooks.HookFactory : hooks.patchsetCreatedHook resolved to /var/gerrit/hooks/patchset-created [CONTEXT RECEIVE_ID="test-1553745317588-f20ce7db" ]

in the Gerrit UI, apply a +2 Code-Review vote and submit to merge the change:
[2019-03-28 03:56:53,148] [HTTP-232] INFO  com.googlesource.gerrit.plugins.hooks.HookFactory : hooks.commentAddedHook resolved to /var/gerrit/hooks/comment-added
[2019-03-28 03:57:06,388] [HTTP-240] INFO  com.googlesource.gerrit.plugins.hooks.HookFactory : hooks.submitHook resolved to /var/gerrit/hooks/submit [CONTEXT SUBMISSION_ID="1001-1553745426374-726360d5" ]
[2019-03-28 03:57:06,512] [HTTP-240] INFO  com.googlesource.gerrit.plugins.hooks.HookFactory : hooks.changeMergedHook resolved to /var/gerrit/hooks/change-merged [CONTEXT SUBMISSION_ID="1001-1553745426374-726360d5" ]

obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication list
Remote: gerrit2
Url: gerrit2.ons.zone:8080/${name}.git

verifying the replication config inside the Gerrit container
bash-4.2$ vi /var/gerrit/etc/replication.config

[remote "gerrit2"]
  url = gerrit2.ons.zone:/${name}.git
  push = +refs/heads/*:refs/heads/*
  push = +refs/tags/*:refs/tags/*
  threads = 3
  authGroup = Public Mirror Group
  authGroup = Second Public Mirror Group
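The leading `+` in the push refspecs above permits non-fast-forward updates, so the mirror is overwritten even after a history rewrite. A minimal local sketch of that behaviour with plain git (paths and identities are illustrative):

```shell
# simulate replication into a bare "mirror" using the same refspec shape
rm -rf /tmp/refspec-demo && mkdir -p /tmp/refspec-demo && cd /tmp/refspec-demo
git init -q --bare mirror.git
git init -q src && cd src
git -c user.name=a -c user.email=a@example.com commit -q --allow-empty -m one
git push -q ../mirror.git "+refs/heads/*:refs/heads/*"
# rewrite history, then force-replicate; without "+" this second push would fail
git -c user.name=a -c user.email=a@example.com commit -q --amend --allow-empty -m two
git push -q ../mirror.git "+refs/heads/*:refs/heads/*"
# the mirror now holds the rewritten commit
git --git-dir=../mirror.git log --oneline -1
```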

tried both user prefixes for the remote URL (git@ and gerrit2@):
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication list
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.
Remote: gerrit2
Url: git@gerrit2.ons.zone:/${name}.git
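`${name}` in the listed URL is a placeholder that the replication plugin expands to the project name at push time. An illustrative expansion with sed — the plugin does this internally; the values are taken from the listing above:

```shell
# expand the ${name} placeholder the way the plugin would for project "test"
name=test
url='git@gerrit2.ons.zone:/${name}.git'
echo "$url" | sed "s/\${name}/$name/"
# prints: git@gerrit2.ons.zone:/test.git
```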

obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication start --all --now
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone gerrit plugin reload replication
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication list
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.
Remote: gerrit2
Url: gerrit2@gerrit2.ons.zone:/${name}.git

obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication start --all --now
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.

set debug
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone gerrit logging set DEBUG
ssh -p 29418 admin@gerrit.ons.zone gerrit logging set reset

stopped
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication start --wait --all
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.

[2019-03-28 04:43:12,056] [SSH replication start --wait --all (admin)] ERROR com.google.gerrit.sshd.BaseCommand : Internal server error (user admin account 1000000) during replication start --wait --all
org.apache.sshd.common.channel.exception.SshChannelClosedException: flush(ChannelOutputStream[ChannelSession[id=0, recipient=0]-ServerSessionImpl[admin@/207.236.250.131:4058]] SSH_MSG_CHANNEL_DATA) length=0 - stream is already closed
	at org.apache.sshd.common.channel.ChannelOutputStream.flush(ChannelOutputStream.java:174)

tested replication to gitlab - pull ok
https://gitlab.com/obriensystems/test

name: task-pv-storage
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f pv-pod.yaml -n dev
pod/task-pv-pod created
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pods -n dev
NAME          READY   STATUS    RESTARTS   AGE
task-pv-pod   1/1     Running   0          53s

# test
ubuntu@ip-172-31-30-234:~$ vi tools-data1/index.html
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl exec -it task-pv-pod -n dev bash
root@task-pv-pod:/# apt-get update; apt-get install curl
root@task-pv-pod:/# curl localhost
hello world

Storage Classes

https://kubernetes.io/docs/concepts/storage/storage-classes/

Design Issues

DI 0: Raw Docker Gerrit Container for reference

https://gerrit.googlesource.com/docker-gerrit/

Code Block
themeMidnight

sudo docker run -ti -d -p 8080:8080 -p 29418:29418 gerritcodereview/gerrit


Links

https://kubernetes.io/docs/concepts/storage/storage-classes/