Gerrit
Config Jobs
login as default admin
create test repo
gerrit source - /var/gerrit/etc/replication.config

[remote "gerrit2"]
  url = admin@gerrit2.ons.zone:29418/${name}.git
  push = +refs/heads/*:refs/heads/*
  push = +refs/tags/*:refs/tags/*
Replication
Fixed in Gerrit 2.14 - see https://www.gerritcodereview.com/2.14.html
Replication Use Case - commit change
Make a change on gerrit, merge it, kick off the replication job, and view the change on gerrit2.
# 3 machines
# obriensystems dev laptop
# gerrit source server
# gerrit2 replication server

# on remote dev host - against gerrit
git clone "ssh://admin@gerrit.ons.zone:29418/test" && scp -p -P 29418 admin@gerrit.ons.zone:hooks/commit-msg "test/.git/hooks/"
cd test/
vi test.sh
git add test.sh
git commit -s --amend
git review
# getting merge conflict - needed to remove old commit id
vi test.sh
git add test.sh
git rebase --continue
git review
# move to gerrit UI, +2 review, merge

# on gerrit server
ssh ubuntu@gerrit.ons.zone
# tail the logs in the gerrit container

# on dev laptop
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone gerrit plugin reload replication
[2019-03-28 15:25:57,246] [SSH gerrit plugin reload replication (admin)] INFO com.google.gerrit.server.plugins.PluginLoader : Reloaded plugin replication, version v2.16.6
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication list
Remote: gerrit2
Url: admin@gerrit2.ons.zone:8080/${name}.git
[2019-03-28 15:26:57,963] [WorkQueue-1] INFO com.google.gerrit.server.plugins.CleanupHandle : Cleaned plugin plugin_replication_190328_0446_6094540689096397413.jar
# debug on
ssh -p 29418 admin@gerrit.ons.zone gerrit logging set DEBUG
# debug off
ssh -p 29418 admin@gerrit.ons.zone gerrit logging set reset
ssh -p 29418 admin@gerrit.ons.zone replication start --wait --all
# nothing yet - debugging the container, I only see a recent /var/gerrit/data/replication/ref-updates
-rw-r--r-- 1 gerrit gerrit 45 Mar 28 15:25 9cbb43eb3ce03badc8b3c7dc52ef84d8d6e67066
bash-4.2$ cat 9cbb43eb3ce03badc8b3c7dc52ef84d8d6e67066
{"project":"test","ref":"refs/heads/master"}

Issue was the key - after changing the url to url = admin@gerrit2.ons.zone:29418/${name}.git I can ssh directly from gerrit to gerrit2, but the key is n/a for the container yet.

sshd_log
[2019-03-28 15:57:50,164 +0000] b2bd0870 admin a/1000000 replication.start.--all 3ms 1ms 0

replication_log
[2019-03-28 17:34:07,816] [72da30d3] Replication to admin@gerrit2.ons.zone:29418/All-Users.git started...
[2019-03-28 17:34:07,834] [72da30d3] Cannot replicate to admin@gerrit2.ons.zone:29418/All-Users.git
org.eclipse.jgit.errors.TransportException: admin@gerrit2.ons.zone:29418/All-Users.git: reject HostKey: gerrit2.ons.zone
        at org.eclipse.jgit.transport.JschConfigSessionFactory.getSession(JschConfigSessionFactory.java:192)

# I am running hashed known_hosts
ubuntu@ip-172-31-15-176:~$ grep "HashKnownHosts" /etc/ssh/ssh_config
HashKnownHosts yes
According to https://groups.google.com/forum/#!topic/repo-discuss/9PTfVG8vdAU for https://github.com/eclipse/jgit/blob/master/org.eclipse.jgit/src/org/eclipse/jgit/transport/JschConfigSessionFactory.java#L191
the known_hosts file encoding is the issue - the host key entry needs to be ssh-rsa, not ecdsa-sha2-nistp256, which jgit is unhappy with.
ubuntu@ip-172-31-15-176:~$ cat ~/.ssh/known_hosts
|1|RFSqL1D1fCROw=|fcc8BqvMOekw0RLOz7Ts= ecdsa-sha2-nistp256 AAAAE...akI=

fix
ubuntu@ip-172-31-15-176:~$ ssh -v ubuntu@gerrit2.ons.zone 2>&1 | grep ~/.ssh/known_hosts
debug1: Found key in /home/ubuntu/.ssh/known_hosts:2
ubuntu@ip-172-31-15-176:~$ sudo vi ~/.ssh/config

Host remote-alias gerrit2.ons.zone
  IdentityFile ~/.ssh/onap_rsa

to

Host remote-alias gerrit2.ons.zone
  IdentityFile ~/.ssh/onap_rsa
  Hostname gerrit2.ons.zone
  Protocol 2
  HostKeyAlgorithms ssh-rsa,ssh-dss

# however with the fix - we see the correct known_hosts format but still rejected
ssh -p 29418 admin@gerrit.ons.zone replication start --all
[2019-03-28 20:21:22,239] [] scheduling replication All-Projects:..all.. => admin@gerrit2.ons.zone:29418/All-Projects.git
[2019-03-28 20:21:22,240] [] scheduled All-Projects:..all.. => [4e4e425c] push admin@gerrit2.ons.zone:29418/All-Projects.git to run after 15s
[2019-03-28 20:21:22,240] [] scheduling replication All-Users:..all.. => admin@gerrit2.ons.zone:29418/All-Users.git
[2019-03-28 20:21:22,241] [] scheduled All-Users:..all.. => [8e58ba23] push admin@gerrit2.ons.zone:29418/All-Users.git to run after 15s
[2019-03-28 20:21:22,241] [] scheduling replication test:..all.. => admin@gerrit2.ons.zone:29418/test.git
[2019-03-28 20:21:22,241] [] scheduled test:..all.. => (retry 1) [ae725e99] push admin@gerrit2.ons.zone:29418/test.git to run after 15s
[2019-03-28 20:21:31,880] [ae725e99] Replication to admin@gerrit2.ons.zone:29418/test.git started...
[2019-03-28 20:21:31,939] [ae725e99] Cannot replicate to admin@gerrit2.ons.zone:29418/test.git
org.eclipse.jgit.errors.TransportException: admin@gerrit2.ons.zone:29418/test.git: reject HostKey: gerrit2.ons.zone
        at org.eclipse.jgit.transport.JschConfigSessionFactory.getSession(JschConfigSessionFactory.java:192)
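The key-type check that jgit's jsch backend applies can be sanity-checked offline: it only matches a known_hosts entry of a key type it negotiated, so a quick grep tells you whether a usable ssh-rsa entry exists for the target host. The sample file and key strings below are illustrative, not the real host keys:

```shell
# Write illustrative known_hosts entries (fake keys) for the replication target
cat > /tmp/known_hosts.sample <<'EOF'
gerrit2.ons.zone ecdsa-sha2-nistp256 AAAAE2FrZQ==
gerrit2.ons.zone ssh-rsa AAAAB2FrZQ==
EOF
# Count ssh-rsa entries for the host - jsch-era jgit needs at least one
grep -c '^gerrit2\.ons\.zone ssh-rsa' /tmp/known_hosts.sample
```

To generate the real entry in the unhashed ssh-rsa form, ssh-keyscan -t rsa gerrit2.ons.zone emits exactly this format.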
Replication Use Case - new Repo
This should replicate to the slave, according to https://gerrit-review.googlesource.com/c/plugins/replication/+/49728/5/src/main/resources/Documentation/config.md, via createMissingRepositories, which defaults to true.
# action: create in gui
new http://gerrit.ons.zone:8080/admin/repos/test2

# in container on gerrit1
bash-4.2$ ls -la /var/gerrit/data/replication/ref-updates/
-rw-r--r-- 1 gerrit gerrit 46 Mar 28 15:45 608db0817a4694dc10ee1e0811c2f76b27d3d03f
bash-4.2$ cat 608db0817a4694dc10ee1e0811c2f76b27d3d03f
{"project":"test2","ref":"refs/heads/master"}
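If automatic repo creation needs to be made explicit (or disabled), the remote section in /var/gerrit/etc/replication.config takes the option directly. A sketch using this page's remote, with the option at its documented default:

```ini
[remote "gerrit2"]
  url = admin@gerrit2.ons.zone:29418/${name}.git
  createMissingRepositories = true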
Helm Charts
Or get the yamls via
https://gerrit.googlesource.com/k8s-gerrit/
not
https://github.com/helm/charts/tree/master/stable
Triage
following https://gerrit.googlesource.com/k8s-gerrit/+/master/helm-charts/gerrit-master/
https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner
https://github.com/helm/charts/blob/master/stable/nfs-server-provisioner/values.yaml
(look at https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client)
on vm2 in ~/google
sudo cp gerrit-master/values.yaml .
sudo vi values.yaml
# added hostname, pub key, cert
sudo helm install ./gerrit-master -n gerrit-master -f values.yaml
NAME: gerrit-master
LAST DEPLOYED: Wed Mar 20 19:03:40 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME TYPE DATA AGE
gerrit-master-gerrit-master-secure-config Opaque 1 0s
==> v1/ConfigMap
NAME DATA AGE
gerrit-master-gerrit-master-configmap 2 0s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gerrit-master-gerrit-master-logs-pvc Pending default 0s
gerrit-master-gerrit-master-db-pvc Pending default 0s
gerrit-master-git-gc-logs-pvc Pending default 0s
gerrit-master-git-filesystem-pvc Pending shared-storage 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gerrit-master-gerrit-master-service NodePort 10.43.111.61 <none> 80:31329/TCP 0s
==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
gerrit-master-gerrit-master-deployment 1 1 1 0 0s
==> v1beta1/CronJob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
gerrit-master-git-gc 0 6,18 * * * False 0 <none> 0s
==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
gerrit-master-gerrit-master-ingress s2.onap.info 80 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
gerrit-master-gerrit-master-deployment-7cb7f96767-xz45w 0/1 Pending 0 0s
NOTES:
A Gerrit master has been deployed.
==================================
Gerrit may be accessed under: s2.onap.info

kubectl get pvc --all-namespaces
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default gerrit-master-gerrit-master-db-pvc Pending default 4m
default gerrit-master-gerrit-master-logs-pvc Pending default 4m
default gerrit-master-git-filesystem-pvc Pending shared-storage 4m
default gerrit-master-git-gc-logs-pvc Pending default 4m

kubectl describe pod gerrit-master-gerrit-master-deployment-7cb7f96767-xz45w -n default
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4s (x17 over 2m) default-scheduler pod has unbound PersistentVolumeClaims
# evidently missing nfs dirs

ubuntu@bell2:~/google$ sudo helm list
NAME REVISION UPDATED STATUS CHART NAMESPACE
gerrit-master 1 Wed Mar 20 19:03:40 2019 DEPLOYED gerrit-master-0.1.0 default
ubuntu@bell2:~/google$ sudo helm delete gerrit-master --purge
release "gerrit-master" deleted

# via https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner
sudo helm install stable/nfs-server-provisioner --name nfs-server-prov
NAME: nfs-server-prov
LAST DEPLOYED: Wed Mar 20 19:31:04 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
nfs-server-prov-nfs-server-provisioner-0 0/1 ContainerCreating 0 0s
==> v1/StorageClass
NAME PROVISIONER AGE
nfs cluster.local/nfs-server-prov-nfs-server-provisioner 0s
==> v1/ServiceAccount
NAME SECRETS AGE
nfs-server-prov-nfs-server-provisioner 1 0s
==> v1/ClusterRole
NAME AGE
nfs-server-prov-nfs-server-provisioner 0s
==> v1/ClusterRoleBinding
NAME AGE
nfs-server-prov-nfs-server-provisioner 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nfs-server-prov-nfs-server-provisioner ClusterIP 10.43.249.72 <none> 2049/TCP,20048/TCP,51413/TCP,51413/UDP 0s
==> v1beta2/StatefulSet
NAME DESIRED CURRENT AGE
nfs-server-prov-nfs-server-provisioner 1 1 0s
NOTES:
The NFS Provisioner service has now been
installed. A storage class named 'nfs' has now been created and is available to provision dynamic volumes.
You can use this storageclass by creating a `PersistentVolumeClaim` with the correct storageClassName attribute. For example:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-dynamic-volume-claim
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

default nfs-server-prov-nfs-server-provisioner-0 1/1 Running 0 1m

kubectl describe pvc gerrit-master-gerrit-master-db-pvc
Events:
Warning ProvisioningFailed 13s (x6 over 1m) persistentvolume-controller storageclass.storage.k8s.io "default" not found

# set creation to true under storageClass (default and shared)
create: true

# further
ExternalProvisioning 3s (x2 over 18s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "nfs" or manually created by system administrator
# need to do a detailed dive into SC provisioners

# i have this unbound PVC - because I have not created the NFS share yet via the prov
default gerrit-master-git-filesystem-pvc Pending shared-storage 2m
# want this bound PVC+PV
inf gerrit-var-gerrit-review-site Bound pvc-6d2c642b-c278-11e8-8679-f4034344e778 6Gi RWX nfs-sc 174d
pvc-6d2c642b-c278-11e8-8679-f4034344e778 6Gi RWX Delete Bound inf/gerrit-var-gerrit-review-site nfs-sc 174d
Rest API
curl -i -H "Accept: application/json" http://server:8080/config/server/info
curl -i -H "Accept: application/json" http://server:8080/config/server/version

# reload config
# don't use --digest, and add /a for authenticated posts
curl --user admin:myWWv -X POST http://server:8080/a/config/server/reload
[2019-03-27 03:56:21,778] [HTTP-113] INFO com.google.gerrit.server.config.GerritServerConfigReloader : Starting server configuration reload
[2019-03-27 03:56:21,781] [HTTP-113] INFO com.google.gerrit.server.config.GerritServerConfigReloader : Server configuration reload completed succesfully
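Note that Gerrit prefixes its JSON REST responses with the magic )]}' line to defeat cross-site script inclusion, so strip the first line before handing the body to a JSON parser. A minimal offline sketch with a stand-in payload (the docUrl field is illustrative, not a real response):

```shell
# Stand-in for a Gerrit REST response body; real responses start with )]}'
resp=")]}'
{\"gerrit\":{\"docUrl\":\"/Documentation/\"}}"
# Drop the XSSI guard line, leaving parseable JSON
printf '%s\n' "$resp" | tail -n +2
```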
Jenkins
Nexus
GoCD
GitLab
Links
https://kubernetes.io/docs/concepts/storage/storage-classes/
Baseline Testing
Verify your environment by installing the default mysql chart
ubuntu@ip-172-31-3-87:~$ sudo helm install --name mysqldb --set mysqlRootPassword=myrootpass,mysqlUser=myuser,mysqlPassword=mypass,mysqlDatabase=mydb stable/mysql
NAME: mysqldb
LAST DEPLOYED: Thu Mar 21 16:06:02 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
mysqldb-test 1 0s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysqldb Pending 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysqldb ClusterIP 10.43.186.39 <none> 3306/TCP 0s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
mysqldb 1 1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mysqldb-979887bcf-4hf59 0/1 Pending 0 0s
==> v1/Secret
NAME TYPE DATA AGE
mysqldb Opaque 2 0s
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysqldb.default.svc.cluster.local

To get your root password run:
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysqldb -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:
1. Run an Ubuntu pod that you can use as a client:
   kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
2. Install the mysql client:
   $ apt-get update && apt-get install mysql-client -y
3. Connect using the mysql cli, then provide your password:
   $ mysql -h mysqldb -p

To connect to your database directly from outside the K8s cluster:
MYSQL_HOST=127.0.0.1
MYSQL_PORT=3306
# Execute the following command to route the connection:
kubectl port-forward svc/mysqldb 3306
mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
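The password retrieval in the chart notes is just a base64 decode of the Secret field; the round-trip can be sanity-checked offline (sample value, not a real credential):

```shell
# Encode a sample password the way Kubernetes stores Secret data,
# then decode it the way the chart notes do
encoded=$(printf 'myrootpass' | base64)
echo "$encoded"
printf '%s' "$encoded" | base64 --decode
```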
DevOps
Kubernetes Cluster Install
Follow the RKE setup in OOM RKE Kubernetes Deployment#Quickstart
Kubernetes Services
Namespaces
Create a specific namespace
https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/
vi namespace-dev.json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "dev",
    "labels": {
      "name": "dev"
    }
  }
}
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f namespace-dev.json
namespace/dev created
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get namespaces --show-labels
NAME STATUS AGE LABELS
default Active 5d <none>
dev Active 44s name=dev
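kubectl accepts the same namespace definition in YAML, the form the rest of this page uses for its manifests; the equivalent of namespace-dev.json:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    name: dev
```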
Contexts
ubuntu@ip-172-31-30-234:~/helm/book$ sudo kubectl config set-context dev --namespace=dev --cluster=local --user=local
Context "dev" created.
ubuntu@ip-172-31-30-234:~/helm/book$ sudo kubectl config use-context dev
Switched to context "dev".
Storage
Volumes
https://kubernetes.io/docs/concepts/storage/volumes/
hostPath
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
Persistent Volumes
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/ubuntu/tools-data1"

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f hostpath-volume.yaml -n dev
persistentvolume/task-pv-volume created
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 5Gi RWO Retain Available manual 2m
Persistent Volume Claims
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f hostpath-pvc.yaml -n dev
persistentvolumeclaim/task-pv-claim created
# check bound status
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 5Gi RWO Retain Bound dev/task-pv-claim manual 7m
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pvc -n dev
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
task-pv-claim Bound task-pv-volume 5Gi RWO manual 1m

vi pv-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f pv-pod.yaml -n dev
pod/task-pv-pod created
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
task-pv-pod 1/1 Running 0 53s

# test
ubuntu@ip-172-31-30-234:~$ vi tools-data1/index.html
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl exec -it task-pv-pod -n dev bash
root@task-pv-pod:/# apt-get update; apt-get install curl
root@task-pv-pod:/# curl localhost
hello world
Storage Classes
https://kubernetes.io/docs/concepts/storage/storage-classes/
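The storageclass.storage.k8s.io "default" not found failure seen earlier on this page goes away once a class with that name exists. A minimal sketch, assuming the nfs-server-provisioner release installed above (the provisioner name is taken from its helm output):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
provisioner: cluster.local/nfs-server-prov-nfs-server-provisioner
```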
Design Issues
DI 0: Raw Docker Gerrit Container for reference - default H2
https://gerrit.googlesource.com/docker-gerrit/
sudo docker run --name gerrit -ti -d -p 8080:8080 -p 29418:29418 gerritcodereview/gerrit
ubuntu@ip-172-31-15-176:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
83cfd4a6492e gerritcodereview/gerrit "/bin/sh -c 'git con…" 3 minutes ago Up 3 minutes 0.0.0.0:8080->8080/tcp, 0.0.0.0:29418->29418/tcp nifty_einstein
# check http://localhost:8080
# copy key
sudo scp ~/wse_onap/onap_rsa ubuntu@gerrit2.ons.zone:~/
ubuntu@ip-172-31-31-191:~$ sudo chmod 400 onap_rsa
ubuntu@ip-172-31-31-191:~$ sudo cp onap_rsa ~/.ssh/
ubuntu@ip-172-31-31-191:~$ sudo chown ubuntu:ubuntu ~/.ssh/onap_rsa
# cat your key
ssh-keyscan -t rsa gerrit2.ons.zone
in the format
gerrit2.ons.zone ssh-rsa key

# add ~/.ssh/config
Host remote-alias gerrit2.ons.zone
  IdentityFile ~/.ssh/onap_rsa
  Hostname gerrit2.ons.zone
  Protocol 2
  HostKeyAlgorithms ssh-rsa,ssh-dss
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

# add pub key to gerrit
# create user, repo, pw
s0 admin 4zZvLiKKHWOvMBeRWZwUR5ls0SpPbgphEpyT1K3KLQ
gerrit eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA
s2 admin myWWvmVLQfEpIzhGtcXWHKqxtHsSr31DXM4VXmcy1g
s4 test clone using admin user
git clone "ssh://admin@gerrit3.ons.zone:29418/test" && scp -p -P 29418 admin@gerrit3.ons.zone:hooks/commit-msg "test/.git/hooks/"
Docker compose
The template on https://hub.docker.com/r/gerritcodereview/gerrit has its indentation off for services.gerrit.volumes
sudo curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

docker-compose.yaml
version: '3'
services:
  gerrit:
    image: gerritcodereview/gerrit
    volumes:
      - git-volume:/var/gerrit/git
      - db-volume:/var/gerrit/db
      - index-volume:/var/gerrit/index
      - cache-volume:/var/gerrit/cache
      # added
      - config-volume:/var/gerrit/etc
    ports:
      - "29418:29418"
      - "8080:8080"
volumes:
  git-volume:
  db-volume:
  index-volume:
  cache-volume:
  config-volume:

ubuntu@ip-172-31-31-191:~$ docker-compose up -d gerrit
Starting ubuntu_gerrit_1 ... done
todo: missing for replication.config
config-volume:/var/gerrit/etc
ubuntu@ip-172-31-31-191:~$ docker-compose up -d gerrit
Creating network "ubuntu_default" with the default driver
Creating volume "ubuntu_config-volume" with default driver
Creating ubuntu_gerrit_1 ... done
DI 1: Kubernetes Gerrit Deployment - no HELM
DI 2: Helm Gerrit Deployment
DI 3: Gerrit Replication
# add the remote key to known_hosts
ubuntu@ip-172-31-15-176:~$ sudo ssh -i ~/.ssh/onap_rsa ubuntu@gerrit2.ons.zone

bash-4.2$ cat /var/gerrit/etc/gerrit.config
[gerrit]
  basePath = git
  serverId = 872dafaa-3220-4d2c-8f14-a191eec43a56
  canonicalWebUrl = http://487707f31650
[database]
  type = h2
  database = db/ReviewDB
[index]
  type = LUCENE
[auth]
  type = DEVELOPMENT_BECOME_ANY_ACCOUNT
[sendemail]
  smtpServer = localhost
[sshd]
  listenAddress = *:29418
[httpd]
  listenUrl = http://*:8080/
  filterClass = com.googlesource.gerrit.plugins.ootb.FirstTimeRedirect
  firstTimeRedirectUrl = /login/%23%2F?account_id=1000000
[cache]
  directory = cache
[plugins]
  allowRemoteAdmin = true
[container]
  javaOptions = "-Dflogger.backend_factory=com.google.common.flogger.backend.log4j.Log4jBackendFactory#getInstance"
  javaOptions = "-Dflogger.logging_context=com.google.gerrit.server.logging.LoggingContext#getInstance"
  user = gerrit
  javaHome = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre
  javaOptions = -Djava.security.egd=file:/dev/./urandom
[receive]
  enableSignedPush = false
[noteDb "changes"]
  autoMigrate = true

added
[remote "gerrit.ons.zone"]
  url = admin@gerrit.ons.zone:/some/path/test.git
[remote "pubmirror"]
  url = gerrit.ons.zone:/pub/git/test.git
  push = +refs/heads/*:refs/heads/*
  push = +refs/tags/*:refs/tags/*
  threads = 3
  authGroup = Public Mirror Group
  authGroup = Second Public Mirror Group

20190327
obrienbiometrics:radar michaelobrien$ curl --user gerrit:JfJHDjTgZTT59FWY4KUza6MOvVChtO7dheffqbpLzQ -X POST http://gerrit.ons.zone:8080/config/server/reload
Authentication required
obrienbiometrics:radar michaelobrien$ curl --digest --user gerrit:JfJHDjTgZTT59FWY4KUza6MOvVChtO7dheffqbpLzQ -X POST http://gerrit.ons.zone:8080/config/server/reload
Authentication required
obrienbiometrics:radar michaelobrien$ curl --digest --user gerrit:eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA -X POST http://gerrit.ons.zone:8080/config/server/reload
Authentication required
obrienbiometrics:radar michaelobrien$ curl --digest --user gerrit:eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA -X POST http://gerrit.ons.zone:8080/a/config/server/reload
Unauthorized
obrienbiometrics:radar michaelobrien$ curl --user gerrit:eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA -X POST http://gerrit.ons.zone:8080/a/config/server/reload
curl: option --uit:eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA: is unknown
curl: try 'curl --help' or 'curl --manual' for more information
obrienbiometrics:radar michaelobrien$ curl --user gerrit:eMzz9n5lWnnWpGJTqhJcc2Pk/FFfWYRlp9mzvrwnJA -X POST http://gerrit.ons.zone:8080/a/config/server/reload
administrate server not permitted
obrienbiometrics:radar michaelobrien$ curl --user admin:4zZvLiKKHWOvMBeRWZwUR5ls0SpPbgphEpyT1K3KLQ -X POST http://gerrit.ons.zone:8080/a/config/server/reload
)]}'
{}
curl --user admin:myWWvmVLQfEpIzhGtcXWHKqxtHsSr31DXM4VXmcy1g -X POST http://gerrit2.ons.zone:8080/a/config/server/reload
[2019-03-27 03:56:21,778] [HTTP-113] INFO com.google.gerrit.server.config.GerritServerConfigReloader : Starting server configuration reload
[2019-03-27 03:56:21,781] [HTTP-113] INFO com.google.gerrit.server.config.GerritServerConfigReloader : Server configuration reload completed succesfully
curl --user admin:4zZvLiKKHWOvMBeRWZwUR5ls0SpPbgphEpyT1K3KLQ -X POST http://gerrit.ons.zone:8080/a/config/server/reload
no effect

obrienbiometrics:gerrit michaelobrien$ sudo ssh -p 29418 admin@gerrit.ons.zone replication list
obrienbiometrics:gerrit michaelobrien$ sudo ssh -p 29418 admin@gerrit.ons.zone replication start --all
obrienbiometrics:gerrit michaelobrien$ sudo ssh -p 29418 gerrit@gerrit.ons.zone replication start --all
startReplication for plugin replication not permitted

further
obrienbiometrics:gerrit michaelobrien$ sudo ssh -p 29418 admin@gerrit.ons.zone gerrit plugin reload replication
fatal: Unable to provision, see the following errors:
1) Error injecting constructor, org.eclipse.jgit.errors.ConfigInvalidException: remote.gerrit2.url "gerrit2.ons.zone:8080/test.git" lacks ${name} placeholder in /var/gerrit/etc/replication.config

fix
sudo docker exec -it nifty_einstein bash
bash-4.2$ vi /var/gerrit/etc/replication.config
url = gerrit2.ons.zone:8080/${name}.git

ssh -p 29418 admin@gerrit.ons.zone gerrit plugin reload replication
[2019-03-28 03:27:02,329] [SSH gerrit plugin reload replication (admin)] INFO com.google.gerrit.server.plugins.PluginLoader : Reloading plugin replication
[2019-03-28 03:27:02,507] [SSH gerrit plugin reload replication (admin)] INFO com.google.gerrit.server.plugins.PluginLoader : Unloading plugin replication, version v2.16.6
[2019-03-28 03:27:02,513] [SSH gerrit plugin reload replication (admin)] INFO com.google.gerrit.server.plugins.PluginLoader : Reloaded plugin replication, version v2.16.6
obrienbiometrics:gerrit michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication start --all

# need to create the mirror repo first - before replication
git clone "ssh://admin@gerrit.ons.zone:29418/test"
obrienbiometrics:test michaelobrien$ vi test.sh
obrienbiometrics:test michaelobrien$ git add test.sh
obrienbiometrics:test michaelobrien$ ls
test.sh
obrienbiometrics:test michaelobrien$ git status
On branch master
Your branch is up to date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
	new file: test.sh
obrienbiometrics:test michaelobrien$ git commit -m "replication test 1"
[master ff27d21] replication test 1
 1 file changed, 1 insertion(+)
 create mode 100644 test.sh
obrienbiometrics:test michaelobrien$ git commit -s --amend
[master 609a5d5] replication test 1
 Date: Wed Mar 27 23:54:38 2019 -0400
 1 file changed, 1 insertion(+)
 create mode 100644 test.sh
obrienbiometrics:test michaelobrien$ git review
Your change was committed before the commit hook was installed.
Amending the commit to add a gerrit change id.
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.
remote: Processing changes: refs: 1, new: 1, done
remote:
remote: SUCCESS
remote:
remote: New Changes:
remote: http://83cfd4a6492e/c/test/+/1001 replication test 1
remote: Pushing to refs/publish/* is deprecated, use refs/for/* instead.
To ssh://gerrit.ons.zone:29418/test
 * [new branch] HEAD -> refs/publish/master
[2019-03-28 03:55:17,606] [ReceiveCommits-1] INFO com.googlesource.gerrit.plugins.hooks.HookFactory : hooks.commitReceivedHook resolved to /var/gerrit/hooks/commit-received [CONTEXT RECEIVE_ID="test-1553745317588-f20ce7db" ]
[2019-03-28 03:55:17,985] [ReceiveCommits-1] INFO com.googlesource.gerrit.plugins.hooks.HookFactory : hooks.patchsetCreatedHook resolved to /var/gerrit/hooks/patchset-created [CONTEXT RECEIVE_ID="test-1553745317588-f20ce7db" ]

in gerrit +2 and merge
[2019-03-28 03:56:53,148] [HTTP-232] INFO com.googlesource.gerrit.plugins.hooks.HookFactory : hooks.commentAddedHook resolved to /var/gerrit/hooks/comment-added
[2019-03-28 03:57:06,388] [HTTP-240] INFO com.googlesource.gerrit.plugins.hooks.HookFactory : hooks.submitHook resolved to /var/gerrit/hooks/submit [CONTEXT SUBMISSION_ID="1001-1553745426374-726360d5" ]
[2019-03-28 03:57:06,512] [HTTP-240] INFO com.googlesource.gerrit.plugins.hooks.HookFactory : hooks.changeMergedHook resolved to /var/gerrit/hooks/change-merged [CONTEXT SUBMISSION_ID="1001-1553745426374-726360d5" ]

obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication list
Remote: gerrit2
Url: gerrit2.ons.zone:8080/${name}.git

verifying
bash-4.2$ vi /var/gerrit/etc/replication.config
[remote "gerrit2"]
  url = gerrit2.ons.zone:/${name}.git
  push = +refs/heads/*:refs/heads/*
  push = +refs/tags/*:refs/tags/*
  threads = 3
  authGroup = Public Mirror Group
  authGroup = Second Public Mirror Group

tried both
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication list
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.
Remote: gerrit2
Url: git@gerrit2.ons.zone:/${name}.git
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication start --all --now
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone gerrit plugin reload replication
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication list
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.
Remote: gerrit2
Url: gerrit2@gerrit2.ons.zone:/${name}.git
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication start --all --now
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.

set debug
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone gerrit logging set DEBUG
ssh -p 29418 admin@gerrit.ons.zone gerrit logging set reset

stopped
obrienbiometrics:test michaelobrien$ ssh -p 29418 admin@gerrit.ons.zone replication start --wait --all
Warning: Permanently added '[gerrit.ons.zone]:29418,[13.58.152.222]:29418' (ECDSA) to the list of known hosts.
[2019-03-28 04:43:12,056] [SSH replication start --wait --all (admin)] ERROR com.google.gerrit.sshd.BaseCommand : Internal server error (user admin account 1000000) during replication start --wait --all
org.apache.sshd.common.channel.exception.SshChannelClosedException: flush(ChannelOutputStream[ChannelSession[id=0, recipient=0]-ServerSessionImpl[admin@/207.236.250.131:4058]] SSH_MSG_CHANNEL_DATA) length=0 - stream is already closed
	at org.apache.sshd.common.channel.ChannelOutputStream.flush(ChannelOutputStream.java:174)

tested replication to gitlab - pull ok
https://gitlab.com/obriensystems/test
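A recurring failure above is a remote url lacking the ${name} placeholder: the replication plugin substitutes the project name into the url per push, so a url without it cannot address individual repos. The substitution can be sketched offline with sed (the url and project name are the ones used on this page; the sed call is an illustration, not the plugin's actual code):

```shell
# replication.config remote url with the required ${name} placeholder
url='admin@gerrit2.ons.zone:29418/${name}.git'
# Expand the placeholder with the project name, as the plugin does per push
echo "$url" | sed 's/${name}/test/'
```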