

Gerrit


Replication

https://gerrit.googlesource.com/plugins/replication/+doc/master/src/main/resources/Documentation/config.md


Helm Charts

Get the charts (and their YAML) from

https://gerrit.googlesource.com/k8s-gerrit/

not from the community charts at

https://github.com/helm/charts/tree/master/stable

Triage

Following the gerrit-master chart at https://gerrit.googlesource.com/k8s-gerrit/+/master/helm-charts/gerrit-master/

https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner

https://github.com/helm/charts/blob/master/stable/nfs-server-provisioner/values.yaml

(look at https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client)

on vm2 in ~/google

sudo cp gerrit-master/values.yaml .
sudo vi values.yaml 
# added hostname, key, cert
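The fields added in values.yaml were roughly these (the exact key names here are assumptions, not verified against the chart — confirm against gerrit-master/values.yaml):

```yaml
# hypothetical key names - confirm against the chart's own values.yaml
ingress:
  host: s2.onap.info          # hostname used by this deployment
  tls:
    cert: |
      -----BEGIN CERTIFICATE-----
      ...
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      ...
```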

sudo helm install ./gerrit-master -n gerrit-master -f values.yaml 
NAME:   gerrit-master
LAST DEPLOYED: Wed Mar 20 19:03:40 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME                                       TYPE    DATA  AGE
gerrit-master-gerrit-master-secure-config  Opaque  1     0s
==> v1/ConfigMap
NAME                                   DATA  AGE
gerrit-master-gerrit-master-configmap  2     0s
==> v1/PersistentVolumeClaim
NAME                                  STATUS   VOLUME          CAPACITY  ACCESS MODES  STORAGECLASS  AGE
gerrit-master-gerrit-master-logs-pvc  Pending  default         0s
gerrit-master-gerrit-master-db-pvc    Pending  default         0s
gerrit-master-git-gc-logs-pvc         Pending  default         0s
gerrit-master-git-filesystem-pvc      Pending  shared-storage  0s
==> v1/Service
NAME                                 TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
gerrit-master-gerrit-master-service  NodePort  10.43.111.61  <none>       80:31329/TCP  0s
==> v1/Deployment
NAME                                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
gerrit-master-gerrit-master-deployment  1        1        1           0          0s
==> v1beta1/CronJob
NAME                  SCHEDULE      SUSPEND  ACTIVE  LAST SCHEDULE  AGE
gerrit-master-git-gc  0 6,18 * * *  False    0       <none>         0s
==> v1beta1/Ingress
NAME                                 HOSTS            ADDRESS  PORTS  AGE
gerrit-master-gerrit-master-ingress  s2.onap.info  80       0s
==> v1/Pod(related)
NAME                                                     READY  STATUS   RESTARTS  AGE
gerrit-master-gerrit-master-deployment-7cb7f96767-xz45w  0/1    Pending  0         0s
NOTES:
A Gerrit master has been deployed.
==================================
Gerrit may be accessed under: s2.onap.info


kubectl get pvc --all-namespaces
NAMESPACE   NAME                                   STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS     AGE
default     gerrit-master-gerrit-master-db-pvc     Pending                                       default          4m
default     gerrit-master-gerrit-master-logs-pvc   Pending                                       default          4m
default     gerrit-master-git-filesystem-pvc       Pending                                       shared-storage   4m
default     gerrit-master-git-gc-logs-pvc          Pending                                       default          4m

kubectl describe pod gerrit-master-gerrit-master-deployment-7cb7f96767-xz45w -n default
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  4s (x17 over 2m)  default-scheduler  pod has unbound PersistentVolumeClaims

# evidently the NFS-backed storage is missing - the PVCs cannot bind
ubuntu@bell2:~/google$ sudo helm list
NAME         	REVISION	UPDATED                 	STATUS  	CHART              	NAMESPACE
gerrit-master	1       	Wed Mar 20 19:03:40 2019	DEPLOYED	gerrit-master-0.1.0	default  

ubuntu@bell2:~/google$ sudo helm delete gerrit-master --purge
release "gerrit-master" deleted



# via https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner
sudo helm install stable/nfs-server-provisioner --name nfs-server-prov
NAME:   nfs-server-prov
LAST DEPLOYED: Wed Mar 20 19:31:04 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME                                      READY  STATUS             RESTARTS  AGE
nfs-server-prov-nfs-server-provisioner-0  0/1    ContainerCreating  0         0s
==> v1/StorageClass
NAME  PROVISIONER                                           AGE
nfs   cluster.local/nfs-server-prov-nfs-server-provisioner  0s
==> v1/ServiceAccount
NAME                                    SECRETS  AGE
nfs-server-prov-nfs-server-provisioner  1        0s
==> v1/ClusterRole
NAME                                    AGE
nfs-server-prov-nfs-server-provisioner  0s
==> v1/ClusterRoleBinding
NAME                                    AGE
nfs-server-prov-nfs-server-provisioner  0s
==> v1/Service
NAME                                    TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)                                 AGE
nfs-server-prov-nfs-server-provisioner  ClusterIP  10.43.249.72  <none>       2049/TCP,20048/TCP,51413/TCP,51413/UDP  0s
==> v1beta2/StatefulSet
NAME                                    DESIRED  CURRENT  AGE
nfs-server-prov-nfs-server-provisioner  1        1        0s
NOTES:
The NFS Provisioner service has now been installed.
A storage class named 'nfs' has now been created
and is available to provision dynamic volumes.
You can use this storageclass by creating a `PersistentVolumeClaim` with the
correct storageClassName attribute. For example:
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-dynamic-volume-claim
    spec:
      storageClassName: "nfs"
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi


# the provisioner pod comes up:
default         nfs-server-prov-nfs-server-provisioner-0   1/1       Running     0          1m


kubectl describe pvc gerrit-master-gerrit-master-db-pvc 
Events:
  Warning  ProvisioningFailed  13s (x6 over 1m)  persistentvolume-controller  storageclass.storage.k8s.io "default" not found


# fix: in values.yaml, set create: true under storageClasses for both classes (default and shared)
create: true
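A sketch of the values.yaml change (the storageClasses layout is assumed from the k8s-gerrit chart and not verified; the provisioner name is the one reported by the nfs-server-provisioner install above):

```yaml
# assumed key layout - verify against gerrit-master/values.yaml
storageClasses:
  default:
    name: default
    create: true    # let the chart create the class
    provisioner: cluster.local/nfs-server-prov-nfs-server-provisioner
  shared:
    name: shared-storage
    create: true
    provisioner: cluster.local/nfs-server-prov-nfs-server-provisioner
```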


# with the classes created, provisioning now waits on the external NFS provisioner:
 ExternalProvisioning  3s (x2 over 18s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "nfs" or manually created by system administrator




# need to do a detailed dive into SC provisioners


# current state: an unbound PVC - the NFS share has not yet been created via the provisioner
default     gerrit-master-git-filesystem-pvc       Pending                                       shared-storage   2m
# target state: a bound PVC+PV pair, e.g. from another cluster:
inf         gerrit-var-gerrit-review-site      Bound     pvc-6d2c642b-c278-11e8-8679-f4034344e778   6Gi        RWX           nfs-sc   174d
pvc-6d2c642b-c278-11e8-8679-f4034344e778   6Gi        RWX           Delete          Bound     inf/gerrit-var-gerrit-review-site      nfs-sc             174d


Jenkins

Nexus

GoCD

GitLab


Baseline Testing

Verify your environment by installing the default mysql chart.

ubuntu@ip-172-31-3-87:~$ sudo helm install --name mysqldb --set mysqlRootPassword=myrootpass,mysqlUser=myuser,mysqlPassword=mypass,mysqlDatabase=mydb stable/mysql
NAME:   mysqldb
LAST DEPLOYED: Thu Mar 21 16:06:02 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME          DATA  AGE
mysqldb-test  1     0s
==> v1/PersistentVolumeClaim
NAME     STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mysqldb  Pending  0s
==> v1/Service
NAME     TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
mysqldb  ClusterIP  10.43.186.39  <none>       3306/TCP  0s
==> v1beta1/Deployment
NAME     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mysqldb  1        1        1           0          0s
==> v1/Pod(related)
NAME                     READY  STATUS   RESTARTS  AGE
mysqldb-979887bcf-4hf59  0/1    Pending  0         0s
==> v1/Secret
NAME     TYPE    DATA  AGE
mysqldb  Opaque  2     0s
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysqldb.default.svc.cluster.local
To get your root password run:
    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysqldb -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
1. Run an Ubuntu pod that you can use as a client:
    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
2. Install the mysql client:
    $ apt-get update && apt-get install mysql-client -y
3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysqldb -p
To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306
    # Execute the following command to route the connection:
    kubectl port-forward svc/mysqldb 3306
    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
   

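The password-retrieval one-liner above is a jsonpath lookup piped through base64 --decode; the decode half can be sketched standalone (using a locally encoded stand-in, since no live cluster is assumed):

```shell
# stand-in for the secret's .data.mysql-root-password field
# (base64 of "myrootpass", the value passed via --set above)
ENCODED="bXlyb290cGFzcw=="

# same decode pipeline the chart NOTES use
MYSQL_ROOT_PASSWORD=$(echo "$ENCODED" | base64 --decode)
echo "$MYSQL_ROOT_PASSWORD"   # prints: myrootpass
```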

Links

https://kubernetes.io/docs/concepts/storage/storage-classes/




DevOps

Kubernetes Cluster Install

Follow the RKE setup in OOM RKE Kubernetes Deployment#Quickstart

Kubernetes Services

Namespaces

Create a specific namespace

https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/

vi namespace-dev.json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "dev",
    "labels": {
      "name": "dev"
    }
  }
}
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f namespace-dev.json 
namespace/dev created
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get namespaces --show-labels
NAME            STATUS    AGE       LABELS
default         Active    5d        <none>
dev             Active    44s       name=dev
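The same namespace can be declared in YAML instead of JSON; kubectl accepts either:

```yaml
# namespace-dev.yaml - equivalent to the JSON manifest above
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    name: dev
```

kubectl create -f namespace-dev.yaml gives the same result.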

Contexts

ubuntu@ip-172-31-30-234:~/helm/book$ sudo kubectl config set-context dev --namespace=dev --cluster=local --user=local
Context "dev" created.
ubuntu@ip-172-31-30-234:~/helm/book$ sudo kubectl config use-context dev
Switched to context "dev".


Storage

Volumes

https://kubernetes.io/docs/concepts/storage/volumes/

hostPath

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

Persistent Volumes

kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/ubuntu/tools-data1"

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f hostpath-volume.yaml -n dev
persistentvolume/task-pv-volume created
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pv 
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
task-pv-volume   5Gi        RWO            Retain           Available             manual                   2m



Persistent Volume Claims

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f hostpath-pvc.yaml -n dev
persistentvolumeclaim/task-pv-claim created


# check bound status - the 3Gi claim binds the 5Gi manual PV (class and access mode match; the capacity shown is the PV's)
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pv 
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM               STORAGECLASS   REASON    AGE
task-pv-volume   5Gi        RWO            Retain           Bound     dev/task-pv-claim   manual                   7m

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pvc -n dev
NAME            STATUS    VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim   Bound     task-pv-volume   5Gi        RWO            manual         1m


vi pv-pod.yaml


kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
       claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

ubuntu@ip-172-31-30-234:~/helm/book$ kubectl create -f pv-pod.yaml -n dev
pod/task-pv-pod created


ubuntu@ip-172-31-30-234:~/helm/book$ kubectl get pods -n dev
NAME          READY     STATUS    RESTARTS   AGE
task-pv-pod   1/1       Running   0          53s


# test
ubuntu@ip-172-31-30-234:~$ vi tools-data1/index.html
ubuntu@ip-172-31-30-234:~/helm/book$ kubectl exec -it task-pv-pod -n dev bash
root@task-pv-pod:/# apt-get update; apt-get install curl
root@task-pv-pod:/# curl localhost
hello world


Storage Classes

https://kubernetes.io/docs/concepts/storage/storage-classes/
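For reference, a minimal StorageClass manifest wired to the NFS provisioner installed earlier on this page (reclaimPolicy is an assumption; the gerrit-master chart's PVCs ask for a class named default):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default        # the class name the gerrit-master PVCs request
provisioner: cluster.local/nfs-server-prov-nfs-server-provisioner
reclaimPolicy: Delete  # assumed; Retain keeps volumes after claim deletion
```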

Design Issues

DI 0: Raw Docker Gerrit Container for reference - default H2

https://gerrit.googlesource.com/docker-gerrit/

sudo docker run -ti -d -p 8080:8080 -p 29418:29418 gerritcodereview/gerrit
ubuntu@ip-172-31-15-176:~$ sudo docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                              NAMES
83cfd4a6492e        gerritcodereview/gerrit   "/bin/sh -c 'git con…"   3 minutes ago       Up 3 minutes        0.0.0.0:8080->8080/tcp, 0.0.0.0:29418->29418/tcp   nifty_einstein
# check http://localhost:8080

# create user, repo, password
admin
4zZvLiKKHWOvMBeRWZwUR5ls0SpPbgphEpyT1K3KLQ
ubuntu@ip-172-31-15-176:~$ git clone "http://admin@localhost:8080/a/test"
Cloning into 'test'...
Password for 'http://admin@localhost:8080': 
remote: Counting objects: 2, done
remote: Finding sources: 100% (2/2)
Unpacking objects: 100% (2/2), done.
remote: Total 2 (delta 0), reused 0 (delta 0)
Checking connectivity... done.


DI 1: Kubernetes Gerrit Deployment - no HELM


DI 2: Helm Gerrit Deployment

DI 3: Gerrit Replication

https://gerrit.googlesource.com/plugins/replication/+doc/master/src/main/resources/Documentation/config.md

# add the remote host key to known_hosts (an initial ssh connection records it)
ubuntu@ip-172-31-15-176:~$ sudo ssh -i ~/.ssh/onap_rsa ubuntu@gerrit2.ons.zone



bash-4.2$ cat /var/gerrit/etc/gerrit.config
[gerrit]
	basePath = git
	serverId = 872dafaa-3220-4d2c-8f14-a191eec43a56
	canonicalWebUrl = http://487707f31650
[database]
	type = h2
	database = db/ReviewDB
[index]
	type = LUCENE
[auth]
	type = DEVELOPMENT_BECOME_ANY_ACCOUNT
[sendemail]
	smtpServer = localhost
[sshd]
	listenAddress = *:29418
[httpd]
	listenUrl = http://*:8080/
	filterClass = com.googlesource.gerrit.plugins.ootb.FirstTimeRedirect
	firstTimeRedirectUrl = /login/%23%2F?account_id=1000000
[cache]
	directory = cache
[plugins]
	allowRemoteAdmin = true
[container]
	javaOptions = "-Dflogger.backend_factory=com.google.common.flogger.backend.log4j.Log4jBackendFactory#getInstance"
	javaOptions = "-Dflogger.logging_context=com.google.gerrit.server.logging.LoggingContext#getInstance"
	user = gerrit
	javaHome = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre
	javaOptions = -Djava.security.egd=file:/dev/./urandom
[receive]
	enableSignedPush = false
[noteDb "changes"]
	autoMigrate = true


Added the following remotes to the replication configuration:
[remote "gerrit.ons.zone"]
    url = admin@gerrit.ons.zone:/some/path/test.git
[remote "pubmirror"]
    url = gerrit.ons.zone:/pub/git/test.git
    push = +refs/heads/*:refs/heads/*
    push = +refs/tags/*:refs/tags/*
    threads = 3
    authGroup = Public Mirror Group
    authGroup = Second Public Mirror Group
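Per the replication plugin documentation linked above, these remote sections belong in $site_path/etc/replication.config (alongside gerrit.config); a minimal single-remote sketch reusing the first remote from above:

```
# etc/replication.config
[remote "gerrit.ons.zone"]
    url = admin@gerrit.ons.zone:/some/path/test.git
    push = +refs/heads/*:refs/heads/*
    push = +refs/tags/*:refs/tags/*
```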


