Manual mount volume
Persistence: manually add the volume section to the deployment (NFS mode).

```yaml
spec:
  containers:
  - image: hub.baidubce.com/duanshuaixing/tools:v3
    imagePullPolicy: IfNotPresent
    name: test-volume
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /root/
      name: nfs-test
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
  volumes:
  - name: nfs-test
    nfs:
      path: /dockerdata-nfs/test-volume/
      server: 10.0.0.7
```
- Restart the node to check the NFS automount

Restart the node and check whether the NFS client re-mounts the share automatically; if it does not, mount it manually.

```shell
df -Th | grep nfs
sudo mount $MASTER_IP:/dockerdata-nfs /dockerdata-nfs/
```
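The two commands above can be combined into a small check-and-remount sketch; `MASTER_IP` here is an assumption taken from the NFS server address in the volume spec above (10.0.0.7):

```shell
# Sketch: after a node restart, remount the NFS share only if it did not come back.
MASTER_IP=10.0.0.7   # assumption: NFS server address from the volume spec above
if ! df -Th | grep -q nfs; then
  sudo mount "${MASTER_IP}:/dockerdata-nfs" /dockerdata-nfs/
fi
```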
Reinstall One Project
1. Delete a module (take `so` as an example):

```shell
helm delete dev-so --purge
```

2. If the delete fails, manually delete the leftover pvc, pv, deployment, configmap, statefulset, and job resources.

3. Install a module:

```shell
cd oom/kubernetes
make so
make onap
helm install local/so --namespace onap --name dev-so
```

or, when using a docker proxy repository:

```shell
helm install local/so --namespace onap --name dev-so --set global.repository=172.30.1.66:10001
```

To use a proxy repository and also set the image pull policy for the module:

```shell
helm install local/so --namespace onap --name dev-so --set global.repository=172.30.1.66:10001 --set so.pullPolicy=IfNotPresent
```

4. Clear the /dockerdata-nfs/dev-so directory (it can be moved to a /bak directory instead of being deleted).
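The manual cleanup in step 2 can be sketched with a label selector. The `release=dev-so` label is an assumption (OOM charts commonly label resources with the release name), so confirm it matches with `kubectl get ... --show-labels` before deleting:

```shell
# Hedged sketch: remove resources left behind by a failed helm delete.
# The label selector is an assumption; verify it before running the deletes.
kubectl delete deployment,statefulset,job,configmap,pvc -n onap -l release=dev-so
kubectl delete pv -l release=dev-so   # PVs are cluster-scoped, so no namespace
```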
- Helm has no `deploy` parameter

If helm reports that `deploy` is not a known command, copy the OOM helm plugins into the helm home directory:

```shell
cp -R ~/oom/kubernetes/helm/plugins/ ~/.helm/
```
Helm list shows no release

```shell
cp /root/oom/kubernetes/onap/values.yaml /root/integration-override.yaml
helm deploy dev local/onap -f /root/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f /root/integration-override.yaml --namespace onap --verbose
```
- Force delete all pods

```shell
kubectl delete pod $(kubectl get pod -n onap --no-headers | awk '{print $1}') -n onap --grace-period=0 --force
```

- Copy file to pod
When copying from the local machine to a pod, specifying the destination path can be a problem. This can be worked around temporarily by installing the lrzsz tools in the container, or by running `docker cp` on the node that hosts the pod.
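Another option is `kubectl cp`, which copies between the local filesystem and a container; the pod name below is a placeholder:

```shell
# Hedged sketch: copy a file into and out of a pod with kubectl cp.
POD=dev-so-openstack-adapter-0        # placeholder pod name
# local -> pod (path after the colon is inside the container)
kubectl cp /tmp/local-file "onap/${POD}:/tmp/local-file"
# pod -> local
kubectl cp "onap/${POD}:/tmp/remote-file" /tmp/remote-file
```

Note that `kubectl cp` needs a `tar` binary inside the container, which is one reason the lrzsz / `docker cp` workaround is sometimes required.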
Check the port exposed by the pod
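No commands survive under this heading; a plausible sketch, with placeholder names, is to inspect the service and the pod's container ports:

```shell
# Hedged sketch: inspect the ports a pod and its service expose.
POD=dev-so-xxxx                                  # placeholder pod name
kubectl get svc -n onap -o wide                  # service ports and NodePorts
kubectl get pod "$POD" -n onap -o jsonpath='{.spec.containers[*].ports}'
kubectl describe pod "$POD" -n onap | grep -i port
```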