SDN-C Site Recovery
Casablanca
Support for failover in catastrophic situations was first available in Casablanca.
Overview
After a geo-redundant site has failed entirely and a failover activity has been completed, the original site may be recovered and joined back into the SDN-C deployment using this procedure.
Procedure
This procedure is intended for lab systems. Inconsistencies may remain after recovery, so check the site roles and health afterwards (see the verification sketch at the end of this procedure) to confirm that everything is working correctly.
In an ONAP lab environment, to bring both sites back into a geo-redundant pair of two clusters, run the Helm upgrade on both sites with geoEnabled=true:
Helm upgrade
helm upgrade --set sdnc.config.geoEnabled=true --recreate-pods dev local/onap --namespace onap
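Once the upgrade has been submitted on both sites, it can be useful to confirm that the release was updated and that the SDN-C pods were recreated. A minimal check, assuming the release name dev and the onap namespace used in the command above:
Verify upgrade
# Confirm the release upgraded successfully (release name "dev" as above)
helm status dev
# Watch the SDN-C pods come back after --recreate-pods
kubectl get pods --namespace onap | grep sdnc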
On the primary Kubernetes master, make the local site active:
sdnc.makeActive
ubuntu@k8s-master:~/oom/kubernetes/sdnc/resources/geo/bin$ ./sdnc.makeActive dev
Forcing prom site sdnc01 to become active
prom site sdnc01 should now be active
On the primary Kubernetes master, switch voting to the local site:
switchVoting.sh
ubuntu@k8s-master:~/oom/kubernetes/sdnc/resources/geo/bin$ ./switchVoting.sh primary
success
ubuntu@k8s-master:~/oom/kubernetes/sdnc/resources/geo/bin$
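With both sites upgraded and the local site made active, verify the site roles and overall health as noted at the start of this procedure. A minimal sketch, assuming the onap namespace and that the prom pod name contains the string "prom" (adjust the names to your deployment):
Check site health
# All SDN-C related pods should be Running and Ready on both sites
kubectl get pods --namespace onap | grep sdnc
# The prom pod logs should show the active/standby decisions for each site
kubectl logs --namespace onap $(kubectl get pods --namespace onap | grep prom | awk '{print $1}' | head -1)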
Troubleshooting
After the upgrade, there may be issues that need to be manually resolved on the site that suffered the catastrophic failure.
Null MUSIC Pointer
Null pointers may end up in the replicas table in the MUSIC cluster. If this occurs, they should be deleted.
Remove replica information from the MUSIC database:
Remove replica data
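The exact statement is deployment specific; the sketch below is only an illustration. It assumes that MUSIC's data is stored in its backing Cassandra instance (reachable at the host named by Values.config.musicLocation, see the note below) and that the keyspace and key column names are looked up first; none of these names come from the original text.
# Connect to the Cassandra instance backing MUSIC (host from Values.config.musicLocation)
cqlsh <MUSIC server>
# Inspect the replicas table and delete the null rows (keyspace and key column are deployment specific)
SELECT * FROM <prom keyspace>.replicas;
DELETE FROM <prom keyspace>.replicas WHERE <key column> = '<key of the null row>';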
Note: The MUSIC server location can be found in oom/kubernetes/sdnc/charts/prom/values.yaml under Values.config.musicLocation.
Then delete the PROM pod, which will result in Kubernetes recreating it:
Delete PROM pod
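A minimal example, assuming the onap namespace used earlier; the actual prom pod name must be looked up first:
# Look up the prom pod name, then delete it; Kubernetes will recreate it
kubectl get pods --namespace onap | grep prom
kubectl delete pod <prom pod name> --namespace onap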
Consul Server Entry
The new Consul server may still have a stale entry for the previous instance of the Consul pod.
Delete the Consul pod, which will result in Kubernetes recreating it:
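As with the PROM pod, a minimal example assuming the onap namespace; the actual Consul pod name must be looked up first:
Delete Consul pod
# Look up the Consul pod name, then delete it; Kubernetes will recreate it
kubectl get pods --namespace onap | grep consul
kubectl delete pod <consul pod name> --namespace onap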