Note: this page is based on the SDNC DB (MySQL) Clustered Deployment wiki page created by Rahul Sharma.
What's Desired
To replicate the MySQL database within APPC, we want to:
- Have a Master and at least 1 Slave.
- Reuse built-in MySQL data replication for replicating data between the Master and the Slaves.
- Use the Kubernetes scaling mechanism to scale the pods.
How Kubernetes made Clustered Deployments Possible
- StatefulSet:
  - Used to manage stateful applications.
  - Guarantees a fixed ordinal index per Pod.
  - Combined with a headless Service, each Pod is registered in DNS under its own unique FQDN; this makes it possible for other Pods to find a Pod even after a restart (albeit with a different IP address). A minimal sketch of the headless Service and StatefulSet follows this list.
- Since we were using a single Kubernetes VM, dynamically provisioning volumes on the VM's local store for newly spun-up Slaves was not straightforward (there is no built-in support for this). However, Kubernetes supports writing external provisioners, which did the job for us. This provisioner creates a virtual NFS server on top of the local store. The nfs-provisioner instance watches for PersistentVolumeClaims that ask for its StorageClass and automatically creates NFS-backed PersistentVolumes for them.
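A minimal sketch of how these pieces fit together, assuming illustrative names (a headless Service and StatefulSet both called mysql; the actual APPC charts may differ):

```yaml
# Headless Service: no virtual IP (clusterIP: None); instead each Pod gets
# its own stable DNS entry, e.g. mysql-0.mysql, mysql-1.mysql, ...
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
  - name: mysql
    port: 3306
---
# StatefulSet: guarantees the fixed ordinal per Pod (mysql-0, mysql-1, ...)
# and re-attaches each Pod to its own volume after a restart.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql        # ties the Pods' DNS names to the headless Service
  replicas: 2               # mysql-0 acts as Master, the rest as Slaves
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
```

With serviceName set, the Pods stay resolvable as mysql-0.mysql, mysql-1.mysql, and so on, regardless of the IP addresses they receive after a restart.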
We used the Kubernetes replicated stateful application example (https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/) to replicate the MySQL server; it was modified to suit the needs of the APPC DB.
Internals
For each MySQL Pod, two init containers and two containers are spawned (an init-mysql sketch follows this list):
- Two init containers:
  - init-mysql:
    - Generates the special MySQL config files based on the Pod's ordinal index (the ordinal index is saved in server-id.cnf).
    - Uses a ConfigMap to copy the master.cnf/slave.cnf files into the conf.d directory.
  - clone-mysql:
    - Performs a hot backup of the data from the previous MySQL Pod, powered by Percona XtraBackup (https://www.percona.com/software/mysql-database/percona-xtrabackup).
- Two containers:
  - mysqld:
    - The MySQL server itself.
  - xtrabackup sidecar:
    - Handles all of the replication (via binlogs) between this server and the Master.
    - Handles requests from other Pods for data cloning.
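For illustration, the init-mysql step in the upstream Kubernetes example runs a script along these lines (the paths and the server-id offset of 100 are from that example; the APPC variant may differ):

```bash
# Derive the ordinal index from the Pod's hostname (e.g. mysql-2 -> 2).
[[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
# Persist a unique server-id for this Pod (0 is reserved, so offset by 100).
echo [mysqld] > /mnt/conf.d/server-id.cnf
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Ordinal 0 is the Master; every other Pod gets the Slave config.
if [[ $ordinal -eq 0 ]]; then
  cp /mnt/config-map/master.cnf /mnt/conf.d/
else
  cp /mnt/config-map/slave.cnf /mnt/conf.d/
fi
```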
Two Services are created:
- DBHost should be used for any write operation (writes go to the Master).
- DBHost-Read should be used for read operations (queries can be served by the Slaves as well as the Master). A sketch of both Services follows.
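A sketch of the two Services, assuming the StatefulSet sketch above; the Service names are illustrative, while statefulset.kubernetes.io/pod-name is the label the StatefulSet controller adds to each Pod:

```yaml
# Write Service: selects only the Master Pod (ordinal 0).
apiVersion: v1
kind: Service
metadata:
  name: dbhost
spec:
  selector:
    app: mysql
    statefulset.kubernetes.io/pod-name: mysql-0
  ports:
  - port: 3306
---
# Read Service: selects every MySQL Pod, so reads are spread
# across the Master and all Slaves.
apiVersion: v1
kind: Service
metadata:
  name: dbhost-read
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
```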
As mentioned above, nfs-provisioner is used to dynamically satisfy each new PersistentVolumeClaim, which enables dynamic scaling of Slaves.
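A sketch of the storage wiring, assuming the nfs-provisioner registers under the provisioner name example.com/nfs (names and sizes are illustrative):

```yaml
# StorageClass served by the external nfs-provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/nfs
---
# volumeClaimTemplates block (placed under the StatefulSet spec): every new
# Slave Pod gets its own PersistentVolumeClaim, which the provisioner
# satisfies with a freshly created NFS-backed PersistentVolume.
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: example-nfs
    resources:
      requests:
        storage: 10Gi
```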
Master Failure
Unfortunately, if the Master fails, we need to write a script (or an application) to promote one of the Slaves to Master and instruct the remaining Slaves and the applications to switch to the new Master. You can see more details here.
The other option is to use GTID-based replication.
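For illustration only, a manual promotion could look roughly like this, assuming classic binlog-position replication and the illustrative Pod/Service names used above (the replication user, password, and binlog coordinates are placeholders):

```bash
# 1. Promote the most up-to-date Slave (here mysql-1): stop replication,
#    forget the old Master, and allow writes.
kubectl exec mysql-1 -c mysql -- \
  mysql -e "STOP SLAVE; RESET SLAVE ALL; SET GLOBAL read_only = OFF;"

# 2. Record the promoted server's binlog coordinates (File / Position).
kubectl exec mysql-1 -c mysql -- mysql -e "SHOW MASTER STATUS\G"

# 3. Re-point every remaining Slave at the promoted server, using the
#    coordinates noted in step 2.
kubectl exec mysql-2 -c mysql -- mysql -e "
  STOP SLAVE;
  CHANGE MASTER TO
    MASTER_HOST='mysql-1.mysql',
    MASTER_USER='repl', MASTER_PASSWORD='<password>',
    MASTER_LOG_FILE='<File>', MASTER_LOG_POS=<Position>;
  START SLAVE;"

# 4. Switch the write Service to the new Master.
kubectl patch service dbhost -p \
  '{"spec":{"selector":{"app":"mysql","statefulset.kubernetes.io/pod-name":"mysql-1"}}}'
```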
Advantages
- Can have multiple Slaves with a single Master server.
- Allows scaling Slaves dynamically.
- All writes go to the Master, but reads can be served by the Slaves as well; hence the 'DBHost-Read' Service was introduced, which clients should use for data-fetch operations.
- For any write operation, the write Service DBHost can be used.
- Once a Slave has been replicated from the Master, that Slave is used to seed any new Slave; this keeps the additional load on the Master low.
Examples:
Run a MySQL client to create the database and a table via the DBHost Service, then fetch the data using the DBHost-Read Service:
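A sketch of such a session, assuming the illustrative Service names dbhost/dbhost-read and a throwaway mysql:5.7 client Pod:

```bash
# Write through DBHost (reaches the Master).
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h dbhost -e "CREATE DATABASE test;
    CREATE TABLE test.messages (message VARCHAR(250));
    INSERT INTO test.messages VALUES ('hello');"

# Read back through DBHost-Read (may be served by any Slave or the Master).
kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- \
  mysql -h dbhost-read -e "SELECT * FROM test.messages;"
```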
To demonstrate that DBHost-Read distributes queries across the Slaves, watch the server ID change in the responses:
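For example, a loop like the one in the upstream example (Service name adapted) prints a changing @@server_id as different Pods answer:

```bash
kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never -- \
  bash -ic "while sleep 1; do mysql -h dbhost-read -e 'SELECT @@server_id, NOW()'; done"
```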
MySQL can be scaled (up or down) dynamically:
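Scaling is a single kubectl command against the StatefulSet (name assumed to be mysql):

```bash
# Add Slaves: each new Pod clones from the previous Pod and starts replicating.
kubectl scale statefulset mysql --replicas=5

# Remove Slaves again (their PVCs are kept unless deleted explicitly).
kubectl scale statefulset mysql --replicas=3
```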