Here's a rough summary of the sequence of operations between DCM, rsync, and the clusters:

  1. Create/apply the CSR (like other resources)
  2. Approve the CSR (new, via the /subresources/approval endpoint; see the client-go sketch after this list)
    1. The K8s signer will issue a certificate some time after the CSR is approved
  3. Watch/monitor the CSR to see when a .status is populated
  4. Return the signed certificate obtained from the CSR's .status.certificate all the way back to etcd
  5. DCM reads the certificate from etcd
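
For illustration only, here is a minimal client-go sketch of steps 2-3: approving the CSR through the approval subresource and polling until the signer populates .status.certificate. The kubeconfig path and CSR name are placeholders, not the actual values used by DCM/rsync.

package main

import (
	"context"
	"fmt"
	"time"

	certv1 "k8s.io/api/certificates/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// approveAndWait approves a CSR via the approval subresource
// (PUT .../certificatesigningrequests/<name>/approval) and then polls
// until the K8s signer fills in .status.certificate.
func approveAndWait(ctx context.Context, cs kubernetes.Interface, name string) ([]byte, error) {
	csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}

	// Step 2: mark the CSR approved and push that through the subresource.
	csr.Status.Conditions = append(csr.Status.Conditions, certv1.CertificateSigningRequestCondition{
		Type:    certv1.CertificateApproved,
		Status:  corev1.ConditionTrue,
		Reason:  "DCMApproval",          // illustrative reason string
		Message: "approved for logical cloud user",
	})
	if _, err := cs.CertificatesV1().CertificateSigningRequests().UpdateApproval(ctx, name, csr, metav1.UpdateOptions{}); err != nil {
		return nil, err
	}

	// Step 3: poll until the signer populates .status.certificate.
	for {
		csr, err = cs.CertificatesV1().CertificateSigningRequests().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		if len(csr.Status.Certificate) > 0 {
			return csr.Status.Certificate, nil // PEM-encoded signed certificate
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/cluster-kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cert, err := approveAndWait(context.Background(), kubernetes.NewForConfigOrDie(cfg), "lc1-user-csr") // hypothetical CSR name
	if err != nil {
		panic(err)
	}
	fmt.Printf("signed cert:\n%s\n", cert)
}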

See also: Sequence Diagram


With regard to DCM obtaining the signed user certificate for each cluster (mostly point #5 above), for now this is based on lazy-loading the certificates from etcd into MongoDB whenever the user requests that a kubeconfig be generated for a logical cloud cluster.

Thus, when retrieving the kubeconfig for a particular logical cloud cluster (after the user/client requests a kubeconfig via the DCM API), there are 3 possible states, each with its own action (a handler sketch follows the list):

  1. If rsync has already written the signed certificate to etcd, DCM copies the certificate to MongoDB, generates the kubeconfig, and replies with the generated kubeconfig and HTTP 200
  2. If rsync has not yet written the signed certificate, DCM does not generate a kubeconfig and instead responds with HTTP 202, informing the client that the request was accepted but the data is not ready yet
    • The client should repeat the request until it reaches state #1
  3. If the client asks for the kubeconfig again later, DCM simply re-generates the kubeconfig using the certificate already in MongoDB, skipping etcd altogether (so this is faster than when lazy-loading takes place), and returns it
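
A minimal Go sketch of this three-state logic is below. The storage helpers (certFromMongo, certFromEtcd, storeCertInMongo, generateKubeconfig) and the route/port are hypothetical names for illustration, not DCM's actual code.

package main

import "net/http"

// Hypothetical storage helpers; DCM's real code differs.
func certFromMongo(cluster string) ([]byte, bool)  { return nil, false }
func certFromEtcd(cluster string) ([]byte, bool)   { return nil, false }
func storeCertInMongo(cluster string, cert []byte) {}
func generateKubeconfig(cluster string, cert []byte) []byte {
	return []byte("kubeconfig for " + cluster + "\n")
}

func kubeconfigHandler(w http.ResponseWriter, r *http.Request) {
	cluster := r.URL.Query().Get("cluster") // placeholder request parsing

	// State 3: certificate already lazy-loaded into MongoDB; skip etcd.
	if cert, ok := certFromMongo(cluster); ok {
		w.WriteHeader(http.StatusOK)
		w.Write(generateKubeconfig(cluster, cert))
		return
	}

	// State 1: rsync has written the certificate to etcd; lazy-load it,
	// then generate and return the kubeconfig.
	if cert, ok := certFromEtcd(cluster); ok {
		storeCertInMongo(cluster, cert)
		w.WriteHeader(http.StatusOK)
		w.Write(generateKubeconfig(cluster, cert))
		return
	}

	// State 2: certificate not ready yet; client should retry later.
	w.WriteHeader(http.StatusAccepted) // HTTP 202
	w.Write([]byte("certificate not ready yet; retry later\n"))
}

func main() {
	http.HandleFunc("/kubeconfig", kubeconfigHandler) // illustrative route
	http.ListenAndServe(":8080", nil)                 // placeholder port
}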

Security considerations:

  • The user's existing private key (originally used to sign the CSRs for each cluster) was previously generated locally by DCM and is stored in MongoDB, and only in MongoDB
  • The private key does not leave MongoDB until the kubeconfig has been fully generated and returned to the user/client as an HTTP response (the private key appears in the kubeconfig's client-key-data field as a base64-encoded string; see the sketch after this list)
  • The certificates issued by the clusters are transported from each cluster, to rsync, and back to DCM over etcd; DCM then stores them in MongoDB as well
  • A high-security implementation could leverage the Trusted Platform Module (TPM) present in most edge deployments to store the private key and sign/encrypt data without ever exposing the key. However, it's unclear what this would look like when applied to a kubeconfig.
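
To make the client-key-data point concrete, here is a hedged Go sketch of kubeconfig assembly using client-go's clientcmd/api package; clientcmd.Write serializes the []byte key/cert fields as base64 strings. The server URL and PEM values are placeholders, not DCM's actual data.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// buildKubeconfig assembles a kubeconfig for one logical cloud cluster.
// clientcmd.Write base64-encodes ClientKeyData into the kubeconfig's
// client-key-data field, which is the point at which the private key
// leaves MongoDB and travels back to the user.
func buildKubeconfig(server string, caCert, userCert, userKey []byte) ([]byte, error) {
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["cluster"] = &clientcmdapi.Cluster{
		Server:                   server,
		CertificateAuthorityData: caCert,
	}
	cfg.AuthInfos["user"] = &clientcmdapi.AuthInfo{
		ClientCertificateData: userCert, // signed cert from the CSR flow
		ClientKeyData:         userKey,  // private key read from MongoDB
	}
	cfg.Contexts["default"] = &clientcmdapi.Context{Cluster: "cluster", AuthInfo: "user"}
	cfg.CurrentContext = "default"
	return clientcmd.Write(*cfg)
}

func main() {
	kc, err := buildKubeconfig("https://cluster.example:6443", // placeholder server
		[]byte("CA-PEM"), []byte("CERT-PEM"), []byte("KEY-PEM")) // placeholder PEM bytes
	if err != nil {
		panic(err)
	}
	fmt.Println(string(kc))
}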


Currently, DCM expects the user certificate (in base64) to be stored by rsync in the following etcd AppContext path (lc1 is the name of the logical cloud):

/context/<ID>/app/logical-cloud/cluster/<cluster-reference>/resource/lc1+cert/

However, this is likely to change depending on the non-monitor rsync side of the implementation. Also, this conflates K8s resources with non-K8s resources (lc1+cert above is a raw certificate, not a K8s resource).
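
For reference, a bare etcd clientv3 sketch of reading that path is shown below. DCM actually goes through the AppContext library rather than raw etcd, and the endpoint, <ID>, and <cluster-reference> values here are made-up examples.

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // placeholder etcd endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// AppContext path from above, with example values substituted for
	// <ID> and <cluster-reference>.
	key := "/context/1234/app/logical-cloud/cluster/edge01/resource/lc1+cert/"

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	resp, err := cli.Get(ctx, key)
	if err != nil {
		panic(err)
	}
	if len(resp.Kvs) == 0 {
		fmt.Println("certificate not yet written by rsync (the HTTP 202 case)")
		return
	}
	fmt.Printf("base64 certificate: %s\n", resp.Kvs[0].Value)
}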

Additional decisions/clarifications about DCM's current approach for obtaining the certificates:

  • Rsync will use client-go to request CSR approval after iterating through /subresources (just /subresources/approval for now)
  • New monitor code watches CSRs and copies the issued certificates into the ResourceBundleState (see the sketch after this list)
  • DCM lazy-checks for the certificates in etcd starting the 1st time they are needed (i.e., when a kubeconfig is requested), as explained above
    • Certs not yet in etcd = the Logical Cloud is still applying
    • Certs already in etcd = the Logical Cloud is applied; everything is stored in MongoDB and DCM is ready to return kubeconfigs
      • No more lazy-checks are done after this point, because all the needed info is now in MongoDB
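
A rough sketch of the monitor-side watch described above follows; the copy into the ResourceBundleState CR is stubbed out, since its schema is not reproduced here.

package main

import (
	"context"
	"fmt"

	certv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// copyToResourceBundleState is a stub: the real monitor updates the
// ResourceBundleState CR, whose schema is not shown on this page.
func copyToResourceBundleState(name string, cert []byte) {
	fmt.Printf("would record cert for CSR %s in ResourceBundleState\n", name)
}

func main() {
	cfg, err := rest.InClusterConfig() // monitor runs inside the cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch CSRs; when the signer populates .status.certificate,
	// copy it out (roughly what the new monitor code does).
	w, err := cs.CertificatesV1().CertificateSigningRequests().Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		csr, ok := ev.Object.(*certv1.CertificateSigningRequest)
		if !ok {
			continue
		}
		if len(csr.Status.Certificate) > 0 {
			copyToResourceBundleState(csr.Name, csr.Status.Certificate)
		}
	}
}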



With regard to the monitor work:

To see what the monitor sees (substitute the name of your monitor pod):

kubectl logs monitor-755db946d8-n2w2m

To check the resourcebundlestate:

kubectl get resourcebundlestate
