Introduction
This tutorial explains how to set up a local Kubernetes cluster and a minimal Helm setup to run and deploy SDC on a single host (the approach can be extended to several or all ONAP components).
...
This is obviously not meant for production, and the tweaks made to the OOM/K8s setup are likely to evolve with future releases.
Minimum Requirements
- One VM running Ubuntu 20.04 LTS (should also work on 18.04), with internet access to download charts/containers and the OOM repo
- Root/sudo privileges
- Sufficient RAM, depending on how many components you want to deploy
- Around 20G of RAM allows for a few components; the minimal setup for SDC requires enabling:
  - Shared Cassandra
  - AAF
  - Portal (if you need UI access)
  - SDC
- This was tested with a large VM (128G of RAM, 12 vCPUs) running most of the components during Honolulu development.
- This was also tested with a small VM to run a few components on a local laptop (you need enough RAM to create a 20G RAM VM using VirtualBox, VMware, ...)
- Around 160G of available storage should be sufficient; it mostly depends on how many components you enable in the OOM charts.
- Storage is required mostly to store container images (a quick way to check memory and disk is sketched just after this list).
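For instance, a minimal way to confirm the VM has the memory and disk headroom described above, assuming a standard Ubuntu install:
Code Block
# check available RAM (look at the 'available' column)
free -h
# check free disk space on the root filesystem
df -h /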
Overall Procedure
- Install/upgrade MicroK8s with the appropriate version
- Install/upgrade Helm with the appropriate version
- Tweak MicroK8s
- Download the OOM repo
- Install the needed Helm plugins
- Install Docker (now needed to build the OOM charts)
- Install ChartMuseum as a local Helm repo
- Build all OOM charts and store them in the chart repo
- Tweak the OOM override file to fine-tune the deployment based on your VM capacity and component needs
- Deploy the charts
- Enable UI access
1) Install/Upgrade Microk8s with appropriate version
Why MicroK8s?
MicroK8s is a bundled, lightweight version of Kubernetes maintained by Canonical. It has the advantage of being well integrated with snap on Ubuntu, which makes it very easy to manage, upgrade and work with.
...
You need to select the appropriate version to install; to see all available versions, run:
Code Block
sudo snap info microk8s
This tutorial is focused on the Honolulu release, so we will use k8s version 1.19; to do so, you just need to select the appropriate channel:
Code Block
sudo snap install microk8s --classic --channel=1.19/stable
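If MicroK8s is already installed from a different channel, a snap refresh should let you switch to the desired version instead of reinstalling (a sketch, in case you need it):
Code Block
# switch an existing MicroK8s install to the 1.19 channel
sudo snap refresh microk8s --channel=1.19/stable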
You may need to change your firewall configuration to allow pod-to-pod communication and pod-to-internet communication:
Code Block
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
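If you want to double-check what was applied (assuming ufw is enabled on your VM), you can print the active rules and default policies:
Code Block
# show current ufw rules and default policies
sudo ufw status verbose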
Addons?
MicroK8s is lightweight but comes with several optional addons; OOM and ONAP require only a few to be enabled, and you can choose to enable more if you want.
DNS addon: we need the DNS addon so that pods can 'see' each other by host name.
Storage addon: we will enable the default host storage class; this allows local volume storage that is used by some pods to exchange folders between containers.
Code Block
microk8s enable dns storage
That's it: you should have a running k8s cluster, ready to host ONAP pods (a quick sanity check is shown after the command list below).
I recommend getting familiar with MicroK8s; here are a few useful commands, but you can read more on the MicroK8s website:
- microk8s status: Provides an overview of the MicroK8s state (running / not running) as well as the set of enabled addons
- microk8s enable: Enables an addon
- microk8s disable: Disables an addon
- microk8s kubectl: Interact with kubernetes
- microk8s config: Shows the kubernetes config file
- microk8s inspect: Performs a quick inspection of the MicroK8s installation
- microk8s reset: Resets the infrastructure to a clean state → very useful for a dev lab
- microk8s stop: Stops all kubernetes services
- microk8s start: Starts MicroK8s after it has been stopped
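As a quick sanity check, the following should report the cluster as running with the dns and storage addons enabled, and show the node as Ready (a minimal sketch using the commands above):
Code Block
# wait until MicroK8s reports itself ready, then list the node
microk8s status --wait-ready
microk8s kubectl get nodes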
2) Install/upgrade Helm with appropriate version
Helm is the package manager for k8s. We require a specific version for each ONAP release; the best is to look at the OOM guides to see which one is required (link to add).
For the Honolulu release we need Helm 3. A significant improvement with Helm 3 is that it does not require a specific pod running in the Kubernetes cluster (no more Tiller pod).
As Helm is self-contained, it's pretty straightforward to install or upgrade.
I recommend installing helm in the local bin folder and pointing to it through a symlink; this way it's easy to switch between versions if you need to.
Code Block
wget https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz
tar -zxvf helm-v3.5.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm-v3.5.2
# remove any existing link, then point /usr/local/bin/helm to this version
sudo rm -f /usr/local/bin/helm
sudo ln -s /usr/local/bin/helm-v3.5.2 /usr/local/bin/helm
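To confirm the symlink points at the right binary, you can check the reported client version (this does not need the cluster yet):
Code Block
# should report client version v3.5.2
helm version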
3) Tweak Microk8s
The tweaks below are not strictly necessary, but they help make the setup simpler and more flexible.
A) Increase the max number of pods
As ONAP may deploy a significant number of pods, we need to tell kubelet to allow more than the default configuration (since we plan an all-in-one-box setup); if you only plan to run a limited number of components, this is not needed.
To change the max number of pods, we need to add a parameter to the kubelet startup arguments.
Edit the file located at:
Code Block
/var/snap/microk8s/current/args/kubelet
Add the following line at the end:
Code Block
--max-pods=250
Save the file and restart kubelet to apply the change:
Code Block
sudo service snap.microk8s.daemon-kubelet restart
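To check that the new limit was taken into account, you can look at the pod capacity reported for the node; the pods lines should now show 250 (a sketch using the bundled kubectl):
Code Block
# the Capacity/Allocatable sections should report pods: 250
microk8s kubectl describe node | grep pods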
B) Run a local copy of kubectl
MicroK8s comes bundled with kubectl; you can interact with it by doing:
Code Block
microk8s kubectl describe node
To make things simpler, let's install a local copy of kubectl so we can interact with the Kubernetes cluster in a more straightforward way.
We need kubectl 1.19 to match the cluster we have installed:
Code Block
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.7/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
If you want, you can use the same symlink trick as for helm (see above); this would allow you to switch between kubectl versions if needed, as sketched below.
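A minimal sketch of that variant, replacing the last line of the block above (the versioned file name is just an example):
Code Block
# keep the binary under a versioned name and point a symlink at it
sudo mv ./kubectl /usr/local/bin/kubectl-v1.19.7
sudo rm -f /usr/local/bin/kubectl
sudo ln -s /usr/local/bin/kubectl-v1.19.7 /usr/local/bin/kubectl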
Now we need to provide kubectl with a proper config file so that it can access the cluster; MicroK8s allows you to retrieve the cluster config very easily.
Simply create a .kube folder in your home directory and dump the config there:
Code Block
cd
mkdir .kube
cd .kube
microk8s.config > config
chmod 700 config
The last line is there to avoid helm complaining about the permissions being too open.
You should now have helm and kubectl ready to talk to the cluster; you can verify this by trying:
Code Block
kubectl version
This should output both the local client and the server version:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.7-34+02d22c9f4fb254", GitCommit:"02d22c9f4fb2545422b2b28e2152b1788fc27c2f", GitTreeState:"clean", BuildDate:"2021-02-11T20:13:16Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}