
Kubernetes Cluster Setup with Kubeadm

Updated: Dec 21, 2020


OK, let me start by saying I'm pretty new to Kubernetes, containers, and honestly just plain Linux administration too. So when Pure Storage announced plans to acquire Portworx, the leading Kubernetes data services platform, I knew it was time to hit the lab. Now that you know I only started this journey less than three months ago, approach this post not as the writings of an expert, but as those of a student. I'm going to share what I know based on where I am. There are likely to be mistakes, inefficiencies, and poor assumptions; I'm learning. If you have feedback to share, I want to hear it (as long as it's shared with kindness, please). Reply here or find me @jdwallace.


Where to Start / Why Kubeadm

The first thing I discovered when starting out is that it's incredibly challenging to know where to start. I've tried Rancher, kind, minikube, Kubespray, and Kubeadm so far, and each has its place. Where I've landed for now is Kubeadm, for a few reasons.

  • I tried Rancher first. It's attractive because it has a nice graphical UI, which is approachable for folks who are new to all this. Ultimately, however, I determined it was a little too specialized to help me learn the basics without getting distracted by the Rancher bits.

  • Some tools (kind and minikube) seem to be tailored for developers who want to "simulate" a cluster on a laptop for dev purposes. I'm more interested in learning about the infrastructure than testing code, and I have a home lab with plenty of resources.

  • Kubespray is very powerful for automating a deployment, and ensuring consistency and repeatability. I expect that eventually this will become my primary method, but for now it has dependencies on a few other tools I'm not super familiar with yet, and it's so automated that it takes a little bit of the learning out of my hands.

  • Kubeadm is easy to use and results in a clean, multi-node, "production"-quality cluster without a lot of extra stuff. It's also really easy to see how and where to make changes if I want to try something different. It's perfect for where I am right now, and it's what I will document in this post.

Prereqs

The requirements here are pretty basic. I have a vSphere home lab with 3 ESXi hosts, and I'm just going to be using local storage on each node for now. I believe this walkthrough should be easily adaptable to any hypervisor, or even a bare-metal setup, powerful enough to support 4 Linux VMs with 2-4 CPUs, 4GB of memory, and about 60GB of storage each.


Creating a Linux VM Template

Having a vSphere VM template that I can use to quickly crank out 4 identical Ubuntu VMs, with most of the basic server setup already handled, is what really streamlines this process. This is already expertly documented by Myles Gray over on his blog, blah.cloud, so just go follow his steps and come back. I will point out that while Myles' writeup is based on Ubuntu 18.04 LTS, I've followed the exact same procedure with Ubuntu 20.04 LTS with great results.


Additionally, I will be using the term "controller" for my controller node and not "master." When following Myles' instructions I encourage you to make that same adjustment.



Deploy Four Ubuntu 20.04 LTS VMs

If you followed Myles' post, you should now have 4 VMs ready to go and can skip down to "Install Additional Tools and Updates."


Manual Deployment - Additional Setup Steps

You can certainly deploy these VMs manually if you prefer, but if you do, make sure you've performed these additional steps on each VM.


[Note] These steps are not required if you deployed your VMs from a template built in the "Creating a Linux VM Template" section.


Deploy your public key for password-less ssh login.

ssh-copy-id ubuntu@k8s-controller
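If you'll be managing all four nodes over SSH, repeat this for the workers too (assuming the k8s-worker1 through k8s-worker3 hostnames that appear later in this post):

ssh-copy-id ubuntu@k8s-worker1
ssh-copy-id ubuntu@k8s-worker2
ssh-copy-id ubuntu@k8s-worker3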

Install open-vm-tools.

sudo apt update
sudo apt install open-vm-tools -y

Disable swap.

sudo swapoff --all
sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
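To confirm swap is really off, check that the Swap line reads all zeros:

free -h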

Install Additional Tools and Updates

Install all of the tools we'll need and check for the latest updates.

sudo apt update
sudo apt install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common \
    nfs-common \
    open-iscsi \
    multipath-tools
sudo apt upgrade -y
sudo apt autoremove -y

[optional] Set static IP addresses. I've created clusters with DHCP, but sometimes I like to make sure I have consistent networking.

sudo vi /etc/netplan/99-netcfg-vmware.yaml

Edit the file to look something like this:

# Generated by VMWare customization engine.
network:
  version: 2
  renderer: networkd
  ethernets:
    ens192:
      dhcp4: no
      addresses:
        - 192.168.1.231/24
      gateway4: 192.168.1.1
      dhcp6: yes
      dhcp6-overrides:
        use-dns: false
      nameservers:
        search:
          - homelab.local
        addresses:
          - 192.168.1.200
          - 192.168.1.201

Then apply the new configuration:

sudo netplan apply
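A quick way to confirm the new address took effect (assuming the ens192 interface name from the file above):

ip addr show ens192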

[optional] Configure iSCSI. I'll be using Portworx with iSCSI volumes on FlashArray. If you don't plan to use iSCSI storage, you can certainly skip this part.


Restart the iscsid service. If you created your VMs from a template, this will force a new IQN to be created.

sudo systemctl restart iscsid.service

View the current iSCSI IQN.

sudo cat /etc/iscsi/initiatorname.iscsi

Verify this value is unique on each VM. If it isn't, you can edit it; for example:

InitiatorName=iqn.1993-08.org.debian:01:k8s-controller
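If you do need to change it, edit the file and restart iscsid so the new IQN takes effect:

sudo vi /etc/iscsi/initiatorname.iscsi
sudo systemctl restart iscsid.service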

Create a Backup

At this point I shut down all of the nodes and take a backup. If anything goes wrong with the cluster deployment, I have a nice rollback point.


Install Docker on All Nodes


Add the Docker GPG key.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Set up the stable repo.

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
sudo apt update

Install Docker Engine (latest version).

sudo apt install -y docker-ce docker-ce-cli containerd.io

Enable docker management as a non-root user.

sudo usermod -aG docker $USER

Reboot so the group change takes effect.

sudo shutdown -r now

Configure docker to start on boot.

sudo systemctl enable docker
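As a quick sanity check, confirm the service is set to start on boot and that your user can talk to the Docker daemon without sudo:

systemctl is-enabled docker
docker ps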

Deploy Kubernetes with Kubeadm


Install Kubeadm, Kubelet, and Kubectl on All Nodes


Add the Google Cloud package repository to your sources.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt update

Install components.

sudo apt install -y kubelet kubeadm kubectl

Hold the package versions. This keeps us from unintentionally upgrading the cluster or getting the versions out of sync between nodes.

sudo apt-mark hold kubelet kubeadm kubectl
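You can confirm the hold took effect with apt-mark; when you do eventually want to upgrade deliberately, release the same packages first with sudo apt-mark unhold.

apt-mark showhold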

Start the Cluster on Controller Node

sudo kubeadm init

When it completes successfully, you'll get output similar to this.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.231:6443 --token tc38rh.ic3xx9mggovgg46m \
    --discovery-token-ca-cert-hash sha256:4cf9ff444fd92427de3faaaaaa6debdb0ebdb33f9bcd50d04f3a060e76307738

So... do what it says; run the following:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
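A quick sanity check that kubectl can now reach the new control plane:

kubectl cluster-info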

Then choose and deploy a pod network. I've been using Calico.

curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
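The Calico pods take a minute or two to start; if you're curious, you can watch them come up in the kube-system namespace:

watch kubectl get pods -n kube-system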

Add Nodes to Cluster

Copy the "kubeadm join..." command from the output of running kubeadm init.

Run this on each of the three worker nodes.

sudo kubeadm join 192.168.1.231:6443 --token tc38rh.ic3xx9mggovgg46m \
    --discovery-token-ca-cert-hash sha256:4cf9ff444fd92427de3faaaaaa6debdb0ebdb33f9bcd50d04f3a060e76307738
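If you've lost the original output, or the token has expired (by default, tokens are only valid for 24 hours), you can generate a fresh join command on the controller:

sudo kubeadm token create --print-join-command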

Verify Cluster

Run kubectl get nodes; all of the nodes should eventually move from NotReady to Ready status.

watch kubectl get nodes

Not ready yet...

NAME             STATUS     ROLES                  AGE   VERSION
k8s-controller   Ready      control-plane,master   10m   v1.20.0
k8s-worker1      Ready      <none>                 63s   v1.20.0
k8s-worker2      NotReady   <none>                 16s   v1.20.0
k8s-worker3      NotReady   <none>                 8s    v1.20.0

Cluster is ready!

NAME             STATUS   ROLES                  AGE     VERSION
k8s-controller   Ready    control-plane,master   13m     v1.20.0
k8s-worker1      Ready    <none>                 4m16s   v1.20.0
k8s-worker2      Ready    <none>                 3m29s   v1.20.0
k8s-worker3      Ready    <none>                 3m21s   v1.20.0

Summary

That's it! You now have a working four-node Kubernetes cluster. Be sure to check out my follow-up post, where I write about some of the post-deployment updates I use to make my lab a bit easier to work with.



Or, now that you have a cluster up and running, try installing Portworx for free and get to know the newest addition to the Pure Storage family.


[Bonus] Installing a Previous Version

I was completely unaware that, as I was writing the original version of this blog, Kubernetes 1.20 had just been released. It's awesome to be on the bleeding edge, but I wanted to install a 1.19 environment to make sure all of the tools I was working with would be fully supported. Here are the changes I had to make to specify 1.19 instead of the latest version.


Docker

sudo apt install -y docker-ce=5:19.03.14~3-0~ubuntu-focal \
    docker-ce-cli=5:19.03.14~3-0~ubuntu-focal containerd.io
sudo apt-mark hold docker-ce docker-ce-cli containerd.io


Kubelet, Kubeadm, Kubectl

sudo apt install -y kubelet=1.19.5-00 kubeadm=1.19.5-00 \
    kubectl=1.19.5-00
sudo apt-mark hold kubelet kubeadm kubectl
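If you're not sure which patch releases the repo offers, you can list them before pinning:

apt-cache madison kubeadm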


Initialize Cluster with Kubeadm

Create a file ~/kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
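One way to create the file in a single step, using the same heredoc style as the repo setup earlier:

cat <<EOF > ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
EOF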

Deploy Cluster

sudo kubeadm init --config ~/kubeadm-config.yaml
