Kubernetes high availability

Background

This document covers a high-availability implementation for Kubernetes controller (control-plane) nodes.
We will be using 2 controllers here.
We will use keepalived to provide hardware-, OS-, and service-level HA checks.

Install keepalived on both controllers.

apt-get install -y keepalived

Configure HA using keepalived

On both controllers, add two scripts.

One for the kubelet service

vi /etc/keepalived/kubeletmon.sh
#!/bin/bash
if systemctl is-active --quiet kubelet; then
    exit 0
else
    exit 1
fi

The second for monitoring the API server via its /healthz endpoint

vi /etc/keepalived/apicheck.sh
#!/bin/bash

APISERVER="https://127.0.0.1:6443/healthz"
if curl --silent --max-time 2 --insecure "$APISERVER" | grep -q "ok"; then
    echo "OK"
    exit 0
else
    echo "Not OK"
    exit 1
fi
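
keepalived will only run tracked scripts that are executable, so mark both scripts accordingly and dry-run them before wiring them into the config. This is a quick sanity check using the paths defined above:

```shell
# vrrp_script silently fails its checks if the script is not executable
chmod +x /etc/keepalived/kubeletmon.sh /etc/keepalived/apicheck.sh

# Run each check by hand; an exit code of 0 means "healthy"
/etc/keepalived/kubeletmon.sh; echo "kubeletmon exit: $?"
/etc/keepalived/apicheck.sh;   echo "apicheck exit:   $?"
```

If either script reports a non-zero exit code here, fix that before starting keepalived, otherwise the node's priority will immediately be penalized.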

Invoke both of these scripts from the keepalived configuration file as follows.

Here 192.168.1.40 is the VIP address; it will be held by the machine with the higher "priority" value. When a tracked script fails, its weight (-20) is subtracted from that node's priority, so the primary at 100 drops to 80, below the backup's 90, and the VIP fails over.

On primary controller (higher priority value)

vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

vrrp_script chk_apiserver {
    script "/etc/keepalived/apicheck.sh"
    interval 3
    weight -20
}

vrrp_script chk_kubelet {
    script "/etc/keepalived/kubeletmon.sh"
    interval 3
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens18
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass xxxxx
    }
    virtual_ipaddress {
        192.168.1.40
    }
    track_script {
        chk_apiserver
        chk_kubelet
    }

    # Allow packets addressed to the VIPs above to be received
    accept
}

On the second controller (lower priority value)

vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

vrrp_script chk_apiserver {
    script "/etc/keepalived/apicheck.sh"
    interval 3
    weight -20
}

vrrp_script chk_kubelet {
    script "/etc/keepalived/kubeletmon.sh"
    interval 3
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens18
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass xxxxx
    }
    virtual_ipaddress {
        192.168.1.40
    }
    track_script {
        chk_apiserver
        chk_kubelet
    }

    # Allow packets addressed to the VIPs above to be received
    accept
}

Enable and start keepalived on both controllers

sudo systemctl enable keepalived
sudo systemctl start keepalived
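
Once keepalived is up on both nodes, confirm which node holds the VIP and exercise a failover. The commands below assume the interface name (ens18) and VIP (192.168.1.40) from the configs above:

```shell
# Confirm keepalived started cleanly
systemctl status keepalived --no-pager

# On the MASTER, the VIP should appear as an address on ens18
ip addr show ens18 | grep 192.168.1.40

# Failover drill: stop kubelet on the primary; chk_kubelet fails,
# priority drops 100 -> 80 (below the backup's 90), and the VIP moves.
sudo systemctl stop kubelet
ip addr show ens18          # VIP should now be gone from this node

# Restore kubelet; priority recovers and the VIP returns to the primary
sudo systemctl start kubelet
```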

Initialize Kubernetes using the VIP address and pod network CIDR

sudo kubeadm init --control-plane-endpoint "192.168.1.40:6443" --pod-network-cidr=10.244.0.0/16
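
After init completes, kubeadm prints instructions for setting up kubectl access for a regular user; the standard steps are:

```shell
# Copy the admin kubeconfig into the user's home so kubectl works
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```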

Join other controllers

On an existing controller, run the following to generate the certificate key

 sudo kubeadm init phase upload-certs --upload-certs
[sudo] password for kubuser:
I1111 14:30:12.055556   13624 version.go:261] remote version is much newer: v1.34.1; falling back to: stable-1.33
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
d36ebecc676231277045e528eb2a14037166972c9a5b7f6fbb7f0f81e4f04770

Note the certificate key: d36ebecc676231277045e528eb2a14037166972c9a5b7f6fbb7f0f81e4f04770

kubeadm token create --print-join-command
kubeadm join 192.168.1.40:6443 --token 4cr2ci.yhgqlsqi0qqnpzkq --discovery-token-ca-cert-hash sha256:d994d63aa2b178a89eeea887b9ec40daffda010af2b91d955e9b229508b22573

On the new control-plane nodes

kubeadm join <MASTER_IP>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>

e.g.:

kubeadm join 192.168.1.40:6443 --token 4cr2ci.yhgqlsqi0qqnpzkq --discovery-token-ca-cert-hash sha256:d994d63aa2b178a89eeea887b9ec40daffda010af2b91d955e9b229508b22573 --control-plane --certificate-key d36ebecc676231277045e528eb2a14037166972c9a5b7f6fbb7f0f81e4f04770
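
Once the join finishes, both control-plane nodes should report Ready (node names below are illustrative, not from this setup):

```shell
# Run from any node with kubectl configured; expect two Ready
# control-plane entries, e.g.:
#   controller1   Ready   control-plane   ...
#   controller2   Ready   control-plane   ...
kubectl get nodes
```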

manish