Configure Master Node
Prepare master VM
SSH to the kubernetes-master VM and follow the instructions to prepare it for Kubernetes.
Initialize the cluster
Network
If necessary, update the following command with IP addresses specific to your environment.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --upload-certs --control-plane-endpoint=10.0.0.221
Review kubeadm output
The kubeadm command above is critical: it bootstraps your Kubernetes cluster and must complete successfully. If it fails, go back to the previous steps and fix the issues before proceeding with the rest of this guide.
Upon successful completion, kubeadm init will exit with the message Your Kubernetes control-plane has initialized successfully!, accompanied by important information about your cluster configuration, including the tokens and keys needed to join additional nodes to the cluster.
Important
Capture and store the cluster tokens and keys in a secure place. They are sensitive credentials: anyone who holds them can join nodes to your cluster. You don't want to lose them, and you don't want them exposed.
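If the join command from the kubeadm init output does get lost, it can be regenerated later on the control-plane node. A sketch (note that bootstrap tokens expire after 24 hours by default):

```shell
# Print a fresh worker join command (this creates a new bootstrap token).
kubeadm token create --print-join-command

# For joining additional control-plane nodes: re-upload the control-plane
# certificates and print a new certificate key.
sudo kubeadm init phase upload-certs --upload-certs
```

These commands only work on a node where kubeadm has already initialized the cluster.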
Enable your user to manage the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
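Alternatively, if you are operating as the root user, you can point kubectl at the admin kubeconfig directly instead of copying it:

```shell
# Use the admin kubeconfig in place, without copying it into $HOME/.kube.
# Add this to your shell profile to make it persistent across sessions.
export KUBECONFIG=/etc/kubernetes/admin.conf
```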
Install CNI networking plugin
Download Calico CNI plugin manifest files:
wget https://docs.projectcalico.org/manifests/tigera-operator.yaml
wget https://docs.projectcalico.org/manifests/custom-resources.yaml
Network
The Calico manifest hardcodes the pod network CIDR as 192.168.0.0/16. If you are using a different range in your network, you must modify the manifest to match your setup. To do so, open custom-resources.yaml, find the ipPools section, and change it to match your environment, e.g.:
ipPools:
- blockSize: 20
cidr: 172.16.0.0/16
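The edit can also be scripted. A sketch, assuming the default 192.168.0.0/16 should become 172.16.0.0/16; the demo file below stands in for your downloaded custom-resources.yaml, and the same sed command works against the real file:

```shell
# Demo stand-in for the downloaded custom-resources.yaml.
cat > custom-resources-demo.yaml <<'EOF'
ipPools:
- blockSize: 26
  cidr: 192.168.0.0/16
EOF

# Rewrite the pod network CIDR in place, then show the result.
sed -i 's|cidr: 192.168.0.0/16|cidr: 172.16.0.0/16|' custom-resources-demo.yaml
grep 'cidr:' custom-resources-demo.yaml
```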
Run the following commands to install the Calico CNI plugin:
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml
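The Calico pods take a minute or two to come up. One way to block until they are ready, assuming the Tigera operator's default calico-system namespace:

```shell
# Wait up to 5 minutes for all Calico pods to report Ready.
kubectl wait pods --all --namespace calico-system \
  --for=condition=Ready --timeout=300s
```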
Check cluster health
kubectl cluster-info
kubectl get pods --all-namespaces
kubectl get services --all-namespaces
All pods should be in the Running state and there should be no error messages.
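A quick way to spot problem pods is to filter the STATUS column. A sketch: the canned sample below stands in for real output; in practice, pipe `kubectl get pods --all-namespaces --no-headers` into the same awk filter.

```shell
# Canned sample of `kubectl get pods -A --no-headers` output for illustration.
sample='kube-system    coredns-abc    1/1  Running           0  5m
calico-system  calico-node-x  0/1  CrashLoopBackOff  3  2m'

# Print namespace, name, and status of any pod not Running or Completed.
echo "$sample" | awk '$4 != "Running" && $4 != "Completed" {print $1, $2, $4}'
```

An empty result means every pod is healthy.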