
etcd is a key-value store used by Kubernetes to store all cluster data.

It provides strong consistency and high availability.

You should plan to back up etcd regularly.

More information can be found in the official etcd documentation.

The goal of this lab is to practice taking an etcd backup and restoring it.

The master's IP address should stay the same; otherwise the restore gets complicated due to certificate issues.

Let's Practice

Task: Create a Kubernetes cluster with 3 nodes.

Master: 1 node

Worker: 2 nodes

Hint

Solution

Create a Docker Hub account; skip this step if you already have one.

Open Play with Kubernetes and log in with your Docker Hub account.

Click on Start.

It will start a 4-hour session.

Click on + ADD NEW INSTANCE three times to create three instances.


On the first instance, enter the command below; this node will be the master node:

kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16

Enter the command below on the first node to install the pod network (kube-router):

kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml

Capture the kubeadm join XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX output printed by kubeadm init.

You may also use kubeadm token list to find the token.

Use this command on the second and third nodes: kubeadm join <IP address of master/first node>:6443 --token <token> --discovery-token-unsafe-skip-ca-verification
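If you prefer to assemble the join command by hand, here is a minimal sketch (the IP and token below are hypothetical placeholders, not values from this lab):

```shell
# Hypothetical placeholders: substitute your master's IP and a token from `kubeadm token list`.
MASTER_IP="192.168.0.8"
TOKEN="abcdef.0123456789abcdef"

# Assemble the join command and print it; run the printed command on each worker.
JOIN_CMD="kubeadm join ${MASTER_IP}:6443 --token ${TOKEN} --discovery-token-unsafe-skip-ca-verification"
echo "$JOIN_CMD"
```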


Enter the captured command on the second and third nodes:

kubeadm join  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Check node status; all 3 nodes should be in the Ready state.
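The check can also be scripted; a sketch that assumes the default kubectl get nodes column layout (STATUS is the second column):

```shell
# Print the number of nodes whose STATUS is not "Ready"; 0 means all nodes are Ready.
kubectl get nodes --no-headers | awk '$2 != "Ready" { n++ } END { print n+0 }'
```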


Task: Create a pod with name web and image nginx

name: web

image: nginx

Use a kubectl command.

Verify with the command below:

kubectl get pods
Solution

This command creates a pod named web with the nginx image:

kubectl run web --image=nginx

Task: Create a pod using kubectl and expose a port; the port should be accessible from outside the Kubernetes cluster

name: web

image: nginx

port: 80

Verify with the command below:

kubectl get all
Solution
kubectl expose pod web --port 80 --name=nginx-svc --type=NodePort --target-port=80

Task: Find the port used by the NodePort service in Kubernetes to expose the container

Solution
kubectl get all
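In the kubectl get all output, the NodePort shows up in the service's PORT(S) column as <port>:<nodePort>/TCP, for example 80:31204/TCP (31204 is illustrative). A sketch to extract just the node port:

```shell
# PORT(S) is the 5th column by default; keep the part between ":" and "/".
kubectl get svc nginx-svc --no-headers | awk '{ print $5 }' | cut -d: -f2 | cut -d/ -f1
```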

Task: Create a deployment with the following details:

name: demo

image: nginx

Use a kubectl command.

Verify with the command below:

kubectl get deployment demo
Solution

This command creates a deployment named demo with the nginx image:

kubectl create deployment demo --image=nginx

Task: Check the state of all objects in the default namespace

Solution
kubectl get all

Task: Find the name of the etcd pod in this Kubernetes cluster

Solution
kubectl get pods -n kube-system
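To pull just the etcd pod name out of that listing, a sketch (it assumes the pod name starts with etcd, as it does for kubeadm static pods):

```shell
# Print the name of any kube-system pod whose name begins with "etcd".
kubectl get pods -n kube-system --no-headers | awk '/^etcd/ { print $1 }'
```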

Task: Copy etcdctl from the etcd pod to the local working directory

Solution
kubectl cp -n kube-system <name of etcd pod>:/usr/local/bin/etcdctl etcdctl
ls -la

Task: Change permissions on the etcdctl file to make it executable

Solution
chmod +x etcdctl
ls -la

Task: Move etcdctl to a directory in PATH

Solution
cp etcdctl /usr/bin/

Task: Check etcdctl version

Solution
etcdctl version

Task: Check the etcd config in its yaml manifest file

Solution
cat /etc/kubernetes/manifests/etcd.yaml

Note the values of the following flags; they are needed for the backup and restore steps:

--advertise-client-urls
--trusted-ca-file
--cert-file
--key-file
--data-dir
--initial-advertise-peer-urls
--initial-cluster
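To pull out just these flags, a sketch assuming the standard kubeadm manifest path:

```shell
# List the relevant etcd flags and their values from the static pod manifest.
MANIFEST=/etc/kubernetes/manifests/etcd.yaml
grep -E -- '--(advertise-client-urls|trusted-ca-file|cert-file|key-file|data-dir|initial-advertise-peer-urls|initial-cluster)=' "$MANIFEST"
```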

Task: Take etcd backup

Solution

Test etcdctl command

ETCDCTL_API=3 etcdctl --endpoints=<use --advertise-client-urls> --cacert=<use --trusted-ca-file from the etcd config> --cert=<use --cert-file from the etcd config> --key=<use --key-file from the etcd config> version

ETCDCTL_API=3 sets the etcdctl API version to 3, since version 3 is not fully compatible with the earlier API.

--endpoints tells etcdctl which etcd server(s) to connect to (there may be more than one in an HA setup)
--cacert verifies the certificate of the TLS-enabled etcd server using this CA bundle
--cert identifies the client to the server using this TLS certificate file
--key identifies the client to the server using this TLS key file

Use the command below to take the backup:

ETCDCTL_API=3 etcdctl --endpoints=<use --advertise-client-urls> --cacert=<use --trusted-ca-file from the etcd config> --cert=<use --cert-file from the etcd config> --key=<use --key-file from the etcd config> snapshot save <location of file where you want to save backup>

The options are the same as for the version test above; snapshot save writes the backup to the given file.
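Filled in with typical kubeadm defaults, a sketch (the endpoint and certificate paths below are common kubeadm values, not taken from this lab; confirm each against your own etcd.yaml). It prints the command as a dry run; drop the echo to execute it:

```shell
# Assumed kubeadm defaults; verify every value against /etc/kubernetes/manifests/etcd.yaml.
ENDPOINT="https://127.0.0.1:2379"
CACERT="/etc/kubernetes/pki/etcd/ca.crt"
CERT="/etc/kubernetes/pki/etcd/server.crt"
KEY="/etc/kubernetes/pki/etcd/server.key"
BACKUP="/root/etcdbackup"   # where the snapshot will be written

# Dry run: prints the full command; remove `echo` to actually take the backup.
echo ETCDCTL_API=3 etcdctl \
  --endpoints="$ENDPOINT" \
  --cacert="$CACERT" --cert="$CERT" --key="$KEY" \
  snapshot save "$BACKUP"
```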

Task: Check etcd backup status

Solution
etcdctl snapshot status <location of backup file>

Task: Delete deployment demo, pod web and svc nginx-svc

Solution
kubectl delete deployment demo

This deletes the deployment demo.

kubectl delete pod web

This deletes the pod web.

kubectl delete service/nginx-svc

This deletes the service we created.

Task: Check the state of all objects in the default namespace

Solution
kubectl get all

Task: Stop the kubelet service on the master node

Solution
systemctl stop kubelet

Task: Stop the docker service on the master node

Solution
systemctl stop docker

Task: Check the directory used by etcd, as mentioned in the etcd config

Solution
ls -la /var/lib/

Task: Restore etcd

Solution
ETCDCTL_API=3 etcdctl --data-dir=/var/lib/etcd --initial-advertise-peer-urls=<use --initial-advertise-peer-urls from the etcd config> --initial-cluster=<use --initial-cluster from the etcd config> snapshot restore <location of backup file>

ETCDCTL_API=3 sets the etcdctl API version to 3, as for the backup.

--data-dir is the directory that will be created and populated with the restored data
--initial-advertise-peer-urls and --initial-cluster re-establish the cluster membership for the restored member

Take all of these values from the etcd config checked earlier.
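Filled in the same way, a sketch (the peer URL below is hypothetical; take the real values from your etcd.yaml). Note that etcdctl refuses to restore into an existing non-empty data directory, so you may need to move the old /var/lib/etcd aside first. This also prints the command as a dry run; drop the echo to execute it:

```shell
# Hypothetical peer URL; substitute the values from your etcd.yaml.
PEER_URL="https://192.168.0.8:2380"
DATA_DIR="/var/lib/etcd"    # must be empty or absent before restoring
BACKUP="/root/etcdbackup"

# Dry run: prints the full command; remove `echo` to actually restore.
echo ETCDCTL_API=3 etcdctl \
  --data-dir="$DATA_DIR" \
  --initial-advertise-peer-urls="$PEER_URL" \
  --initial-cluster="default=$PEER_URL" \
  snapshot restore "$BACKUP"
```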

Task: Start the docker service on the master node

Solution
systemctl start docker

Task: Start the kubelet service on the master node

Solution
systemctl start kubelet

Task: Enter the command below in the master node terminal

kubectl get nodes

Task: Check the state of all objects in the default namespace

Solution
kubectl get all

Task: Delete all open nodes/instances and close session

  1. Select the node and click on DELETE
  2. Repeat the same for any other open nodes
  3. Click CLOSE SESSION
Click on ‘Submit Feedback’ on the bottom left of the page to submit any questions/feedback.