etcd is a consistent and highly available key-value store used as the backing store for all Kubernetes cluster data.
You should plan to back up etcd regularly.
More information on etcd can be found here
The agenda of this lab is to practice taking an etcd backup and restoring it.
The IP address of the master should remain the same; otherwise, the restore gets complicated due to certificate issues.
Let's Practice
Task: Create a Kubernetes cluster with 3 nodes.
Master: 1 node
Worker: 2 nodes
Create a Docker Hub account; if you already have one, skip this step.
Open Play with Kubernetes and log in with your Docker Hub account.
Click on Start.
It will start a 4-hour session.
Create three instances.
Click on + ADD NEW INSTANCE three times to add them.
Enter the below commands on the first node (master):
kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
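Before joining workers, you can optionally confirm that the master is up and the add-on pods are starting:
kubectl get nodes
kubectl get pods -n kube-system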
Capture the kubeadm join command printed at the end of the kubeadm init output:
kubeadm join XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
You may also use kubeadm token list to find the token and use this command on the second and third nodes: kubeadm join <IP address of master/first node>:6443 --token
Enter the captured command on the second and third nodes.
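If the captured command is lost, it can be regenerated on the master node; a minimal sketch (--print-join-command has been available in kubeadm for a long time, but verify against your version):
kubeadm token create --print-join-command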
Task: Create a pod with name web and image nginx
name: web
image: nginx
Use the kubectl command below:
kubectl run web --image=nginx
This command will create a pod with image nginx and name web.
Verify with the below command:
kubectl get pods
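As an aside, kubectl can also print the manifest it would generate without creating anything; a dry-run sketch (the --dry-run=client form assumes a reasonably recent kubectl, older releases used plain --dry-run):
kubectl run web --image=nginx --dry-run=client -o yaml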
Task: Create a pod using kubectl and expose a port; the port should be accessible from outside the Kubernetes cluster
name: web
image: nginx
port: 80
kubectl expose pod web --port 80 --name=nginx-svc --type=NodePort --target-port=80
Verify with the below command:
kubectl get all
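To confirm the port really is reachable from outside the cluster, a quick check from any node (the node IP and the assigned NodePort are placeholders; both appear in the kubectl get all output):
curl http://<node-ip>:<node-port>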
Task: Find the port used by the NodePort service in Kubernetes to expose the container
kubectl get all
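kubectl get all shows the port in the PORT(S) column, e.g. 80:3xxxx/TCP. To extract just the NodePort, a jsonpath sketch (assuming the service is named nginx-svc as created above):
kubectl get svc nginx-svc -o jsonpath='{.spec.ports[0].nodePort}'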
Task: Create a deployment with the following details:
name: demo
image: nginx
Use the kubectl command below:
kubectl create deployment demo --image=nginx
This command will create a deployment with name demo and image nginx.
Verify with the below command:
kubectl get deployments
Task: Check state of all objects in default namespace
kubectl get all
Task: Find the name of the etcd pod in this Kubernetes cluster
kubectl get pods -n kube-system
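On a kubeadm-provisioned cluster the etcd static pod is typically named etcd-<master hostname> and carries the component=etcd label, so a label selector can narrow the listing (an assumption to verify on your cluster):
kubectl get pods -n kube-system -l component=etcd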
Task: Copy etcdctl from the etcd pod to the local directory
kubectl cp -n kube-system <name of etcd pod>:usr/local/bin/etcdctl etcdctl
ls -la
Task: Change permissions on the etcdctl file to make it executable
chmod +x etcdctl
ls -la
Task: Copy etcdctl to any directory in the PATH
cp etcdctl /usr/bin/
Task: Check etcdctl version
etcdctl version
Task: Check the etcd config in the YAML manifest and note the following flags (a grep sketch to pull them out follows the list)
cat /etc/kubernetes/manifests/etcd.yaml
--advertise-client-urls
--cert-file
--key-file
--trusted-ca-file
--data-dir
--initial-advertise-peer-urls
--initial-cluster
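A quick way to pull only these flags out of the manifest shown above (a grep sketch):
grep -E -- '--(advertise-client-urls|cert-file|key-file|trusted-ca-file|data-dir|initial-advertise-peer-urls|initial-cluster)=' /etc/kubernetes/manifests/etcd.yaml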
Task: Take etcd backup
Test the etcdctl command first:
ETCDCTL_API=3 etcdctl --endpoints=https://<use --advertise-client-urls>:2379 --cacert=<use --trusted-ca-file from above command> --cert=<use --cert-file from above command> --key=<use --key-file from above command> version
ETCDCTL_API=3 sets the etcdctl API version to 3, as there are incompatibilities with the previous version.
--endpoints tells etcdctl where to connect (there could be more than one endpoint in an HA setup).
--cacert is required to verify the certificate of the TLS-enabled etcd server using this CA bundle.
--cert is required to identify the secure client using this TLS certificate file.
--key is required to identify the secure client using this TLS key file.
Use the below command to take the backup:
ETCDCTL_API=3 etcdctl --endpoints=https://<use --advertise-client-urls>:2379 --cacert=<use --trusted-ca-file from above command> --cert=<use --cert-file from above command> --key=<use --key-file from above command> snapshot save <location of file where you want to save the backup>
The options are the same as described for the test command above.
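As a concrete sketch, assuming the kubeadm default certificate paths and that the command is run on the master itself (verify every value against your own etcd.yaml), saving the snapshot as etcdbackup in the current directory:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save etcdbackup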
Task: Check etcd backup status
ETCDCTL_API=3 etcdctl snapshot status etcdbackup
Use the file name/path you passed to snapshot save.
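For a more readable summary, the same check can be printed as a table (the --write-out/-w flag is standard in etcdctl 3.x):
ETCDCTL_API=3 etcdctl snapshot status etcdbackup --write-out=table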
Task: Delete deployment demo, pod web and svc nginx-svc
kubectl delete deployment demo
This deletes the deployment demo.
kubectl delete pod web
This deletes the pod web.
kubectl delete service/nginx-svc
This deletes the service we created.
Task: Check state of all objects in default namespace
kubectl get all
Task: Stop kubelet service on master node
systemctl stop kubelet
Task: Stop docker service on master node
systemctl stop docker
Task: Check the directory used by etcd, as mentioned in the etcd config
ls -la /var/lib/
Task: Restore etcd
Note: etcdctl snapshot restore requires the target --data-dir to be empty or non-existent, so move the existing /var/lib/etcd aside first (see the sketch after the option notes below).
ETCDCTL_API=3 etcdctl --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://<use initial-advertise-peer-urls from the etcd.yaml output above>:2380 --initial-cluster=default=https://<use initial-cluster from the etcd.yaml output above>:2380 snapshot restore <the file where you saved the backup>
ETCDCTL_API=3 sets the etcdctl API version to 3, as there are incompatibilities with the previous version.
--data-dir is required; it is the directory into which the restored data is written.
--initial-advertise-peer-urls and --initial-cluster are required to re-initialise the restored member; their values can be obtained from the etcd.yaml output in the earlier step.
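As a concrete sketch, assuming the snapshot was saved as etcdbackup and the default data directory /var/lib/etcd (the master IP is a placeholder; take it from etcd.yaml):
mv /var/lib/etcd /var/lib/etcd.bak
ETCDCTL_API=3 etcdctl --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://<master-ip>:2380 --initial-cluster=default=https://<master-ip>:2380 snapshot restore etcdbackup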
Task: Start docker service on master node
systemctl start docker
Task: Start kubelet service on master node
systemctl start kubelet
Task: Enter the below command in the master node terminal
kubectl get nodes
Task: Check state of all objects in default namespace
kubectl get all
Task: Delete all open nodes/instances and close the session