You can constrain a Pod so that it can only run on a particular set of Nodes.
There are several ways to do this and the recommended approaches all use label selectors to facilitate the selection.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement (for example, spreading your Pods across Nodes so as not to place a Pod on a Node with insufficient free resources). However, there are some circumstances where you may want to control which Node a Pod deploys to - for example, to ensure that a Pod ends up on a machine with an SSD attached to it, or to co-locate Pods from two different services that communicate a lot into the same availability zone.
nodeSelector
nodeSelector is the simplest recommended form of node selection constraint.
nodeSelector is a field of PodSpec.
It specifies a map of key-value pairs.
For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well).
The most common usage is one key-value pair.
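For illustration, a single key-value nodeSelector sits directly under spec in the Pod manifest; this is only a minimal sketch using the disktype=ssd example that the rest of this exercise builds out in full:
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx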
Let's walk through an example of how to use nodeSelector.
Create a Kubernetes cluster with 3 nodes:
Master: 1 node
Worker: 2 nodes
Create a Docker Hub account (Docker Hub); if you already have one, skip this step.
Open Play with Kubernetes and log in with your Docker Hub account.
Click on Start.
It will start a 4-hour session.
Create three instances:
Click on + ADD NEW INSTANCE three times to add three instances.
Enter the below command on the first node:
kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16
Then apply the Pod network, also on the first node:
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
Capture the kubeadm join command printed in the output of kubeadm init:
kubeadm join XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
You may also use kubeadm token list to find the token and build the command yourself as kubeadm join <IP address of master/first node>:6443 --token <token>.
Enter the captured command on the second and third nodes:
kubeadm join XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
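The captured command generally has the shape below; the token and certificate hash are placeholders that your own kubeadm init output fills in:
kubeadm join <IP address of master/first node>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
If you lose the output, running kubeadm token create --print-join-command on the first node prints a fresh join command.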
Task: Add the label disktype=ssd to a worker node (node2 in this example).
Run kubectl get nodes to get the names of your cluster's nodes.
Pick out the one that you want to add a label to, and then run the below command to add a label to the node you've chosen:
kubectl label nodes <node-name> <label-key>=<label-value>
Use the below command to apply the label:
kubectl label nodes node2 disktype=ssd
You can verify with the below command:
kubectl get nodes --show-labels
or
kubectl get nodes -l disktype=ssd
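If you accidentally label the wrong node, the label can be removed again by appending a minus sign to the key, which is standard kubectl label syntax:
kubectl label nodes node2 disktype-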
Task: Create a Pod with the name demo and image nginx using YAML.
Make sure the Pod is deployed on the node with label disktype=ssd.
name: demo
image: nginx
Create a new file pod.yaml:
vi pod.yaml
Press i to enter insert mode.
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: demo
    type: web
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: demo-nginx
    image: nginx
    ports:
    - containerPort: 80
Press Esc to exit insert mode, then type :wq to save and exit vi.
apiVersion, kind, metadata.name, and spec are required fields.
You may add labels; you can use any key-value pairs for labels.
Labels are used to select Pods.
Provide a name for the container.
Provide an image for the container.
kubectl apply -f pod.yaml
This command will create the Pod using the YAML file.
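If no node in the cluster carries the disktype=ssd label, the Pod will stay in Pending; in that case you can inspect the scheduling events with:
kubectl describe pod demo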
Task: Find the node where the Pod is running.
kubectl get pods -o wide
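If you only want the node name for the demo Pod, a jsonpath query gives it directly:
kubectl get pod demo -o jsonpath='{.spec.nodeName}'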
Task: Create a Deployment with the below settings.
Make sure the Pods are deployed on the node with label disktype=ssd.
name: demo
image: nginx
replicas: 3
labels:
  app: demo
  type: web
Create a new file deployment.yaml:
vi deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    app: demo
    type: web
spec:
  replicas: 3
  selector:
    matchLabels:
      type: web
  template:
    metadata:
      labels:
        type: web
    spec:
      nodeSelector:
        disktype: ssd
      containers:
      - name: demo-nginx
        image: nginx
        ports:
        - containerPort: 80
kubectl apply -f deployment.yaml
This command will create the Deployment using the settings in the YAML file.
Verify with the below command:
kubectl get deployment
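Because the Pod template carries the type: web label, you can also list only this Deployment's Pods together with their nodes by filtering on that label:
kubectl get pods -l type=web -o wide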
Task: How many Pods are created by the Deployment?
kubectl get rs
kubectl get pods
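Assuming the Deployment is named demo as above, you can also read the ready replica count straight from its status with a jsonpath query:
kubectl get deployment demo -o jsonpath='{.status.readyReplicas}'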
Task: Which nodes are the Pods running on?
kubectl get pods -o wide
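For a compact Pod-to-node mapping, custom columns work as well:
kubectl get pods -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName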
Task: Delete all open nodes/instances and close the session.
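Optionally, clean up the objects created in this exercise first, then delete each instance and close the session in the Play with Kubernetes page:
kubectl delete -f deployment.yaml
kubectl delete -f pod.yaml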