The primary reason that Pods can have multiple containers is to support helper applications that assist a primary application.

Typical examples of helper applications are data pullers, data pushers, and proxies.

Helper and primary applications often need to communicate with each other.

Typically this is done through a shared filesystem, as shown in this exercise, or through the loopback network interface, localhost.

An example of this pattern is a web server along with a helper program that polls a Git repository for new updates.

The Volume in this exercise provides a way for Containers to communicate during the life of the Pod. If the Pod is deleted and recreated, any data stored in the shared Volume is lost.

Task: Create a Kubernetes cluster with 3 nodes.

Master: 1 node

Worker: 2 nodes

Hint

Solution

Create a Docker Hub account. If you already have one, skip this step.

Open Play with Kubernetes and log in with your Docker Hub account.

Click on Start. This starts a 4-hour session.

Click on + ADD NEW INSTANCE three times to create three instances.

On the first instance, enter the command below; this node will be the master node:

kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16

Enter the command below on the first node to install a pod network (kube-router):

kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml

Capture the kubeadm join command printed in the output of kubeadm init: kubeadm join XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

You may also use kubeadm token list to find the token.

If you did not capture the output, use this command on the second and third nodes: kubeadm join &lt;IP address of master/first node&gt;:6443 --token &lt;token&gt; --discovery-token-unsafe-skip-ca-verification
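If the join command was lost, it can also be regenerated on the master node. This uses the standard kubeadm token create subcommand (shown here as a convenience; it is not part of the original capture step):

```shell
# Run on the master node to print a fresh, complete join command
kubeadm token create --print-join-command
```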


Enter the captured command on the second and third nodes:

kubeadm join XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Check node status; all 3 nodes should be in the Ready state.
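On the master node, the node status can be verified with the standard kubectl command:

```shell
# Lists all nodes in the cluster; the STATUS column should read Ready for all three
kubectl get nodes
```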

Task: Create a Pod that runs two Containers.

The two containers should share a Volume that they can use to communicate.

name: nginx-container
image: nginx
volumeMount: shared-data
mountPath: /usr/share/nginx/html

name: debian-container
image: debian
volumeMount: shared-data
mountPath: /pod-data
Solution

Create a new file pod.yaml:

vi pod.yaml

Press i to enter insert mode.

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]

Use Escape to exit insert mode and :wq to save and exit vi.

apiVersion, kind, metadata.name, and spec are required fields.

You may add labels; any key: value pairs can be used.

Labels are used to select Pods.
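As an illustration, labels are set under metadata and queried with a selector; the app: two-containers pair below is an arbitrary example, not part of this exercise:

```yaml
metadata:
  name: two-containers
  labels:
    app: two-containers   # any key: value pair works
```

```shell
# Select Pods by label
kubectl get pods -l app=two-containers
```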

Provide a name for each container.

Provide an image for each container.

In the configuration file, you can see that the Pod has a Volume named shared-data.

The first container listed in the configuration file runs an nginx server.

The mount path for the shared Volume is /usr/share/nginx/html.

The second container is based on the debian image, and has a mount path of /pod-data.

The second container runs the following command and then terminates.

echo Hello from the debian container > /pod-data/index.html

Notice that the second container writes the index.html file in the root directory of the nginx server.

kubectl apply -f pod.yaml

This command creates the Pod from the YAML file.

Task: View information about the Pod and the Containers

Solution
kubectl get pod two-containers --output=yaml

You can see that the debian Container has terminated, and the nginx Container is still running.
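A shorter check of the same state uses the default pod listing; with the debian container finished, the READY column should show that only one of the two containers is ready:

```shell
kubectl get pod two-containers
```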

Get a shell to nginx Container:

kubectl exec -it two-containers -c nginx-container -- /bin/bash

In your shell, verify that nginx is running:

root@two-containers:/# apt-get update
root@two-containers:/# apt-get install curl procps
root@two-containers:/# ps aux

The output is similar to this:

USER       PID  ...  STAT START   TIME COMMAND
root         1  ...  Ss   21:12   0:00 nginx: master process nginx -g daemon off;

Recall that the debian Container created the index.html file in the nginx root directory. Use curl to send a GET request to the nginx server:

root@two-containers:/# curl localhost

The output shows that nginx serves a web page written by the debian container:

Hello from the debian container

Task: Configure process namespace sharing for a pod.

When process namespace sharing is enabled, processes in a container are visible to all other containers in that pod.

You can use this feature to configure cooperating containers, such as a log handler sidecar container, or to troubleshoot container images that don’t include debugging utilities like a shell.

name: nginx
image: nginx

name: shell
image: busybox
Solution

Create a new file pod.yaml:

vi pod.yaml

Press i to enter insert mode.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  shareProcessNamespace: true
  containers:
  - name: nginx
    image: nginx
  - name: shell
    image: busybox
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE
    stdin: true
    tty: true

Use Escape to exit insert mode and :wq to save and exit vi.

apiVersion, kind, metadata.name, and spec are required fields.

You may add labels; any key: value pairs can be used.

Labels are used to select Pods.

Provide a name for each container.

Provide an image for each container.

kubectl apply -f pod.yaml

This command creates the Pod from the YAML file.

Task: Connect to the shell container and restart the nginx worker process

Solution
kubectl attach -it nginx -c shell

Once attached to the shell container, run ps.

If you don’t see a command prompt, try pressing enter.

You can signal processes in other containers.

Check the process ID of the nginx: worker process using ps.

For example, send SIGHUP to nginx to restart the worker process. This requires the SYS_PTRACE capability.

kill -HUP 8
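The PID above (8) is an example; the actual worker PID can be confirmed before and after the signal from inside the shell container. Because the process namespace is shared, ps shows processes from every container in the pod:

```shell
# Find the nginx worker's PID (it will change after the restart)
ps ax | grep 'nginx: worker'
```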

It’s even possible to access another container’s filesystem using the /proc/$pid/root link.

head /proc/8/root/etc/nginx/nginx.conf

Use exit to come out of the container.

Task: Delete all open nodes/instances and close session

  1. Select the node and click on DELETE
  2. Repeat the same for any other open nodes
  3. Click on CLOSE SESSION



Understanding Process Namespace Sharing

Pods share many resources, so it makes sense that they would also share a process namespace.

Some container images may expect to be isolated from other containers, though, so it’s important to understand these differences:

The container process no longer has PID 1.

Some container images refuse to start without PID 1 (for example, containers using systemd) or run commands like kill -HUP 1 to signal the container process.

In pods with a shared process namespace, kill -HUP 1 will signal the pod sandbox process (/pause).
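For instance, in the nginx/shell pod from this exercise, a ps listing from the shell container typically looks like this (PIDs and columns are illustrative):

```shell
ps ax
```

```
PID   USER     TIME  COMMAND
  1   root     0:00  /pause
  8   root     0:00  nginx: master process nginx -g daemon off;
 14   101      0:00  nginx: worker process
 15   root     0:00  sh
```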

Processes are visible to other containers in the pod.

This includes all information visible in /proc, such as passwords that were passed as arguments or environment variables.

These are protected only by regular Unix permissions.

Container filesystems are visible to other containers in the pod through the /proc/$pid/root link.

This makes debugging easier, but it also means that filesystem secrets are protected only by filesystem permissions.
