Kubernetes Services

A Kubernetes Service is used to expose an application; it groups the Pods that run the same application.

It acts as a load balancer for client requests to that application.

When a client sends a request to the application, the Kubernetes cluster uses the Service and its rules to route the request to one of the application's Pods.

A Kubernetes Service uses labels to group the Pods that belong to one application.

You define a particular set of labels in the Service's YAML file to select which Pods should be part of the Service.
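As a minimal sketch, a Service that selects Pods by label might look like this (the name my-app and the label app: my-app are illustrative, not from any real cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # illustrative Service name
spec:
  selector:
    app: my-app         # every Pod carrying this label becomes part of the Service
  ports:
    - protocol: TCP
      port: 80          # port the Service listens on
```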

Kubernetes Pods can be created and destroyed at any time according to the state of the cluster.

A Deployment creates and destroys Pods dynamically, and each Pod gets its own IP address.

An internal application or an external client that needs access to the application cannot keep track of this changing set of Pods.

Instead, they communicate through Services: a Service's IP address and DNS name do not change unless the Service is deleted.

The Service routes traffic to one of the Pods according to its rules.

A Kubernetes Service selects Pods based on their labels and groups them together.

It uses kube-proxy to route traffic, which can be configured in iptables or IPVS mode.

We will cover kube-proxy in a separate Kubernetes guide.

A Kubernetes Service can use the sessionAffinity setting to send all traffic from a given client to the same Pod, if the application requires it.
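A hedged sketch of that setting (the Service name and label are illustrative; 10800 seconds is the default affinity timeout):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sticky-app            # illustrative name
spec:
  selector:
    app: sticky-app           # assumed Pod label
  sessionAffinity: ClientIP   # route each client IP to the same Pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # stickiness window (default is 3 hours)
  ports:
    - port: 80
```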

It uses the ports configured in the YAML file for receiving traffic and forwarding it to the Pods.

You can use different ports: receive traffic on a generic port such as TCP 80, and send it on to the TCP port exposed by the Pod.
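For example, a fragment of a Service spec showing the two ports (the Pod port 8080 is an assumption):

```yaml
# Receive on generic TCP 80, forward to the TCP port the Pod exposes.
ports:
  - protocol: TCP
    port: 80           # port clients use to reach the Service
    targetPort: 8080   # containerPort exposed by the Pod
```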

If required, you can choose the IP address used by a Kubernetes Service by setting spec.clusterIP in the Service's YAML.

The IP address must be part of the service-cluster-ip-range CIDR range configured in the API server settings.
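A sketch of pinning the Service IP (10.96.0.50 is an assumed address; it must fall inside your cluster's service CIDR):

```yaml
# Fragment of a Service spec with a fixed cluster IP.
spec:
  clusterIP: 10.96.0.50   # must be inside service-cluster-ip-range
```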

For some parts of your application (for example, frontends) you may want to expose a Service on an external IP address, one that is reachable from outside your cluster.

Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.

Type values and their behaviors are:

ClusterIP: Exposes the Service on a cluster-internal IP.

Choosing this value makes the Service only reachable from within the cluster.

This is the default ServiceType.

NodePort: Exposes the Service on each Node's IP at a static port (the NodePort).

A ClusterIP Service, to which the NodePort Service routes, is automatically created.

You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.

NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.

ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.shrlrn.com), by returning a CNAME record with its value.

No proxying of any kind is set up.
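As an illustration of the type field, a hedged NodePort sketch (the name, label, and port numbers are assumptions; nodePort must fall in the configured range, 30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-frontend       # illustrative name
spec:
  type: NodePort          # default is ClusterIP when omitted
  selector:
    app: my-frontend      # assumed Pod label
  ports:
    - port: 80            # cluster-internal Service port
      targetPort: 8080    # assumed Pod port
      nodePort: 30080     # static port opened on every node
```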

You can also use Ingress to expose your Service.

Ingress is not a Service type, but it acts as the entry point for your cluster.

It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address.

We will cover Ingress in a separate lab and guide.


You can control how traffic from external sources is routed by setting the spec.externalTrafficPolicy field on a Kubernetes Service.

If you set it to Cluster, external traffic is sent to all Pods of the Service that are in a ready state.

If it is set to Local, traffic is sent only to ready Pods on the node that received it. If there are no ready Pods for that Service on that node, kube-proxy will not forward traffic for that Service.
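The setting above can be sketched as follows (the Service name and label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-app             # illustrative name
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only route to ready Pods on the receiving node
  selector:
    app: external-app            # assumed Pod label
  ports:
    - port: 80
```

Local preserves the client source IP, at the cost of possibly uneven traffic spread across nodes.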

If you enable the ProxyTerminatingEndpoints feature gate for the kube-proxy, the kube-proxy checks if the node has local endpoints and whether or not all the local endpoints are marked as terminating.

If there are local endpoints and all of those are terminating, then the kube-proxy ignores any external traffic policy of Local.

Instead, while the node-local endpoints remain as all terminating, the kube-proxy forwards traffic for that Service to healthy endpoints elsewhere, as if the external traffic policy were set to Cluster.

This forwarding behavior for terminating endpoints exists to allow external load balancers to gracefully drain connections that are backed by NodePort Services, even when the health check node port starts to fail.

Otherwise, traffic can be lost between the time a node is still in the node pool of a load balancer and traffic is being dropped during the termination period of a pod.

You can set the spec.internalTrafficPolicy field to control how traffic from internal sources is routed.

Valid values are Cluster and Local.

Set the field to Cluster to route internal traffic to all ready endpoints and Local to only route to ready node-local endpoints.

If the traffic policy is Local and there are no pods on that node, traffic is dropped by kube-proxy.
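A minimal fragment showing the internal policy:

```yaml
# Fragment of a Service spec restricting internal traffic
# to node-local endpoints.
spec:
  internalTrafficPolicy: Local   # Cluster (the default) routes to all ready endpoints
```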

Kubernetes supports two primary modes of finding a Service: environment variables and DNS.


DNS

A cluster-aware DNS server, such as CoreDNS, watches the Kubernetes API for new Services and creates a set of DNS records for each one.

If DNS has been enabled throughout your cluster then all Pods should automatically be able to resolve Services by their DNS name.

For example, if you have a Service called nginxsvc in a Kubernetes namespace prod, the control plane and the DNS Service acting together create a DNS record for nginxsvc.prod.

Pods in the prod namespace should be able to find the Service by doing a name lookup for nginxsvc (nginxsvc.prod would also work).

Pods in other namespaces must use nginxsvc.prod as the name.

These names will resolve to the cluster IP assigned for the Service.
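As a sketch, these are the name forms a Pod could resolve for this example, assuming the default cluster domain cluster.local:

```shell
svc="nginxsvc"; ns="prod"

# Short name works from inside the same namespace:
echo "$svc"
# From any namespace, qualify with the namespace:
echo "$svc.$ns"
# Fully qualified domain name:
echo "$svc.$ns.svc.cluster.local"
```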

Environment variables

When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service.

It supports both Docker links compatible variables and simpler {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores.

For example, the Service redis-master, which exposes TCP port 6379 and has been allocated a cluster IP address, produces environment variables such as REDIS_MASTER_SERVICE_HOST and REDIS_MASTER_SERVICE_PORT.
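As a sketch of that naming rule applied to the redis-master example:

```shell
# Derive the environment-variable names the kubelet would publish for a
# Service: upper-case the Service name and convert dashes to underscores.
svc="redis-master"
prefix=$(echo "$svc" | tr 'a-z-' 'A-Z_')
echo "${prefix}_SERVICE_HOST"   # -> REDIS_MASTER_SERVICE_HOST
echo "${prefix}_SERVICE_PORT"   # -> REDIS_MASTER_SERVICE_PORT
```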


When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client Pods, you must create the Service before the client Pods come into existence. Otherwise, those client Pods won’t have their environment variables populated.

If you only use DNS to discover the cluster IP for a Service, you don’t need to worry about this ordering issue.

Sometimes you don’t need load-balancing and a single Service IP.

The requirement could be to communicate with all Pods, or with a specific set of Pods, without load balancing.

For example, you may have an app or database that runs as a single Pod.

You still need a Service definition on top of it to handle Pod restarts and the new IP address a restarted Pod acquires.

But you don’t want any load balancing or routing.

You just need the Service to pass the request through to the backend Pod; for this you use a headless Service, which does not have a cluster IP.

Kubernetes allows clients to discover pod IPs through DNS lookups.

Usually, when you perform a DNS lookup for a service, the DNS server returns a single IP which is the service’s cluster IP.

But if you don't need the cluster IP for your Service, you can set clusterIP to None, and the DNS server will return the individual Pod IPs instead of the Service IP.

In this case, you can create what are termed “headless” Services, by explicitly specifying “None” for the cluster IP.

You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes’ implementation.

For headless Services, a cluster IP is not allocated, kube-proxy does not handle these Services, and there is no load balancing or proxying done by the platform for them.
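As a minimal sketch, a headless Service looks like this (the name, label, and port are illustrative; only clusterIP: None is essential):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless    # illustrative name
spec:
  clusterIP: None      # "None" makes this a headless Service
  selector:
    app: db            # assumed Pod label
  ports:
    - port: 5432       # assumed application port
```

A DNS lookup for this Service returns A records pointing directly to the backing Pods.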

How DNS is automatically configured depends on whether the Service has selectors defined:

With selectors

For headless Services that define selectors, the endpoints controller creates Endpoints records in the API, and modifies the DNS configuration to return A records (IP addresses) that point directly to the Pods backing the Service.

Without selectors

For headless Services that do not define selectors, the endpoints controller does not create Endpoints records.

However, the DNS system looks for and configures either:

CNAME records for ExternalName-type Services.

A records for any Endpoints that share a name with the Service, for all other types.

This is a live document and we will be updating it regularly; consider adding it to your bookmarks.

Join us for an upcoming Kubernetes workshop, training, or bootcamp.