For network communication, Kubernetes offers four Service types – ClusterIP (the default), NodePort, LoadBalancer, and ExternalName – plus the Ingress resource.
In this post, we will take a brief overview of each of them and check how they work.
I’m using AWS Elastic Kubernetes Service, so keep in mind that we have an AWS Virtual Private Cloud and AWS Application Load Balancers under the hood.
The network and Kubernetes
In a Kubernetes cluster, you may need the following network scenarios:
direct communication between containers – provided by the Pod abstraction: containers inside the same pod can access each other via localhost, see Pods
Pod-to-Pod communication across the cluster – provided by network plugins; in the case of AWS EKS, see the AWS VPC CNI for Kubernetes
connections from a Pod to a Service – provided by the Service abstraction, for example ClusterIP
connections from the world to pods in a cluster – provided by the Service abstraction together with an external resource, for example an AWS Load Balancer
The main goal of a Service in Kubernetes is to provide persistent access to its pods, so you don’t have to look up a Pod’s IP every time that pod is recreated.
Also, Services provide basic load balancing between similar pods, see Services.
Preparation
For the tests, let’s create a Deployment with an NGINX web server pod which will accept connections on its port 80:
kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
Check:
kubectl get deploy nginx
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 53s
Because Services look for pods by their labels, let’s check which labels this Deployment has:
kubectl get deploy nginx -o jsonpath='{.metadata.labels}'
map[app:nginx]
Okay – it’s the app label with the nginx value – remember it.
kubectl port-forward
To make sure that our pod is up and running and able to handle incoming connections on port 80, let’s use kubectl port-forward. Once we’ve confirmed that it works, we can start playing with the network settings from the Kubernetes cluster side.
Find the pod’s name:
kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-554b9c67f9-rwbp7 1/1 Running 0 40m
Pass it to kubectl port-forward as the first argument, then specify a local port (8080) and the port on the pod (80):
kubectl port-forward nginx-554b9c67f9-rwbp7 8080:80
Forwarding from 127.0.0.1:8080 -> 80
From the local machine, check the connection to the NGINX pod in the Kubernetes cluster:
curl localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Cool – it works! We have a running pod, and now we can use it for our Services.
Kubernetes Service types – an overview
Let’s take a brief look at each type, and then move on to the examples:
ClusterIP: the default type; creates a Service with an IP address from the cluster’s pool. Such a Service is available from within the cluster only (or from outside via kubectl proxy)
NodePort: opens a TCP port on each WorkerNode EC2; “behind it”, a ClusterIP Service is created automatically, and traffic from this TCP port on the EC2 is routed to that ClusterIP. Such a Service is accessible from the world (obviously, if the EC2 has a public IP) or from within the VPC
LoadBalancer: creates an external Load Balancer (an AWS Classic LB by default); “behind it”, a NodePort and then a ClusterIP are created automatically, and in this way traffic is routed from the Load Balancer to a pod in the cluster
ExternalName: something like a DNS proxy – in response to a request, such a Service returns a CNAME record pointing to the name specified in its externalName field
ClusterIP
The simplest and the default type.
It opens access to an application from within the cluster, without access from the world.
It can be used, for example, to expose a caching service and make it accessible to all pods in a namespace.
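A minimal manifest for such a Service in front of our NGINX deployment might look like this (the nginx-svc name and the file name are my choices here):
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  # ClusterIP is the default type, set explicitly here for clarity
  type: ClusterIP
  selector:
    # must match the Deployment's label we checked above
    app: nginx
  ports:
    - port: 80
      targetPort: 80
Apply it:
kubectl apply -f nginx-svc.yaml
service/nginx-svc created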
Because a ClusterIP Service is accessible from within the cluster only, we can use kubectl proxy to test it: it opens a local TCP port to the API server, which we can then use to reach our NGINX.
Start the proxy:
kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080
Now, knowing our Service name (we set it in its metadata.name), we can open a connection to localhost:8080 and then, via the namespace name, reach the Service itself:
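Assuming the nginx-svc Service from the sketch above, living in the default namespace, the request would look like this:
curl localhost:8080/api/v1/namespaces/default/services/nginx-svc/proxy/
<!DOCTYPE html>
<html>
...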
So, the ClusterIP Service type:
provides access to an application within a Kubernetes cluster, but without access from the world
uses an IP from the cluster’s IP pool and is accessible via a DNS name in the cluster’s scope, see the DNS for Services and Pods
NodePort
Now, let’s take a closer look at the NodePort Service type.
With this type, Kubernetes will open a TCP port on every WorkerNode, and kube-proxy, which runs on all nodes, will proxy requests from this TCP port to the Service’s pods.
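A minimal sketch of such a Service (the nginx-nodeport name and the 30001 port are my choices here; if nodePort is omitted, Kubernetes picks a free port from the default 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    # port of the ClusterIP Service created behind this NodePort
    - port: 80
      targetPort: 80
      # the TCP port to open on every WorkerNode
      nodePort: 30001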
Obviously, if the WorkerNodes are placed in private subnets with no access from the world, such a Service can’t be used to expose an application to the outside network.
Still, you can access the NGINX application from the same network, for example from a Bastion-host:
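For example (substitute the private IP of one of your WorkerNodes; 30001 is the nodePort from the sketch above):
curl <worker-node-private-ip>:30001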
So, the NodePort Service type:
if the WorkerNode isn’t reachable from the world, such a Service will not provide external access to pods
uses an IP from the provider’s pool, for example, the AWS VPC CIDR
with the default externalTrafficPolicy: Cluster, a request that hits a node’s port can be routed to a pod on any WorkerNode; with Local – only to a pod on the same node
LoadBalancer
The most commonly used Service type.
In the case of AWS, it will create an AWS Load Balancer, of the Classic type by default, which will proxy traffic to all EC2 instances of the TargetGroup attached to this Load Balancer, and then, via a NodePort Service, to the pods.
On such a Load Balancer you can use TLS, choose between load balancer types – Internal/External – and so on, see the Other ELB annotations.
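A minimal sketch of such a Service (the nginx-lb name is my choice here):
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
After applying it, kubectl get svc nginx-lb will show the DNS name of the created Classic LB in the EXTERNAL-IP column.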
Actually, to get the full feature set of the AWS Application Load Balancer, we can use another Kubernetes resource – Ingress; we will talk about it shortly in the Ingress part of this post.
So, the LoadBalancer Service type:
provides external access to pods
provides basic load balancing across pods on different EC2 instances
gives the ability to terminate SSL/TLS sessions
doesn’t support Layer 7 routing
ExternalName
Another Service type is ExternalName, which redirects requests to a domain specified in its externalName parameter:
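A minimal sketch (the google-service name and the google.com value come from the check below):
kind: Service
apiVersion: v1
metadata:
  name: google-service
spec:
  type: ExternalName
  # DNS queries for this Service will get a CNAME answer with this domain
  externalName: google.com
After applying it, you can check it from any pod in the cluster that has DNS tools installed, for example:
dig google-service.default.svc.cluster.local +short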
Here, we query the local DNS name of the google-service, which resolves via a CNAME to the google.com domain set in its externalName.
Ingress
Actually, Ingress isn’t a dedicated Service type – it just describes a set of rules for the Kubernetes Ingress Controller to create a Load Balancer, its Listeners, and routing rules for them.
Here we are creating a Service of the NodePort type and an Ingress of the ALB type.
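A minimal sketch of such a pair might look as follows (the names and the networking.k8s.io/v1 API version are my assumptions; older alb-ingress-controller setups used the extensions/v1beta1 API):
---
apiVersion: v1
kind: Service
metadata:
  name: svc-1
spec:
  # the alb-ingress-controller routes ALB traffic to a NodePort Service
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # tell the alb-ingress-controller to handle this Ingress
    kubernetes.io/ingress.class: alb
    # create an Internet-facing ALB
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  defaultBackend:
    service:
      name: svc-1
      port:
        number: 80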
Kubernetes will create an Ingress object, then the alb-ingress-controller will see it, will create an AWS ALB with the routing rules from the spec of the Ingress, will create a Service object with the NodePort port, then will open a TCP port on the WorkerNodes and will start routing traffic from clients => to the Load Balancer => to the NodePort on the EC2 => via the Service to the pods.
Let’s create three DNS records – svc1.example.com, svc2.example.com, and just svc.example.com – and point them via CNAME records to the Load Balancer URL created from the Ingress resource.
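The rules part of the Ingress spec might then look like this (a sketch; service-1 and service-2 here stand for the two backend Services described below):
spec:
  # requests with no matching host go to the default backend
  defaultBackend:
    service:
      name: service-1
      port:
        number: 80
  rules:
    - host: svc1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-1
                port:
                  number: 80
    - host: svc2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2
                port:
                  number: 80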
Here we left the Services unchanged, but in the Rules we set that a request to svc1.example.com must be sent to Service-1, to svc2.example.com – to Service-2, and to svc.example.com – to the default backend, i.e. Service-1.
Check it:
curl svc1.example.com
svc-1
curl svc2.example.com
svc-2
curl svc.example.com
svc-1
Finally, check those rules on the Listener of the Load Balancer in the AWS Console.