Usually, we don’t see Endpoints objects when using Kubernetes Services: they work under the hood, similarly to ReplicaSets, which are “hidden” behind Kubernetes Deployments.
So, a Service is a Kubernetes abstraction that uses labels to choose Pods to route traffic to; see the Kubernetes: ClusterIP vs NodePort vs LoadBalancer, Services, and Ingress – an overview with examples and Kubernetes: Service, load balancing, kube-proxy, and iptables posts for details:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```
As soon as a new Pod with labels matching the Service’s selector – app=MyApp in the example above – appears in the cluster, the Service will start sending traffic to it.
This is achieved by adding the Pod’s IP address to the Endpoints list of this Service.
Let’s create a simple example:
```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```
Here, we are creating a Pod with NGINX, and a Service with the default ClusterIP type.
Apply the manifest:
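Assuming the manifests above are saved to a file (the file name here is just an example):

```shell
# create the Pod and the Service from the manifest file
kubectl apply -f nginx-pod-svc.yaml
```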
Check the Service:
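A quick check can look like this:

```shell
# list the Service and its ClusterIP
kubectl get service nginx-svc
```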
Now, let’s take a closer look at it:
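For example, with describe:

```shell
# describe prints the selector, the ClusterIP, and the Endpoints list
kubectl describe service nginx-svc
```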
At the end of the output, we can see the Endpoints of this Service – the IP of the Pod.
Check this Pod:
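With the wide output format to see the Pod’s IP:

```shell
# -o wide adds the Pod's IP address to the output
kubectl get pod nginx-pod -o wide
```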
Here is the IP mentioned above.
And now, let’s check the Endpoints themselves, which are dedicated API objects and can be observed in the same way as Services and Pods:
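Since Endpoints are regular API objects, kubectl works with them directly:

```shell
# list the Endpoints object created for the Service
kubectl get endpoints nginx-svc
# or a full description
kubectl describe endpoints nginx-svc
```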
If we add other Pods with the same labels – by describing them as additional objects in the manifest file, or by creating a Deployment – those Pods will be added as Endpoints of the Service:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx-container
          image: nginx
          ports:
            - name: web
              containerPort: 80
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```
Create the Deployment:
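As before, assuming the manifest is saved to a file (the name is an example):

```shell
# create the Deployment and update the Service
kubectl apply -f nginx-deploy-svc.yaml
```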
And check the Endpoints:
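The same Endpoints object now holds more addresses:

```shell
kubectl get endpoints nginx-svc
```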
Here we can see our 10.21.56.143:80 from the previous Pod, and two new endpoints – from the Pods specified in the replicas of the Deployment above.
Find those Pods by using the --selector option, similarly to how a Service looks for Pods to add to its Endpoints:
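For example:

```shell
# select Pods by the same label the Service uses; -o wide shows their IPs
kubectl get pods --selector app=nginx -o wide
```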
We also can create a custom Endpoints object that points to any desired resource.
For example, describe a new Service:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: external-svc
spec:
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: 80
```
Pay attention that in this case we didn’t add the selector field – without a selector, Kubernetes will not create an Endpoints object for this Service automatically, so we will describe one ourselves.
And describe the Endpoints object:
```yaml
kind: Endpoints
apiVersion: v1
metadata:
  name: external-svc
subsets:
  - addresses:
      - ip: 184.108.40.206
    ports:
      - port: 80
        name: web
```
- name: must be the same as the name of the Service
- addresses: the address(es) to send traffic to; in this example, it is the IP of a server in the DigitalOcean cloud where rtfm.co.ua lives, but you can set multiple addresses, and the Service will load-balance between them, as described in the Kubernetes: Service, load balancing, kube-proxy, and iptables post
- ports.name: also must be the same as in the corresponding Service
Check the Service and its Endpoints:
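Both objects can be checked as usual:

```shell
kubectl get service external-svc
kubectl get endpoints external-svc
```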
Run a Pod to check if this Service is working, and run curl in this Pod:
And check the Service by its name:
Or by using its FQDN:
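A possible way to do that (the image and Pod name here are just examples; assuming the Service lives in the default namespace):

```shell
# start a temporary Pod with curl and open a shell in it
kubectl run curl-test --image=curlimages/curl -i --tty --rm -- sh
# inside the Pod: call the Service by its short name
curl external-svc
# or by its FQDN
curl external-svc.default.svc.cluster.local
```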
Another solution to access an external resource can be using a Service with the ExternalName type:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: rtfm-service
spec:
  ports:
    - port: 80
  type: ExternalName
  externalName: rtfm.co.ua
```
Apply, and check:
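Assuming the manifest is saved to a file (the name is an example); an ExternalName Service is resolved via a DNS CNAME record, so it can be checked from inside the cluster by name:

```shell
kubectl apply -f rtfm-service.yaml
# from a Pod inside the cluster, the Service name resolves to rtfm.co.ua
kubectl run curl-test --image=curlimages/curl -i --tty --rm -- sh
curl -I rtfm-service
```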
Also published on Medium.