Kubernetes: update AWS Route53 DNS from an Ingress

11/22/2020

We’d like to add a DNS record in AWS Route53 when a Kubernetes Ingress resource is deployed, and point this record to the URL of the AWS Load Balancer created by the ALB Ingress Controller.

To achieve this, we can use ExternalDNS, which will make API requests to AWS Route53 to create the appropriate records.

Installation on AWS is described in the ExternalDNS documentation.

AWS set up

IAM Policy

First, we need to create an IAM policy. For testing purposes, let’s create it with access to only one hosted zone.

Go to Route53 and find the zone’s ID:

Go to IAM > Policies and add a new policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/Z07***ZM6"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Save it:

Find it, and copy its ARN:
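Alternatively, the same policy can be created from the command line. A sketch, assuming the AWS CLI is configured and using the policy name AllowExternalDNSUpdates that appears in the eksctl command below (the policy.json file name is my own choice; the zone ID placeholder is kept as-is from the example above):

```shell
# Save the policy document from the example above
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/Z07***ZM6"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
      "Resource": ["*"]
    }
  ]
}
EOF

# Validate the JSON locally before sending it to AWS
python3 -m json.tool policy.json > /dev/null && echo "policy.json: OK"

# Create the policy (requires configured AWS credentials):
# aws iam create-policy --policy-name AllowExternalDNSUpdates --policy-document file://policy.json
```

The `create-policy` call prints the new policy’s ARN, which is what we need for the next step.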

IAM Role

Our AWS Elastic Kubernetes Service cluster was created with eksctl, see the AWS Elastic Kubernetes Service: a cluster creation automation, part 2 – Ansible, eksctl post.

Attach OIDC:

[simterm]

$ eksctl utils associate-iam-oidc-provider --region=us-east-2 --cluster=bttrm-eks-dev-1 --approve

[/simterm]

Create a ServiceAccount:

[simterm]

$ eksctl --profile arseniy create iamserviceaccount --name external-dns --cluster bttrm-eks-dev-1 --attach-policy-arn arn:aws:iam::534***385:policy/AllowExternalDNSUpdates --approve --override-existing-serviceaccounts

[/simterm]

Go to AWS CloudFormation, find the stack created by eksctl, and the new role inside it:

Copy this Role ARN:

ExternalDNS

Check if RBAC is enabled in your cluster:

[simterm]

$ kubectl api-versions | grep rbac.authorization.k8s.io
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1

[/simterm]

Create a deployment, see the Manifest (for clusters with RBAC enabled). In its ServiceAccount annotations, specify the role created above, and in the --domain-filter set your domain, example.com, as we still want to test ExternalDNS with only one domain instead of giving it access to all domains in this AWS account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  # If you're using Amazon EKS with IAM Roles for Service Accounts, specify the following annotation.
  # Otherwise, you may safely omit it.
  annotations:
    # Substitute your account ID and IAM service role name below.
    eks.amazonaws.com/role-arn: arn:aws:iam::534***385:role/eksctl-bttrm-eks-dev-1-addon-iamserviceaccou-Role1-LOQOWXLJ8SD3
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.7.3
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=example.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
        - --provider=aws
        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
        - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
        - --registry=txt
        - --txt-owner-id=bttrm-eks-dev-1-external-dns
      securityContext:
        fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes and AWS token files

The documentation doesn’t describe what exactly the txt-owner-id option is, so: it defines a TXT record value to be used by ExternalDNS when it creates new DNS records, to mark them as “owned by ExternalDNS”.

For example, if we already have a subdomain record subdomain.example.com, and it has no such TXT record added by ExternalDNS, then when a new Ingress with host: subdomain.example.com is created, ExternalDNS will do nothing with the existing record.

We will play with these TXT records a bit later.

For now, deploy the ExternalDNS resources:

[simterm]

$ kubectl apply -f ~/Work/Temp/EKS/external-dns-deployment.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/external-dns configured
clusterrole.rbac.authorization.k8s.io/external-dns created
clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer created
deployment.apps/external-dns created

[/simterm]

Check the pod:

[simterm]

$ kubectl get pod -l app=external-dns
NAME                           READY   STATUS    RESTARTS   AGE
external-dns-75894b84b-2qnr5   1/1     Running   0          2m2s

[/simterm]

Kubernetes Ingress and AWS Application LoadBalancer

Now let’s go and check our ExternalDNS.

Create a Deployment and a Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-svc
spec:
  type: NodePort
  selector:
    app: test
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Create an Ingress; in its external-dns.alpha.kubernetes.io/hostname annotation (or in spec.rules.host) specify the desired domain:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/inbound-cidrs: 0.0.0.0/0
    external-dns.alpha.kubernetes.io/hostname: test-dns.example.com
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: test-svc
          servicePort: 80

Deploy it:

[simterm]

$ kubectl apply -f ~/Work/devops-kubernetes/tests/test-deployment.yaml 
deployment.apps/test-deployment created
service/test-svc created
ingress.extensions/test-ingress created

[/simterm]

Check the Ingress:

[simterm]

$ kubectl get ingress test-ingress
NAME           HOSTS   ADDRESS                                                                  PORTS   AGE
test-ingress   *       e172ad3e-default-testingre-5bb0-1920769979.us-east-2.elb.amazonaws.com   80      28s

[/simterm]

Pod’s logs:

[simterm]

$ kubectl logs external-dns-75894b84b-2qnr5
...
time="2020-11-13T12:31:13Z" level=info msg="Desired change: CREATE test-dns.example.com A [Id: /hostedzone/Z07***ZM6]"
time="2020-11-13T12:31:13Z" level=info msg="Desired change: CREATE test-dns.example.com TXT [Id: /hostedzone/Z07***ZM6]"
time="2020-11-13T12:31:13Z" level=info msg="2 record(s) in zone example.com. [Id: /hostedzone/Z07***ZM6] were successfully updated"

[/simterm]

And DNS:

Pay attention to the TXT record with the “heritage=external-dns,external-dns/owner=bttrm-eks-dev-1-external-dns,external-dns/resource=ingress/default/test-ingress” value: this is where the txt-owner-id parameter is used.
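To see what is inside that ownership record, we can split its comma-separated value into fields (the value below is copied from the record above):

```shell
# Ownership TXT value as written by ExternalDNS (copied from Route53)
txt='heritage=external-dns,external-dns/owner=bttrm-eks-dev-1-external-dns,external-dns/resource=ingress/default/test-ingress'

# Print one key=value pair per line:
# the "owner" field is the --txt-owner-id from the Deployment args,
# the "resource" field points to the Ingress that produced the record
echo "$txt" | tr ',' '\n'
```

This is how ExternalDNS decides whether it is allowed to touch a record: the owner field must match its own --txt-owner-id.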

Check the domain:

[simterm]

$ dig test-dns.example.com +short
3.133.54.4
13.59.209.195

$ curl -I test-dns.example.com
HTTP/1.1 200 OK

[/simterm]

Records updates and apex domains

To allow ExternalDNS to update and delete records, remove the --policy=upsert-only option from its Deployment.
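With the option removed, ExternalDNS falls back to its default sync policy, which also allows it to delete records it owns. A sketch of the relevant part of the container args (everything else stays as in the Deployment above):

```yaml
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=example.com
        - --provider=aws
        # --policy=upsert-only removed: the default "sync" policy
        # lets ExternalDNS update and delete records it owns
        - --aws-zone-type=public
        - --registry=txt
        - --txt-owner-id=bttrm-eks-dev-1-external-dns
```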

Also, a root-level (apex) domain must have no existing TXT records, otherwise ExternalDNS will not update it. Even more: there must be no already existing record for the root-level domain at all (but we will see how to update existing records below).

In our case, we had a TXT record for Google site verification, and I had to remove it.

Update the Ingress, specifying both a subdomain and the root domain:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/inbound-cidrs: 0.0.0.0/0
    external-dns.alpha.kubernetes.io/hostname: test-dns.example.com, example.com
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: test-svc
          servicePort: 80

Update and check the Ingress:

[simterm]

$ kubectl get ingress test-ingress
NAME           HOSTS   ADDRESS                                                                 PORTS   AGE
test-ingress   *       e172ad3e-default-testingre-5bb0-456442783.us-east-2.elb.amazonaws.com   80      30m

[/simterm]

Check logs:

[simterm]

time="2020-11-14T12:47:16Z" level=info msg="Desired change: CREATE example.com A [Id: /hostedzone/Z07***ZM6]"
time="2020-11-14T12:47:16Z" level=info msg="Desired change: CREATE example.com TXT [Id: /hostedzone/Z07***ZM6]"
time="2020-11-14T12:47:16Z" level=info msg="2 record(s) in zone example.com. [Id: /hostedzone/Z07***ZM6] were successfully updated"

[/simterm]

And the domain itself:

Update an existing root-level-domain

The first thing I tried, in order to be able to update an existing domain, was adding a TXT record.

For example, let’s say we have the example.com domain with an IN A 1.1.1.1 record, and we don’t want to remove it; instead, we need to update it when deploying a new Ingress.

At this moment, if we just add example.com to an Ingress, ExternalDNS will not update the existing record, as there is no appropriate TXT record indicating to ExternalDNS that the record is “owned” by it.

So, the idea was to add a TXT record with the txt-owner-id value, i.e. “heritage=external-dns,external-dns/owner=bttrm-eks-dev-1-external-dns,external-dns/resource=ingress/default/test-ingress“, and then deploy the Ingress with the host set.

But in this case, ExternalDNS deleted the existing record before I was able to deploy the Ingress.

Thus, the flow can be as follows:

  1. we have the domain example.com with the IN A 1.1.1.1 record in Route53
  2. we deploy a new Ingress with this domain specified
  3. ExternalDNS does not update the existing 1.1.1.1 record, as there is no “binding” TXT record
  4. then we create the TXT record manually; on the next check, ExternalDNS will see it, consider the record as managed by it with permission to change it, and will update it with the appropriate value, the Ingress URL

Let’s check.

Remove all the records already created by ExternalDNS in the previous tests, manually add example.com with IN A 1.1.1.1, and deploy the Ingress with example.com set. ExternalDNS will say that “All records are already up to date”, as there is no binding TXT.

In the Route53 the record now looks like so:

Now, add the TXT:
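Instead of the console, the same TXT record can be created with the AWS CLI. A sketch, assuming the zone ID from above and the same owner value as in the Deployment (the txt-change.json file name is my own choice; validate the change batch locally before applying it):

```shell
# Change batch for the ownership TXT record; the owner field must match
# the --txt-owner-id set in the ExternalDNS Deployment args
cat > txt-change.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "TXT",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "\"heritage=external-dns,external-dns/owner=bttrm-eks-dev-1-external-dns,external-dns/resource=ingress/default/test-ingress\""
          }
        ]
      }
    }
  ]
}
EOF

# Validate the JSON locally
python3 -m json.tool txt-change.json > /dev/null && echo "txt-change.json: OK"

# Apply it (requires configured AWS credentials):
# aws route53 change-resource-record-sets --hosted-zone-id Z07***ZM6 --change-batch file://txt-change.json
```

Note that Route53 requires the TXT value itself to be wrapped in double quotes, hence the escaped quotes inside the JSON string.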

And check the logs:

[simterm]

time="2020-11-14T13:39:44Z" level=info msg="Desired change: UPSERT example.com A [Id: /hostedzone/Z07***ZM6]"
time="2020-11-14T13:39:44Z" level=info msg="Desired change: UPSERT example.com TXT [Id: /hostedzone/Z07***ZM6]"
time="2020-11-14T13:39:45Z" level=info msg="2 record(s) in zone example.com. [Id: /hostedzone/Z07***ZM6] were successfully updated"

[/simterm]

ExternalDNS now performed the UPSERT call instead of DELETE or CREATE, and the record was updated as we planned:

Done.