Let’s continue our journey with Istio.
Besides Istio, in this post, we will also configure ExternalDNS, see the Kubernetes: update AWS Route53 DNS from an Ingress for details.
Everything described below is a kind of Proof of Concept and will be deployed to the same AWS Elastic Kubernetes Service Dev cluster.
Goals
So, now I have three tasks:
- test how one shared AWS Application LoadBalancer (AWS ALB) and the Istio Ingress Gateway will work with applications in different namespaces
- create a Helm chart with templates that allow choosing whether to create an Ingress, an Istio Gateway, and an Istio VirtualService
- configure ExternalDNS to create records in AWS Route53 when an Istio Gateway or VirtualService is added
At first, let’s see how the Istio Ingress Gateway will work with applications located in dedicated namespaces. For this, we will create an Ingress, which will create an AWS Application LoadBalancer via the ALB Ingress Controller, and two testing applications, each with its own Service, Gateway, and VirtualService.
In the Ingress/AWS ALB we will describe hosts, which will trigger ExternalDNS to create records. Also, here we will do SSL termination by attaching SSL certificates from AWS Certificate Manager.
In the Gateway of each application, we will open ports on the Istio Ingress Gateway and add the hosts for which this Gateway will accept traffic.
The shared Ingress will be created in the istio-system namespace, as it needs access to the Istio Ingress Gateway Service.
Actually, we could create one shared Gateway, but at the end of the discussion here>>> people say it’s better to have a dedicated Gateway per application, which seems more correct, so let’s do it this way.
Thus, now we will create:
- a dedicated Ingress/AWS ALB with two test records in Route53, in the istio-system namespace
- test applications app1 and app2 in the backend-app-1-ns and backend-app-2-ns namespaces accordingly, each with its own Deployment, Service, Gateway, and VirtualService
The second task will be more interesting: we’ll need to create Helm templates to deploy applications to different environments (Dev and Prod) with different Ingress and Istio configurations.
And in the third task, we will configure ExternalDNS to work with Istio Gateway and VirtualService.
Some parts below could be confusing, but I’ve tried to describe them as simply as I can.
Shared Ingress/AWS ALB
Create a manifest file for the Ingress/ALB – common-ingress-gateway.yaml:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend-common-alb
  namespace: istio-system
  annotations:
    # create an AWS Application LoadBalancer
    kubernetes.io/ingress.class: alb
    # external type
    alb.ingress.kubernetes.io/scheme: internet-facing
    # AWS Certificate Manager certificate's ARN
    alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-east-2:534***385:certificate/db886018-c173-43d0-b1be-b940de71f2a2"
    # open ports 80 and 443
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    # redirect all HTTP to HTTPS
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    # ExternalDNS settings: https://rtfm.co.ua/en/kubernetes-update-aws-route53-dns-from-an-ingress/
    external-dns.alpha.kubernetes.io/hostname: "app-1-test-common-ingress.example.com, app-2-test-common-ingress.example.com"
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: istio-ingressgateway
              servicePort: 80
Here, in the annotations, we describe the domains to be created by ExternalDNS (they will be mapped to the URL of the AWS ALB), and in the backend we send all traffic to the istio-ingressgateway Kubernetes Service.
Deploy it:
[simterm]
$ kubectl apply -f common-ingress-gateway.yaml
ingress.extensions/backend-common-alb created
[/simterm]
Check the Ingress/ALB:
[simterm]
$ kubectl -n istio-system get ingress backend-common-alb
NAME                 CLASS    HOSTS   ADDRESS                                              PORTS   AGE
backend-common-alb   <none>   *       aadca942-***1826302788.us-east-2.elb.amazonaws.com   80      72s
[/simterm]
And ExternalDNS logs:
[simterm]
...
time="2021-04-12T09:45:02Z" level=info msg="Desired change: CREATE app-1-test-common-ingress.example.com A [Id: /hostedzone/Z30KLN6M3D0LB6]"
time="2021-04-12T09:45:02Z" level=info msg="Desired change: CREATE app-1-test-common-ingress.example.com TXT [Id: /hostedzone/Z30KLN6M3D0LB6]"
time="2021-04-12T09:45:02Z" level=info msg="Desired change: CREATE app-2-test-common-ingress.example.com A [Id: /hostedzone/Z30KLN6M3D0LB6]"
time="2021-04-12T09:45:02Z" level=info msg="Desired change: CREATE app-2-test-common-ingress.example.com TXT [Id: /hostedzone/Z30KLN6M3D0LB6]"
time="2021-04-12T09:45:03Z" level=info msg="4 record(s) in zone example.com. [Id: /hostedzone/Z30KLN6M3D0LB6] were successfully updated"
...
[/simterm]
Try to access the URL – we must get an error, as the Istio Ingress Gateway is not configured yet:
[simterm]
$ curl -I https://app-1-test-common-ingress.example.com
HTTP/2 502
date: Mon, 12 Apr 2021 09:46:11 GMT
server: istio-envoy
[/simterm]
Yup, a 502 error, as the Istio Ingress Gateway has no routes for these domains yet:
[simterm]
$ istioctl proxy-config routes -n istio-system istio-ingressgateway-8459df68cb-bh76b
NOTE: This output only contains routes loaded via RDS.
NAME     DOMAINS     MATCH                  VIRTUAL SERVICE
         *           /healthz/ready*
         *           /stats/prometheus*
[/simterm]
Now, let’s create testing applications where we will describe settings for the Istio Ingress Gateway.
Testing applications
Both applications are identical except for the namespaces and the names of the applications and services.
Here, we will create:
- Namespace
- Deployment
- Service
- Gateway
- VirtualService
The whole manifest is the following:
---
apiVersion: v1
kind: Namespace
metadata:
  name: backend-app-1-ns
  labels:
    istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-app-1-deploy
  namespace: backend-app-1-ns
  labels:
    app: backend-app-1
    version: v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend-app-1
  template:
    metadata:
      labels:
        app: backend-app-1
        version: v1
    spec:
      containers:
      - name: app1
        image: nginxdemos/hello
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        readinessProbe:
          httpGet:
            path: /
            port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend-app-1-servcie
  namespace: backend-app-1-ns
spec:
  selector:
    app: backend-app-1
  ports:
    - name: http
      protocol: TCP
      port: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: backend-app-1-gateway
  namespace: backend-app-1-ns
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "app-1-test-common-ingress.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend-app-1-virtualservice
  namespace: backend-app-1-ns
spec:
  hosts:
  - "app-1-test-common-ingress.example.com"
  gateways:
  - backend-app-1-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: backend-app-1-servcie
        port:
          number: 80
Deploy it:
[simterm]
$ kubectl apply -f app1.yaml
namespace/backend-app-1-ns created
deployment.apps/backend-app-1-deploy created
service/backend-app-1-servcie created
gateway.networking.istio.io/backend-app-1-gateway created
virtualservice.networking.istio.io/backend-app-1-virtualservice created
[/simterm]
Copy the manifest file, change app-1 to app-2, and deploy it too:
[simterm]
$ kubectl apply -f app2.yaml
namespace/backend-app-2-ns created
deployment.apps/backend-app-2-deploy created
service/backend-app-2-servcie created
gateway.networking.istio.io/backend-app-2-gateway created
virtualservice.networking.istio.io/backend-app-2-virtualservice created
[/simterm]
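By the way, the copy-and-rename step can be scripted instead of done by hand. A minimal sketch with sed – the heredoc below is only a tiny stand-in for the real app1.yaml shown above, so the example is self-contained:

```shell
# stand-in for the real app1.yaml from the previous step (illustration only)
cat > app1.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: backend-app-1-ns
EOF

# replace every "app-1" reference with "app-2" to produce the second manifest
sed 's/app-1/app-2/g' app1.yaml > app2.yaml

cat app2.yaml
```

After that, app2.yaml can be applied with kubectl as shown above.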
Check Istio’s routes again:
[simterm]
$ istioctl proxy-config routes -n istio-system istio-ingressgateway-8459df68cb-bh76b
NOTE: This output only contains routes loaded via RDS.
NAME      DOMAINS                                 MATCH   VIRTUAL SERVICE
http.80   app-1-test-common-ingress.example.com   /*      backend-app-1-virtualservice.backend-app-1-ns
http.80   app-2-test-common-ingress.example.com   /*      backend-app-2-virtualservice.backend-app-2-ns
[/simterm]
And try to access the URL of app-1:
[simterm]
$ curl -I https://app-1-test-common-ingress.example.com
HTTP/2 200
date: Mon, 12 Apr 2021 09:52:13 GMT
content-type: text/html
server: istio-envoy
[/simterm]
Cool, everything is working now: we have one shared AWS ALB created via a Kubernetes Ingress, which sends traffic through the Istio Ingress Gateway to two applications in dedicated namespaces.
Drop the resources, but leave the Ingress with the shared LoadBalancer for further tests:
[simterm]
$ kubectl delete -f app1.yaml
$ kubectl delete -f app2.yaml
[/simterm]
And let’s go to the Helm chart.
Istio: shared Ingress/AWS ALB, and Helm
Planning: conditions
The following task is non-trivial: we have a Dev EKS cluster and a Production EKS cluster.
On the Dev cluster, we already have Istio, and it’s being tested there, though not with all applications running there. On the Production cluster, we haven’t installed Istio yet, but will do it in the future.
Also, all our applications are now living in dedicated namespaces and have dedicated Ingresses/AWS ALBs.
On the Dev cluster, I’d like to change this approach and use a shared Ingress/ALB, sending all traffic via the Istio Ingress Gateway. On Production, we’ll leave it as is, i.e. each application will keep its own ALB and for now will send traffic directly to the application’s Service; in the future, when we implement Istio on Production, we’ll change the Ingress to use the Istio Ingress Gateway.
Still, both Dev and Production applications may or may not use Istio, as it’s still at an early stage in our architecture, and sometimes settings for an application will differ, so we need to create a Helm chart that can configure the necessary resources and Ingress settings.
So, first, we need to determine whether a chart uses its own Ingress or a shared one. Second, if it uses its own Ingress, which backend will it use – the Istio Ingress Gateway or a common Service of the application? And if it uses the Istio Ingress Gateway, the chart has to create Gateway and VirtualService resources.
Thus, our template must accept three configuration schemes:

- a shared LoadBalancer and Istio: use the shared Istio Ingress Gateway LoadBalancer, with the Istio Ingress Gateway Service as the backend
- an own/dedicated LoadBalancer for an application and Istio: create an Ingress for the application with the Istio Ingress Gateway Service as the backend
- an own/dedicated LoadBalancer for an application but without Istio: create an Ingress for the application with a Service of the application as the backend
I.e.:

- is the Ingress shared or own?
  - if shared – do not create an Ingress resource
  - if own – create an Ingress, with a choice of backend: the Istio Ingress Gateway Service or a common Kubernetes Service of the application
- what is the backend for the Ingress – Istio or a Service of the application?
  - if Istio – set the backend as serviceName: istio-ingressgateway, and create Gateway and VirtualService resources
  - if a Service of the application – set the backend as serviceName: <APPNAME>-service, and do not create Gateway and VirtualService resources
To achieve this, we will use values.yaml files – we have dedicated files for the Dev and Production environments. In those files, we can define two parameters – istio.enabled and ingress.enabled.
This gives us the ability to set ingress.enabled=false for Dev, so no Ingress is created, but set istio.enabled=true to create a Gateway and VirtualService, which will be used by the shared Ingress/ALB from the istio-system namespace.
And for Production, we will be able to set ingress.enabled=true and istio.enabled=false – then the chart will be deployed with the currently used scheme. Later, when we implement Istio on the Production cluster, we will set ingress.enabled=true and istio.enabled=true, and this will create a dedicated Ingress/LoadBalancer that will send traffic via the Istio Ingress Gateway.
Well, let’s try.
Creating Helm chart and templates
Create a new chart:
[simterm]
$ helm create app-1
Creating app-1
[/simterm]
Create directories to keep values.yaml for Dev and Production:
[simterm]
$ mkdir -p app-1/env/{dev,prod}
[/simterm]
Delete default templates and values:
[simterm]
$ rm -r app-1/templates/* app-1/values.yaml
[/simterm]
Create our own values files in those directories:
[simterm]
$ vim -p app-1/env/dev/values.yaml app-1/env/prod/values.yaml
[/simterm]
For Dev – app-1/env/dev/values.yaml:
appConfig:
  name: "backend-app-1"
  version: "v1"
  url: "dev-app-1-test-common-ingress.example.com"

istio:
  enabled: true

ingress:
  enabled: false
Do not create an Ingress, but create Gateway and VirtualService.
Production – app-1/env/prod/values.yaml:
appConfig:
  name: "backend-app-1"
  version: "v1"
  url: "prod-app-1-test-common-ingress.example.com"

istio:
  enabled: false

ingress:
  enabled: true
Here, an Ingress with a dedicated AWS ALB will be created, but Istio resources will not be – the Ingress will send traffic directly to a Service of the application.
Create template files:
[simterm]
$ vim -p app-1/templates/ingress.yaml app-1/templates/service.yaml app-1/templates/deployment.yaml
[/simterm]
We will not define a namespace in these templates (except for the Ingress, see below), as the namespace will be created by Helm during deployment.
Deployment
No changes here, just some values taken from the values.yaml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appConfig.name }}-deploy
  labels:
    app: {{ .Values.appConfig.name }}
    version: {{ .Values.appConfig.version }}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {{ .Values.appConfig.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.appConfig.name }}
        version: {{ .Values.appConfig.version }}
    spec:
      containers:
      - name: web-app
        image: nginxdemos/hello
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        readinessProbe:
          httpGet:
            path: /
            port: 80
Service, VirtualService, Gateway
Here, we will always create a Service, with a condition check: if ingress.enabled==true, then set type: NodePort, so our LoadBalancer will be able to send traffic to a WorkerNode. If it’s false, then the default ClusterIP value is used, so our requests will not go through additional iptables rules but will be sent directly to the WorkerNode where a Pod is living (read also Kubernetes: Service, load balancing, kube-proxy, and iptables):
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.appConfig.name }}-service
spec:
  {{- if .Values.ingress.enabled }}
  type: NodePort
  {{- end }}
  selector:
    app: {{ .Values.appConfig.name }}
  ports:
    - name: http
      protocol: TCP
      port: 80
Then, we check the istio.enabled condition, and if it’s set to true, Gateway and VirtualService resources will be created:
{{- if .Values.istio.enabled }}
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: {{ .Values.appConfig.name }}-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - {{ .Values.appConfig.url }}
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ .Values.appConfig.name }}-virtualservice
spec:
  hosts:
  - {{ .Values.appConfig.url }}
  gateways:
  - {{ .Values.appConfig.name }}-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: {{ .Values.appConfig.name }}-service
        port:
          number: 80
{{- end }}
Ingress
For the Ingress, the ingress.enabled condition first checks whether we need to create the Ingress at all. If true, we then check which namespace to use: if we are using Istio, this Ingress must be created in the istio-system namespace, and if it’s a “common” Ingress, then in the application’s namespace.

Later, with istio.enabled, we check where the traffic will be sent – to Istio or to a common Service of the application:
{{- if .Values.ingress.enabled }}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.appConfig.name }}-alb
  {{- if .Values.istio.enabled }}
  namespace: istio-system
  {{- end }}
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-east-2:534***385:certificate/db886018-c173-43d0-b1be-b940de71f2a2"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    external-dns.alpha.kubernetes.io/hostname: {{ .Values.appConfig.url }}
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              {{- if .Values.istio.enabled }}
              serviceName: istio-ingressgateway
              {{ else }}
              serviceName: {{ .Values.appConfig.name }}-service
              {{- end }}
              servicePort: 80
{{- end }}
Run Helm with --debug and --dry-run – no Ingress must be printed, but a Gateway and VirtualService must be present:
[simterm]
$ helm upgrade --install backend-app-1 --namespace dev-backend-app-1-ns --create-namespace -f app-1/env/dev/values.yaml app-1/ --debug --dry-run
[/simterm]
If no errors here, then deploy it to the Dev cluster:
[simterm]
$ helm upgrade --install backend-app-1 --namespace dev-backend-app-1-ns --create-namespace -f app-1/env/dev/values.yaml app-1/
Release "backend-app-1" does not exist. Installing it now.
NAME: backend-app-1
LAST DEPLOYED: Mon Apr 12 18:03:08 2021
NAMESPACE: dev-backend-app-1-ns
STATUS: deployed
REVISION: 1
TEST SUITE: None
[/simterm]
Check the Ingress in the dev-backend-app-1-ns namespace:
[simterm]
$ kubectl -n dev-backend-app-1-ns get ingress
No resources found.
[/simterm]
Good, and we must see a Gateway and VirtualService created:
[simterm]
$ kubectl -n dev-backend-app-1-ns get gateway
NAME                    AGE
backend-app-1-gateway   38s

$ kubectl -n dev-backend-app-1-ns get virtualservice
NAME                           GATEWAYS                  HOSTS                                         AGE
backend-app-1-virtualservice   [backend-app-1-gateway]   [dev-app-1-test-common-ingress.example.com]   65s
[/simterm]
Good!
Now, deploy to “Production”:
[simterm]
$ helm upgrade --install backend-app-1 --namespace prod-backend-app-1-ns --create-namespace -f app-1/env/prod/values.yaml app-1/
[/simterm]
Check its Ingress in the prod-backend-app-1-ns namespace, as we set it to not use Istio here:
[simterm]
$ kubectl -n prod-backend-app-1-ns get ingress
NAME                CLASS    HOSTS   ADDRESS                                             PORTS   AGE
backend-app-1-alb   <none>   *       aadca942-***-49478225.us-east-2.elb.amazonaws.com   80      4m54s
[/simterm]
For the Dev environment, update the Ingress we created at the very beginning – our shared Ingress. Add the external-dns.alpha.kubernetes.io/hostname annotation here so ExternalDNS will create a record mapped to this LoadBalancer:
...
    # ExternalDNS settings: https://rtfm.co.ua/en/kubernetes-update-aws-route53-dns-from-an-ingress/
    external-dns.alpha.kubernetes.io/hostname: "dev-app-1-test-common-ingress.example.com"
...
Apply:
[simterm]
$ kubectl apply -f common-ingress-gateway.yaml
ingress.extensions/backend-common-alb configured
[/simterm]
Check the Istio Ingress Gateway routes – we must see only the Dev route for now:
[simterm]
$ istioctl proxy-config routes -n istio-system istio-ingressgateway-8459df68cb-bh76b --name http.80
NOTE: This output only contains routes loaded via RDS.
NAME      DOMAINS                                     MATCH   VIRTUAL SERVICE
http.80   dev-app-1-test-common-ingress.example.com   /*      backend-app-1-virtualservice.dev-backend-app-1-ns
[/simterm]
Or in this way:
[simterm]
$ istioctl proxy-config routes -n istio-system istio-ingressgateway-8459df68cb-bh76b --name http.80 -o json | jq '.[].virtualHosts[].domains[0], .[].virtualHosts[].routes[].route.cluster'
"dev-app-1-test-common-ingress.example.com"
"outbound|80||backend-app-1-service.dev-backend-app-1-ns.svc.cluster.local"
[/simterm]
And try to access the application’s URLs.
Prod:
[simterm]
$ curl -I https://prod-app-1-test-common-ingress.example.com
HTTP/2 200
date: Tue, 13 Apr 2021 12:47:15 GMT
content-type: text/html
server: nginx/1.13.8
[/simterm]
server: nginx/1.13.8 – the response was received from NGINX, which means the request was sent via the LoadBalancer directly to the Service of the application and to its Pod.
And Dev:
[simterm]
$ curl -I https://dev-app-1-test-common-ingress.example.com
HTTP/2 200
date: Tue, 13 Apr 2021 12:47:18 GMT
content-type: text/html
server: istio-envoy
[/simterm]
server: istio-envoy – traffic went through the shared LoadBalancer, then to the Istio Ingress Gateway, then to the Envoy proxy sidecar container in the Pod with the application.
And now, let’s check the third available scheme – create a dedicated Ingress, but enable Istio for it.
In the app-1/env/prod/values.yaml, change istio.enabled to true; ingress.enabled is already set to true:
appConfig:
  name: "backend-app-1"
  version: "v1"
  url: "prod-app-1-test-common-ingress.example.com"

istio:
  enabled: true

ingress:
  enabled: true
Update the setup:
[simterm]
$ helm upgrade --install backend-app-1 --namespace prod-backend-app-1-ns --create-namespace -f app-1/env/prod/values.yaml app-1/
[/simterm]
Check Istio Ingress Gateway routes again:
[simterm]
$ istioctl proxy-config routes -n istio-system istio-ingressgateway-8459df68cb-bh76b --name http.80
NOTE: This output only contains routes loaded via RDS.
NAME      DOMAINS                                      MATCH   VIRTUAL SERVICE
http.80   dev-app-1-test-common-ingress.example.com    /*      backend-app-1-virtualservice.dev-backend-app-1-ns
http.80   prod-app-1-test-common-ingress.example.com   /*      backend-app-1-virtualservice.prod-backend-app-1-ns
[/simterm]
Yup, we’ve got a new route to the Production backend now.
Check the Ingress in the prod-backend-app-1-ns namespace:
[simterm]
$ kubectl -n prod-backend-app-1-ns get ingress backend-app-1-alb
Error from server (NotFound): ingresses.extensions "backend-app-1-alb" not found
[/simterm]
Good, and check the istio-system namespace:
[simterm]
$ kubectl -n istio-system get ingress backend-app-1-alb
NAME                CLASS    HOSTS   ADDRESS                                               PORTS   AGE
backend-app-1-alb   <none>   *       aadca942-***-1554475105.us-east-2.elb.amazonaws.com   80      8m52s
[/simterm]
Try with curl:
[simterm]
$ curl -I https://prod-app-1-test-common-ingress.example.com
HTTP/2 200
date: Tue, 13 Apr 2021 13:14:34 GMT
content-type: text/html
server: istio-envoy
[/simterm]
server: istio-envoy – great! Our traffic now goes through Istio.
Istio and ExternalDNS
And the last thing is to use ExternalDNS with Istio.
Currently, when using a shared Ingress and LoadBalancer, we can specify a Host/URL in the annotations of this Ingress, but this Ingress is not affected by the applications’ Helm charts, as it’s created from a dedicated manifest, common-ingress-gateway.yaml.
So, to be able to create DNS records during application deployments, we would need to update the annotations of the shared Ingress, and this leads to additional automation and complexity.
Instead, we can configure ExternalDNS so that it uses not only Ingress annotations but Istio’s resources as well.
Let’s update its Deployment and add --source=istio-gateway and/or --source=istio-virtualservice, see the documentation here>>>:
...
      containers:
      - args:
        - --log-level=info
        - --log-format=text
        - --events
        - --policy=upsert-only
        - --provider=aws
        - --registry=txt
        - --interval=2m
        - --source=service
        - --source=ingress
        - --source=istio-gateway
        - --source=istio-virtualservice
...
From the common-ingress-gateway.yaml remove the line:
...
    external-dns.alpha.kubernetes.io/hostname: "dev-app-1-test-common-ingress.example.com"
...
Now, a hostname will be taken from a Gateway and/or a VirtualService – from spec.servers.hosts for a Gateway, or spec.hosts for a VirtualService.
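For reference, these are the fields ExternalDNS will read – a fragment of the manifests shown earlier, with the relevant lines commented:

```yaml
kind: Gateway
spec:
  servers:
    - hosts:
        - "dev-app-1-test-common-ingress.example.com"   # read by --source=istio-gateway
---
kind: VirtualService
spec:
  hosts:
    - "dev-app-1-test-common-ingress.example.com"       # read by --source=istio-virtualservice
```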
Also, check that ExternalDNS is able to read Istio resources via its ClusterRole external-dns:
...
- apiGroups:
  - networking.istio.io
  resources:
  - virtualservices
  - gateways
  verbs:
  - get
  - list
  - watch
...
ExternalDNS doesn’t create records for Istio Gateway and VirtualService: “No endpoints could be generated”
But here I’ve faced an issue.
Enable --debug in the ExternalDNS Deployment and check its logs:
[simterm]
...
time="2021-04-14T12:53:05Z" level=debug msg="Adding event handler for Istio VirtualService"
time="2021-04-14T12:53:05Z" level=debug msg="Adding event handler for Istio Gateway"
time="2021-04-14T12:53:05Z" level=debug msg="Adding event handler for service"
time="2021-04-14T12:53:05Z" level=debug msg="Adding event handler for ingress"
...
level=debug msg="No endpoints could be generated from ingress istio-system/backend-common-alb"
...
[/simterm]
Handlers for Istio were created, so ExternalDNS can see Istio’s updates, but it cannot create new records.
This happens because the Istio Ingress Gateway Service is created with the NodePort type (to make it work with the shared LoadBalancer instead of the default LoadBalancer type), so ExternalDNS cannot parse a VirtualService to determine Kubernetes endpoints for the external Ingress created from the common-ingress-gateway.yaml manifest.
We can “fix” it with a small hack: specify the ALB’s URL directly in the VirtualService annotations.
Still, remember that in some cases a VirtualService will be created with a dedicated Ingress/ALB, and then we don’t need to add this annotation.
Therefore, add a new condition to the VirtualService manifest – if not .Values.ingress.enabled:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ .Values.appConfig.name }}-virtualservice
  {{- if not .Values.ingress.enabled }}
  annotations:
    external-dns.alpha.kubernetes.io/target: {{ .Values.istio.externalURL }}
  {{- end }}
...
And set a value for istio.externalURL in the app-1/env/dev/values.yaml file – it will be persistent enough, and will be used for the Dev environment only:
...
istio:
  enabled: true
  externalURL: "aadca942-istiosystem-backe-3ee2-700661912.us-east-2.elb.amazonaws.com"
...
Altogether, the Service, Gateway, and VirtualService templates will look like this:
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.appConfig.name }}-service
spec:
  {{- if .Values.ingress.enabled }}
  type: NodePort
  {{- end }}
  selector:
    app: {{ .Values.appConfig.name }}
  ports:
    - name: http
      protocol: TCP
      port: 80

{{- if .Values.istio.enabled }}
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: {{ .Values.appConfig.name }}-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ .Values.appConfig.name }}-virtualservice
  {{- if not .Values.ingress.enabled }}
  annotations:
    external-dns.alpha.kubernetes.io/target: {{ .Values.istio.externalURL }}
  {{- end }}
spec:
  hosts:
  - {{ .Values.appConfig.url | quote }}
  gateways:
  - {{ .Values.appConfig.name }}-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: {{ .Values.appConfig.name }}-service
        port:
          number: 80
{{- end }}
Deploy it and check the ExternalDNS logs:
[simterm]
...
time="2021-04-14T13:05:00Z" level=info msg="Desired change: CREATE dev-app-1-test-common-ingress.example.com A [Id: /hostedzone/Z30KLN6M3D0LB6]"
time="2021-04-14T13:05:00Z" level=info msg="Desired change: CREATE dev-app-1-test-common-ingress.example.com TXT [Id: /hostedzone/Z30KLN6M3D0LB6]"
...
[/simterm]
Try again with curl:
[simterm]
$ curl -I https://dev-app-1-test-common-ingress.example.com
HTTP/2 200
date: Wed, 14 Apr 2021 13:21:11 GMT
content-type: text/html
server: istio-envoy
[/simterm]
All done.