Let’s proceed with our Kubernetes journey.
- Kubernetes: part 1 – architecture and main components overview
- Kubernetes: part 2 – a cluster set up on AWS with AWS cloud-provider and AWS LoadBalancer
In this part we will start working with AWS Elastic Kubernetes Service (EKS): a short overview, then we will create the Kubernetes Control Plane and a CloudFormation stack with Worker Nodes, spin up a simple web service, and add a LoadBalancer.
Elastic Kubernetes Service – an overview
AWS EKS is a Kubernetes cluster whose core – the Control Plane – is managed by AWS itself, freeing the user from that maintenance burden.
- Control Plane: managed by AWS, consists of three EC2 instances in different Availability Zones
- Worker Nodes: regular EC2 instances in an AutoScaling group, running in the customer's VPC and managed by the user
A network overview:
For networking, the amazon-vpc-cni-k8s plugin is used, which allows Pods to use AWS ENIs (Elastic Network Interfaces) and the VPC's network space inside the cluster.
Also, AWS manages Kubernetes patch upgrades, e.g. 1.11.5 to 1.11.8, but minor version upgrades (e.g. 1.11 to 1.12) still must be initiated by the user.
Preparing AWS environment
To create an EKS cluster, first we need to create a dedicated VPC with subnets, configure routing, and add an IAM role for cluster authorization.
Go to the IAM, create a new role with EKS type:
Permissions will be filled by AWS itself:
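The same role can be created from the AWS CLI; a minimal sketch, assuming the role name eks-cluster-manual-role (any name will do):

```shell
# create a role that the EKS service is allowed to assume
aws iam create-role \
  --role-name eks-cluster-manual-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# the managed policies AWS attaches for the EKS role type
aws iam attach-role-policy \
  --role-name eks-cluster-manual-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy \
  --role-name eks-cluster-manual-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy
```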
Next, we have to create a VPC with 4 subnets – two public ones for the LoadBalancer and two private ones for the Worker Nodes.
Create a VPC:
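The same can be done from the CLI; a sketch – the 10.0.0.0/16 block is chosen to fit the /18 subnets used below, and vpc-xxxxxxxx is a placeholder for the ID returned by the first call:

```shell
# create the VPC and note its ID from the output
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# enable DNS hostnames – Worker Nodes rely on them
aws ec2 modify-vpc-attribute --vpc-id vpc-xxxxxxxx --enable-dns-hostnames
```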
Go to the SecurityGroups, create a new one for the cluster:
Add desired rules, here just an Allow All to All example:
Create an IGW which will be used to route traffic from public subnets:
Attach it to the VPC:
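The CLI equivalent of these two steps (IDs are placeholders):

```shell
# create the Internet Gateway and note its ID
aws ec2 create-internet-gateway

# attach it to our VPC
aws ec2 attach-internet-gateway \
  --internet-gateway-id igw-xxxxxxxx \
  --vpc-id vpc-xxxxxxxx
```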
Pods will use IPs from the subnet they are allocated in (see amazon-vpc-cni-k8s), so those subnets must have enough address space.
Create a first public subnet using 10.0.0.0/18 block (16384 addresses):
A second public subnet using 10.0.64.0/18 block:
In public subnets – enable auto-assign public IPs to EC2s:
Similarly, add two private subnets:
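All four subnets sketched via the CLI – the public CIDRs are from this post, while the private CIDRs and the AZ names are assumptions (pick any free /18 blocks in your VPC):

```shell
# two public subnets in different AZs (AZ names are examples)
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/18  --availability-zone eu-west-2a
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.64.0/18 --availability-zone eu-west-2b

# two private subnets – these CIDR blocks are assumed
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.128.0/18 --availability-zone eu-west-2a
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.192.0/18 --availability-zone eu-west-2b

# enable auto-assign public IPs in the public subnets only
aws ec2 modify-subnet-attribute --subnet-id subnet-pub1 --map-public-ip-on-launch
aws ec2 modify-subnet-attribute --subnet-id subnet-pub2 --map-public-ip-on-launch
```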
In a public subnet create a NAT Gateway – it will be used to route traffic from private subnets:
And configure routing here:
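A NAT Gateway needs an Elastic IP and must live in a public subnet; a CLI sketch with placeholder IDs:

```shell
# allocate an Elastic IP for the NAT Gateway
aws ec2 allocate-address --domain vpc

# create the NAT Gateway in one of the PUBLIC subnets
aws ec2 create-nat-gateway \
  --subnet-id subnet-pub1 \
  --allocation-id eipalloc-xxxxxxxx
```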
Now we need to create two Route Tables – one for the public and one for the private subnets.
Public route table
Create public subnets route table:
Edit routes – set route to the 0.0.0.0/0 via IGW created above:
Switch to the Subnet association – attach two public subnets to this RTB:
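The public Route Table steps, sketched via the CLI (IDs are placeholders):

```shell
# create the route table in our VPC
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx

# default route to the Internet via the IGW
aws ec2 create-route \
  --route-table-id rtb-public \
  --destination-cidr-block 0.0.0.0/0 \
  --gateway-id igw-xxxxxxxx

# attach both public subnets
aws ec2 associate-route-table --route-table-id rtb-public --subnet-id subnet-pub1
aws ec2 associate-route-table --route-table-id rtb-public --subnet-id subnet-pub2
```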
Private route table
In the same way, create RTB for private subnets:
Add another route to the 0.0.0.0/0 but via NAT GW instead of IGW:
Go back to your subnets – Edit route table association:
Attach our private RTB to the private subnets so they will use NAT GW:
Attach our public RTB to the public subnets so they will use the Internet GW:
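And the same for the private Route Table – the only difference is the NAT Gateway as the default route target (IDs are placeholders):

```shell
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx

# default route via the NAT GW instead of the IGW
aws ec2 create-route \
  --route-table-id rtb-private \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-xxxxxxxx

aws ec2 associate-route-table --route-table-id rtb-private --subnet-id subnet-priv1
aws ec2 associate-route-table --route-table-id rtb-private --subnet-id subnet-priv2
```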
To test if this VPC is working – run two EC2 instances.
First in the public subnet:
Set Security Group:
Add another EC2, in the private subnet:
Do not forget about SG.
And try to ping it from the first instance (as we can't ping instances in private networks from the Internet):
If there is no reply to the ping – check your Security Groups and Route Tables first.
And we are done here – time to start with EKS itself.
Elastic Kubernetes Service
Create a Control Plane
Go to the EKS and create master-nodes – click the Create cluster:
Set a name, choose the IAM role created at the very beginning:
In subnets, choose the private subnets only and set the SecurityGroup created above:
Note: although EKS labels these as "subnets for your Worker Nodes", they are also used by services like an AWS Load Balancer, which needs public subnets. So you can choose all subnets here: EKS will pick the public subnets for the ALB and the private ones for the EC2 instances.
Enable logs if needed:
And create the cluster:
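The same cluster can be created via the AWS CLI; a sketch – the account ID, subnet IDs, and SecurityGroup ID are placeholders:

```shell
aws eks create-cluster \
  --name eks-cluster-manual \
  --role-arn arn:aws:iam::111111111111:role/eks-cluster-manual-role \
  --resources-vpc-config subnetIds=subnet-priv1,subnet-priv2,securityGroupIds=sg-xxxxxxxx

# poll until the status changes from CREATING to ACTIVE
aws eks describe-cluster --name eks-cluster-manual --query cluster.status
```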
Create Worker Nodes
While the Control Plane is in the provisioning state – let’s create a CloudFormation stack for the Worker Nodes.
You can take an existing template from AWS – https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml.
Go to the CloudFormation > Create stack:
As our Worker Nodes will be placed in the private subnets, open this template in the Designer, find the AssociatePublicIpAddress parameter, and change its value from true to false:
Click Create stack:
Set the stack's name (can be anything) and the cluster name – the same one we used when creating the Master nodes, e.g. eks-cluster-manual in this example – then choose the SecurityGroup and fill in the AutoScale settings:
NodeImageId depends on the region – check the documentation for the up-to-date list of Amazon EKS-optimized AMIs (standard and with GPU support), published for each supported region: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), and EU (Stockholm).
Currently the stack is being created in London/eu-west-2 with no need for GPU support, thus – ami-0147919d2ff9a6ad5 (Amazon Linux).
Set this AMI ID, select VPC and two subnets:
Click Next, skip the next page and click Create stack:
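The same stack can be created from the CLI; a sketch – the parameter names follow the amazon-eks-nodegroup.yaml template linked above, and all IDs, the key name, and the stack name are placeholders:

```shell
aws cloudformation create-stack \
  --stack-name eks-cluster-manual-workers \
  --template-body file://amazon-eks-nodegroup.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=ClusterName,ParameterValue=eks-cluster-manual \
    ParameterKey=NodeGroupName,ParameterValue=eks-cluster-manual-workers \
    ParameterKey=NodeImageId,ParameterValue=ami-0147919d2ff9a6ad5 \
    ParameterKey=KeyName,ParameterValue=my-key \
    ParameterKey=VpcId,ParameterValue=vpc-xxxxxxxx \
    ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=sg-xxxxxxxx \
    ParameterKey=Subnets,ParameterValue=\"subnet-priv1,subnet-priv2\"
```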
While we were working on the Worker Nodes, our EKS cluster was provisioned, so we can install kubectl on a working machine.
Download an executable file:
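The download usually looks like this – the version here is only an example, any recent kubectl close to the cluster's version works:

```shell
# download the kubectl binary (version is an example)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl

# verify the client works
kubectl version --client
```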
To create its config file – use AWS CLI:
Add an alias just to make work simpler:
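Both steps sketched together, using the cluster name and region from this example:

```shell
# generate the ~/.kube config entry for the cluster
aws eks update-kubeconfig --name eks-cluster-manual --region eu-west-2

# a short alias saves typing
alias kk="kubectl"
kk get svc
```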
Although the CloudFormation stack for the Worker Nodes is ready and its EC2 instances are up and running, we still can't see them as Nodes in the Kubernetes cluster:
Download AWS authenticator:
Go to IAM > Roles, find the ARN (Amazon Resource Name) of the role (NodeInstanceRole):
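The missing piece is the aws-auth ConfigMap, which maps the Worker Nodes' IAM role to Kubernetes groups so the kubelets can join the cluster. A sketch with a placeholder account ID and role name – substitute the NodeInstanceRole ARN found above – applied with kubectl apply -f aws-auth-cm.yaml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # the NodeInstanceRole ARN from the CloudFormation stack's resources
    - rolearn: arn:aws:iam::111111111111:role/eks-cluster-manual-workers-NodeInstanceRole-XXXX
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

A minute or so after applying it, kubectl get nodes should list the Worker Nodes in the Ready state.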
Nodes were added to the cluster – great.
Web-app && LoadBalancer
And for testing purposes – let's create a simple web service, for example, a common NGINX, as in the previous chapter.
To access NGINX, let's also create a LoadBalancer in Kubernetes and AWS, which will proxy requests to the Worker Nodes:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: eks-cluster-manual-elb
spec:
  type: LoadBalancer
  selector:
    app: eks-cluster-manual-pod
  ports:
    - name: http
      protocol: TCP
      # ELB's port
      port: 80
      # container's port
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-cluster-manual-deploy
spec:
  # ReplicaSet pods config
  replicas: 1
  # pods selector
  selector:
    matchLabels:
      app: eks-cluster-manual-pod
  # Pod template
  template:
    metadata:
      # a pod's labels
      labels:
        app: eks-cluster-manual-pod
    spec:
      containers:
        - name: eks-cluster-manual-app
          image: nginx
```
The LoadBalancer in AWS (it takes about 5 minutes to spin up the Pods and attach the Nodes to the ELB):
And test the URL provided by AWS or by the kubectl get svc command:
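Putting it together – assuming the manifest above is saved as eks-cluster-manual.yaml:

```shell
# deploy the Service and the Deployment
kubectl apply -f eks-cluster-manual.yaml

# the EXTERNAL-IP column shows the ELB's DNS name
kubectl get svc eks-cluster-manual-elb

# then hit the ELB (substitute the DNS name from the output above)
curl http://<ELB-DNS-name>
```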