AWS: Fargate – capabilities, comparison with Lambda/EC2 and usage with AWS EKS


AWS Fargate is another serverless solution from Amazon that takes over infrastructure management, saving the user from spending time on configuring EC2 instances, operating systems, container management systems, and so on.

In general, when I was getting acquainted with Fargate, I found a great video from AWS re:Invent 2022 which explains (and shows) the Shared Responsibility model in AWS very well – which parts of the system Amazon is responsible for, and which the user is responsible for – so I highly recommend watching it: AWS re:Invent 2022 – A close look at AWS Fargate and AWS App Runner.

If you look at the diagram from this video, Fargate’s role is well represented there:

That is, AWS takes care of everything related to the servers and the operating system and its components, while we only have to create and run a container.

At the same time, AWS Fargate can be used together with AWS Elastic Container Service or AWS Elastic Kubernetes Service, and today we will see how to make it work with AWS EKS.

AWS Fargate vs AWS Lambda

The very first question I had when I started reading about Fargate was: why, if there is already AWS Lambda? What is the difference? By the way, the video also covers AWS App Runner – another serverless service from Amazon – but that's not today's topic (and, outside of AWS, there is also Knative).

Functionality

So, conceptually, AWS Fargate is CaaS, i.e. Container as a Service, while AWS Lambda is FaaS, i.e. Function as a Service: to work with Fargate, you need a built Docker image (or any other image conforming to the Open Container Initiative specification), while to work with AWS Lambda you only need the code – Lambda itself will “pack” it into a container and run it.

Also, with Fargate you have to configure the autoscaling of containers yourself, while Lambda does it automatically. In addition, containers in Fargate will not scale to zero when there is no work – for that, you need to stop Fargate tasks yourself (or simply scale Pods to zero in Kubernetes, as shown below), while Lambda functions stop as soon as the events that trigger them stop coming.
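For example, if such workloads run in EKS, “scaling to zero” is just a matter of scaling the corresponding Deployment. A minimal sketch, assuming a hypothetical Deployment named my-app in the default namespace:

[simterm]

$ kubectl scale deployment my-app --replicas=0
deployment.apps/my-app scaled

[/simterm]

Scale it back with --replicas=1 (or whatever count you need) when the work returns – Fargate will provision capacity for the Pods again.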

Cost and payment for the service

At the same time, both services have the same payment model – pay-as-you-go, that is, you pay only for the time when your container or function is running, although there are differences: in Fargate, you are billed per second for the CPU/RAM consumed, whereas with Lambda you pay for each function invocation and for the time it runs. See AWS Fargate Pricing.

Use Cases

Fargate is well-suited for long-running tasks, where you have more options to customize the runtime environment, fewer restrictions on CPU/RAM and disk storage, and less strict limits on the size of data you can send and receive.

Lambda, on the other hand, lets you ship code faster (because you don’t need to build a container image), gives you auto-scaling and monitoring out of the box, and is a good fit for short-lived tasks.

AWS Fargate vs AWS EKS EC2 Node Groups

While launching the first Pods in EKS on Fargate, another question arose – well, what about EC2?

In this comparison, Fargate has the following advantages:

  • faster scaling
  • can be a more cost-effective solution than EC2
  • no need for security patches (although Managed Node Groups apparently also install patches themselves)

Disadvantages of Fargate:

  • less control over the infrastructure
  • sometimes an EC2 solution can be more cost-effective
  • restrictions on CPU/Memory and instance types (for example, it is not possible to use GPU)

With EC2, you have more control over the infrastructure and instance types (GPU, network, etc.), but it requires more engineering work (provisioning, server maintenance, monitoring), and you pay for the servers regardless of whether any workloads are running on them.

In general, you can simply have both Node Groups and Fargate for different Pods in the same EKS cluster.

Amazon EKS and AWS Fargate

So, in EKS, our Pods have to run on some servers. Usually, Node Groups (Managed and/or Self-managed) are used for this, which are regular AWS EC2 instances, but instead of virtual machines we can use AWS Fargate, see Amazon EKS nodes.

Which Pods will be launched on Fargate is configured in Fargate profiles, which are part of your EKS cluster. EKS itself integrates with Fargate through controllers that are built into the EKS service and run on its control plane. In addition, Fargate has a separate scheduler for running Pods – the fargate-scheduler (as opposed to the default-scheduler, which is responsible for running Pods on EC2 instances).

See AWS Fargate.
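Looking ahead a bit: once a Pod is actually running on Fargate (we will get there below), you can check which scheduler picked it up – as far as I understand, the Fargate admission logic sets the Pod’s schedulerName, so it can be seen directly in the Pod spec (the nginx Pod name here is just an example):

[simterm]

$ kubectl get pod nginx -o jsonpath='{.spec.schedulerName}'
fargate-scheduler

[/simterm]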

Fargate in EKS considerations

When planning to use AWS Fargate with your EKS cluster, keep in mind its limitations – for example, Fargate Pods run only in private subnets, and the cluster’s VPC must have DNS hostnames and DNS resolution enabled (we will hit this requirement below).

See the full list in AWS Fargate considerations.

Creating an EKS cluster

Here we will do everything by “clickops”, and another time we will deploy EKS using Terraform or AWS CDK.

EKS cluster IAM role

First, we need an IAM role through which the future cluster will communicate with Amazon services, see Amazon EKS cluster IAM role.

Go to IAM, click Create role, leave “AWS service” in Trusted entity type, and select EKS – Cluster from the list in Use case:

On the Add permissions page, leave it as default, and click Next:

Next, set the name of the role and press Create Role:
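If you prefer to avoid clickops, the same role can also be created with the AWS CLI. A minimal sketch – the role name eks-cluster-role is just an example, and the inline trust policy simply allows the EKS service to assume the role:

[simterm]

$ aws iam create-role \
    --role-name eks-cluster-role \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
$ aws iam attach-role-policy \
    --role-name eks-cluster-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

[/simterm]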

Creating a VPC and Subnets

I haven’t created networking in AWS Console for a few years… Especially by hand.

Okay, what do we need:

  • VPC
  • divide it into several subnets in different AvailabilityZones
    • some public, with an Internet Gateway – for AWS LoadBalancers
    • some private – Kubernetes Pods will live here, and we will create a NAT Gateway for them
  • and there was also something with SecurityGroups

Let’s go.

And – oh… here we go: the VPC creation interface has been reworked really nicely…

Go to the VPC dashboard, create a new network, and select VPC and more – I was just curious to try it, and it does look really convenient: now we can create everything at once, instead of spending half an hour clicking around the VPC dashboard and then figuring out where we messed up in the route tables.

So, create a VPC with subnets in each Availability Zone: public subnets with an Internet Gateway, and private subnets with a NAT Gateway in each Availability Zone (more expensive, but more reliable if this is done for Production). Also, let’s add a VPC S3 endpoint – it is not needed now, but in general it is a very useful thing in terms of security and cost-effectiveness:

Below, check that DNS hostnames and DNS resolution are enabled – this is a requirement for Fargate to work (described in the same AWS Fargate considerations, if you missed it):

Click Create.

Well, they did it really cool!

Wait for the creation of the resources:
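Once the VPC is created, you can double-check the DNS attributes from the CLI as well – a quick sketch, where the VPC ID is a placeholder (if either attribute turns out to be disabled, it can be fixed with aws ec2 modify-vpc-attribute):

[simterm]

$ aws ec2 describe-vpc-attribute --vpc-id vpc-0a1b2c3d4e5f67890 --attribute enableDnsSupport
$ aws ec2 describe-vpc-attribute --vpc-id vpc-0a1b2c3d4e5f67890 --attribute enableDnsHostnames

[/simterm]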

Creating an EKS cluster

Go to the Elastic Kubernetes Service, click Add cluster > Create:

Specify the name of the cluster, select the IAM role created above, and click Next (Secrets encryption is something new, I need to test it sometime):

Next, select our VPC – the subnets will be pulled up automatically; select a SecurityGroup below – for now, you can use the default one from our VPC:

Next, configure access to our cluster.


AWS Fargate Pod hangs in Pending status

Looking ahead a little – about a possible problem. After setting up almost everything, the test Pod hung in the Pending state with the following messages:

[simterm]

$ kubectl describe pod nginx
Name:                 nginx
Namespace:            default
...
Labels:               eks.amazonaws.com/fargate-profile=fargate-test-eu-central-1-profile
...
Status:               Pending
...
  Warning  LoggingDisabled   11m    fargate-scheduler  Disabled logging because aws-logging configmap was not found. configmap "aws-logging" not found
  Warning  FailedScheduling  8m48s  fargate-scheduler  Pod provisioning timed out (will retry) for pod: default/nginx

[/simterm]

I checked everything and googled it – Fargate profiles, the EKS SecurityGroup, NAT Gateways in the subnets – everything was correct everywhere. WTF?

It turned out that the cause was in a step I describe a little below in this post:

Actually, of course, it is desirable to disable public access to the API and go through a private endpoint, for example, through a VPN. But for now, let’s enable both variants – public and private – and just set a limit on the IP.

I rushed further and did not switch access to the cluster from the default value of Public to Public and private, which led to the problem.

And this is stated in the first lines of the documentation, and I even quoted it when describing the process:

Without the private endpoint enabled, the CIDR blocks that you specify for public access must include the outbound sources from your VPC.

But I simply missed switching the option and spent half an hour trying to understand where the problem was.
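By the way, if you hit the same issue after the cluster is already created, there is no need to re-create anything – the endpoint access can be switched with the AWS CLI as well. A sketch with the cluster name from this post (the update is asynchronous and takes a few minutes):

[simterm]

$ aws --profile setevoy eks update-cluster-config \
    --name fargate-test-eu-central-1-cluster \
    --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true

[/simterm]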

Okay, let’s move on – this time it’s right.


So, of course, it is desirable to disable public access to the Kubernetes API and go through a private endpoint, for example, through a VPN. But for now, let’s enable both options – public and private – and just set a limit to my home IP.

Find the IP:

[simterm]

$ curl ifconfig.me
217.***.***.253

[/simterm]

Add it to the allowed list:

Next, configure logging. Here, too, it is desirable to enable, if not all, then at least the API server, Audit, Authenticator, and Scheduler logs:
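If you miss this step during creation, control plane logging can also be enabled later with the AWS CLI – a sketch, assuming the cluster name used in this post:

[simterm]

$ aws --profile setevoy eks update-cluster-config \
    --name fargate-test-eu-central-1-cluster \
    --logging '{"clusterLogging":[{"types":["api","audit","authenticator","scheduler"],"enabled":true}]}'

[/simterm]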

In the next step, choose Add-ons, where you can also enable GuardDuty:

On the next page, I don’t see anything that needs to be changed – leave it as is, and click Next:

And finally, the last step is a review – check everything and create the cluster:

It used to take 15-20 minutes, but there was news that Amazon has sped up the process.

kubectl configuration

To check that everything is fine with the cluster, and for further testing, add a new context to your ~/.kube/config.

Run AWS CLI with the update-kubeconfig command:

[simterm]

$ aws --profile setevoy eks update-kubeconfig --name fargate-test-eu-central-1-cluster --alias setevoy-fargate-test-eu-central-1-cluster
Added new context setevoy-fargate-test-eu-central-1-cluster to /home/setevoy/.kube/config

[/simterm]

Check:

[simterm]

$ kk get pod --all-namespaces
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-cbbbbb9cb-2hhx2   0/1     Pending   0          18h
kube-system   coredns-cbbbbb9cb-4xf2w   0/1     Pending   0          18h

[/simterm]

So far, we have only two Pods with CoreDNS, currently in the Pending state, because there are neither Worker Nodes nor Fargate profiles yet.

Let’s go to Fargate.

Running AWS Fargate

Now that we have an EKS cluster, it’s time to connect Fargate.

See the documentation on Getting started with AWS Fargate using Amazon EKS; the first thing to pay attention to there is the note about Security Groups on existing Worker Nodes and the cluster endpoint access:

If you restrict access to the public endpoint of your cluster using CIDR blocks, we recommend that you also enable private endpoint access. This way, Fargate pods can communicate with the cluster. Without the private endpoint enabled, the CIDR blocks that you specify for public access must include the outbound sources from your VPC.

In our case, there are no Node Groups, so we can continue.

EKS Pod execution IAM Role

First, we need to create an IAM role that will allow Kubernetes Pods on Fargate to communicate with Amazon services, see Amazon EKS pod execution IAM role.

Go to IAM > Roles > Create role.

This time we choose EKS – Fargate pod:

Click Next; on the Add permissions page leave everything as it is (you can review the permissions here), go further, enter a name, and click Create role:

After creation, find the role, go to the Trust relationships tab, and edit its trust policy:

Describe a new trust policy for this role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Condition": {
         "ArnLike": {
            "aws:SourceArn": "arn:aws:eks:eu-central-1:26***286:fargateprofile/fargate-test-eu-central-1-cluster/*"
         }
      },
      "Principal": {
        "Service": "eks-fargate-pods.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Click Update policy, and proceed to the creation of a Fargate profile.
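By the way, the CLI equivalent of these two steps (creating the role and attaching the managed policy) could look like this – a sketch, where pod-execution-trust.json is a hypothetical local file containing the trust policy shown above:

[simterm]

$ aws iam create-role \
    --role-name AmazonEKSFargatePodExecutionRole-fargate-test-eu-central-1 \
    --assume-role-policy-document file://pod-execution-trust.json
$ aws iam attach-role-policy \
    --role-name AmazonEKSFargatePodExecutionRole-fargate-test-eu-central-1 \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy

[/simterm]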

Creating Fargate profile

The Fargate profile describes exactly which Pods from the Kubernetes cluster will run in Fargate, see AWS Fargate profile.

To create a profile, go to EKS, choose your cluster, and on the Compute tab at the bottom find Add Fargate profile:

Set a name; the IAM role will be picked up automatically, as well as the private subnets of our VPC:

Pod selectors describe which Pods from which namespaces will be launched with this Fargate profile.

For the test, let’s select all of them, but in general you can create different profiles for different types of Pods, with a selection by namespaces and/or labels added to the Pods (see the CLI sketch below):
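For reference, the same profile could also be created with the AWS CLI – a sketch, where the subnet IDs are placeholders, the role ARN is the one created above, and the selectors pick up Pods from the default and kube-system namespaces (a labels={key=value} part can be added to a selector to filter by Pod labels):

[simterm]

$ aws --profile setevoy eks create-fargate-profile \
    --cluster-name fargate-test-eu-central-1-cluster \
    --fargate-profile-name fargate-test-eu-central-1-profile \
    --pod-execution-role-arn arn:aws:iam::264***286:role/AmazonEKSFargatePodExecutionRole-fargate-test-eu-central-1 \
    --subnets subnet-0aaa111 subnet-0bbb222 \
    --selectors namespace=default namespace=kube-system

[/simterm]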

Next, check that everything is correct and create a profile:

Creation took 5 minutes.

Running Kubernetes Pods on Fargate

So, for now, we only have two Pods with the CoreDNS service, which are in Pending status.

In order for the CoreDNS Pods to start on Fargate, edit their Deployment and remove the annotation eks.amazonaws.com/compute-type: ec2:
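If you prefer the command line, this can be done with a JSON patch followed by a rollout restart – a sketch based on the approach from the AWS documentation (skip it if your CoreDNS Deployment does not have this annotation at all):

[simterm]

$ kubectl patch deployment coredns \
    -n kube-system \
    --type json \
    -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
$ kubectl rollout restart -n kube-system deployment coredns

[/simterm]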

 

And check in a minute or two:

[simterm]

$ kubectl -n kube-system get pod
NAME                       READY   STATUS    RESTARTS   AGE
coredns-75694977b5-m7smc   0/1     Pending   0          5m48s
coredns-75694977b5-tgwdl   1/1     Running   0          50s

[/simterm]

One of them is already running, good.

And after launching the first Pods, on the Compute tab of our cluster, we should see the Fargate nodes in the EKS cluster:

While the second Pod with CoreDNS is starting, let’s add a regular test Pod with Nginx to see how it works:

[simterm]

$ kubectl run nginx --image=nginx
pod/nginx created

[/simterm]

And in a couple of minutes check the Pods:

[simterm]

$ kubectl get pod --all-namespaces
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
default       nginx                      1/1     Running   0          2m27s
kube-system   coredns-75694977b5-r8d6b   1/1     Running   0          46s
kube-system   coredns-75694977b5-tgwdl   1/1     Running   0          5m52s

[/simterm]

And Fargate nodes:

Regarding the “Disabled logging because aws-logging configmap was not found” event – you can configure logging separately, see Fargate logging (and the sketch below), but for now it does not affect Pod startup.
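If you want to get rid of that warning, the idea (per the Fargate logging documentation) is to create an aws-observability Namespace with an aws-logging ConfigMap holding a Fluent Bit output configuration. A minimal sketch, assuming CloudWatch Logs as the destination – the region and log group name here are illustrative, and the Pod execution role will also need permissions to write to CloudWatch:

kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region eu-central-1
        log_group_name /eks/fargate-test/fluent-bit-logs
        log_stream_prefix fargate-
        auto_create_group true

Apply it with kubectl apply -f and re-create the Pods so that the log router picks up the new configuration.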

Another interesting thing to know is that the IAM role for our Pods is added to the cluster’s aws-auth ConfigMap:

[simterm]

$ kk -n kube-system get cm aws-auth -o yaml
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      - system:node-proxier
      rolearn: arn:aws:iam::264***286:role/AmazonEKSFargatePodExecutionRole-fargate-test-eu-central-1
      username: system:node:{{SessionName}}

[/simterm]

That seems to be all for now – you can try using Fargate instead of regular EC2 instances.

Useful links