But when my team’s backend developer performed the aws eks update-kubeconfig command to configure the kubectl tool on his macOS and made the first attempt to connect to this cluster – he got the following error:
root@ip-10-0-42-255:~# kubectl get nodes
error: You must be logged in to the server (Unauthorized)
Both users – mine, as I created this cluster using my own IAM user, and our backend developer’s – have FullAdmin grants on their AWS accounts, so what could be wrong here?
Well, let’s dive a bit into EKS and AWS authentication and authorization process.
In this topic we will speak about:
authentication vs authorization
modules and the authentication process in AWS EKS with aws-iam-authenticator and AWS IAM
And in the next part we will see what RBAC in Kubernetes is and how it is used for user authorization.
Authentication vs Authorization
At first, let’s quickly check the difference between a user’s authentication and authorization.
Authentication is the process by which a client has to prove to a server that it is who it claims to be.
For example, after we create an EKS cluster, its Worker Node instances are started. They have to make a call to the EKS API server to join the cluster.
During that, the API server has to be able to check that the client asking to act as a Worker Node really is one and has the permissions for this.
For this, Kubernetes has authentication modules, or authenticators: when the API server receives a request from a client – whether it is the kubectl tool, a new Worker Node, or just an HTTP request made with curl – Kubernetes will ask one of its configured authenticators to validate this client.
All available modules can be found in the documentation, but for now we are interested in the one used in the AWS Elastic Kubernetes Service – the aws-iam-authenticator, which works together with AWS IAM to check users.
After our client passes the authentication process, Kubernetes has to check whether this user is allowed to perform exactly the action requested, for example – whether it has permissions to execute the kubectl get pods command.
If we recall AWS IAM policies, we can see permissions granted for particular API calls, for example:
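The policy snippet itself is not shown here, so below is a minimal illustrative example (the bucket name is made up), allowing only two S3 actions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```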
Here in the Action, we are setting permissions for the user this policy will be attached to. If this user tries to execute the “s3:DeleteObject” call, AWS will decline it, as this action is not allowed in the policy for this user (and the whole of AWS works via API calls, in the same way as Kubernetes does).
Kubernetes will pass a user already validated by an authentication module to an authorization module to check its permissions, and after that the API server will decide whether it should perform the action requested by the client.
In the post below we will see how the authentication process works in AWS EKS with AWS IAM, and in the next one we will speak about authorization and RBAC in Kubernetes.
AWS EKS Authentication and Authorization
The process is nicely illustrated in the picture below:
AWS EKS IAM Authentication
For authentication, AWS EKS uses web tokens – see Webhook Token Authentication: a client passes a specially formatted token which includes a user’s identifier.
In the case of AWS EKS, this identifier can be an IAM user’s or IAM role’s ARN (Amazon Resource Name).
The Kubernetes authenticator, in its turn, will pass the identifier extracted from this token to the AWS IAM service to check whether the user specified in the token really is who he claims to be – by using the AWS IAM Authenticator service running on the EKS Control Plane.
I.e. the process is:
a client makes a request to the API server, passing an access token with the user’s identifier
the API server passes this token to another Kubernetes Control Plane service – the aws-iam-authenticator
the aws-iam-authenticator passes this identifier to the AWS IAM service to check whether this is a valid user and whether he has permissions to access the EKS cluster
AWS IAM performs an internal authentication check by using the secret key tied to the ACCESS_KEY passed in the token as the user’s identifier
AWS IAM performs an internal authorization check against the IAM policies tied to this user – a user without permissions for API calls to eks:* resources must be declined
the aws-iam-authenticator goes back to the Kubernetes cluster to check via the aws-auth ConfigMap (we will see it shortly in the AWS EKS aws-auth ConfigMap section) whether this user has permissions to access this cluster
the aws-iam-authenticator returns an approve/decline response to the API server
the API server, in its turn, will either perform the actions requested by the user – or will return the “You must be logged in to the server (Unauthorized)” message
The aws-iam-authenticator itself can be used on both sides: on the server’s side it is started as aws-iam-authenticator server, and on the client’s – as aws-iam-authenticator token -i:
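A rough sketch of both invocations – the cluster name and the config path below are placeholders, not values from this cluster:

```shell
# client side: generate a token for the given cluster name
aws-iam-authenticator token -i my-cluster

# server side (on the Control Plane): run in server mode with a config file
aws-iam-authenticator server --config /etc/aws-iam-authenticator/config.yaml
```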
But in my current case, the client (kubectl) was configured by issuing the aws eks update-kubeconfig command and uses the AWS CLI instead of the aws-iam-authenticator shown in the picture above (see more at AWS CLI vs aws-iam-authenticator).
Let’s use kubectl on a local workstation as a client to see the whole process.
So, kubectl first checks the ~/.kube/config file, from which it takes the EKS cluster’s API server URL:
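The users section generated by aws eks update-kubeconfig looks roughly like the following – the cluster ARN, name, and region here are placeholders, and depending on the AWS CLI version the apiVersion may be v1alpha1 or v1beta1:

```yaml
users:
- name: arn:aws:eks:us-east-2:111111111111:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - my-cluster
```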
It describes the command to be used to get a token for EKS authentication by using the AWS CLI – aws eks get-token.
It would be interesting to play a bit with the AWS API authentication; some quickly googled examples can be found in the Developer Guide for AWS S3, and if we have time – maybe we will try to do the whole thing just by using the curl utility, as under the hood the AWS CLI simply issues API requests to the AWS core, in the same way as kubectl does with the Kubernetes API endpoint.
Now – let’s use the command specified above in the command section and pass the arguments which are set in args:
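The returned token is the string k8s-aws-v1. followed by a base64url-encoded presigned STS GetCallerIdentity URL. A self-contained sketch of how such a token is assembled and decoded – the URL below is a shortened stand-in, not a real presigned request:

```shell
#!/bin/sh
# A stand-in for the presigned STS URL that aws eks get-token generates:
url='https://sts.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15'

# Assemble the token: prefix + base64url(URL), with '=' padding stripped
token="k8s-aws-v1.$(printf '%s' "$url" | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')"
echo "token: $token"

# Decode it back: strip the prefix, restore the padding, reverse base64url
payload=${token#k8s-aws-v1.}
pad=$(( (4 - ${#payload} % 4) % 4 ))
i=0
while [ "$i" -lt "$pad" ]; do payload="${payload}="; i=$((i+1)); done
printf '%s' "$payload" | tr '-_' '+/' | base64 -d
echo
```

The aws-iam-authenticator on the Control Plane does the reverse of the last step and then executes the decoded presigned URL to let AWS STS confirm the caller’s identity.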
Okay, so we have already checked how the authentication works on the client side – what about AWS EKS then?
AWS EKS aws-auth ConfigMap
Then, the “authentication magic” happens here: now the aws-iam-authenticator has to:
ask the AWS IAM service whether such a user exists and whether he has permissions, i.e. perform the authentication
after this, the aws-iam-authenticator has to pass this user to the Kubernetes API server, and then Kubernetes will perform the user’s authorization – whether this user has permissions to access our cluster and which commands he can use; here the aws-auth ConfigMap is used
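The ConfigMap itself is not shown above, so here is a sketch of its usual shape with the mapRoles section for the Worker Nodes role – the groups shown are the standard ones for EKS nodes:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::534***385:role/mobilebackend-dev-eks-0-wn-stack-NodeInstanceRole-15NNFZK6WW4IG
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```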
Here, in the arn:aws:iam::534***385:role/mobilebackend-dev-eks-0-wn-stack-NodeInstanceRole-15NNFZK6WW4IG line, we added the role ARN which is used as the Instance Role for the EC2 instances with our Worker Nodes and which will be checked for permissions.
Adding a new user to the cluster’s ConfigMap
Finally, let’s get back to the very beginning of this post: why did another IAM user get the “You must be logged in to the server (Unauthorized)” error?
Well, it’s kind of obvious from what we saw in the ConfigMap: because there is no such user there, and thus Kubernetes can’t check its permissions.
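To add the user, edit the ConfigMap with kubectl -n kube-system edit configmap aws-auth and append a mapUsers section; a sketch of the addition (the system:masters group grants full cluster access):

```yaml
data:
  mapUsers: |
    - userarn: arn:aws:iam::534***385:user/yaroslav
      username: yaroslav
      groups:
        - system:masters
```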
Done – our user arn:aws:iam::534***385:user/yaroslav is now able to perform any actions in the cluster.
“root” aka Cluster creator
The final thing I’d like to speak about is the Creator user: the IAM entity which was used to create a cluster becomes its super-admin.
But the problem is that you can’t see this anywhere – that’s why I was confused about why the arn:aws:iam::534***385:user/arseniy user had full access to the cluster, while arn:aws:iam::534***385:user/yaroslav hadn’t.
But still, you can find the Creator by checking CloudTrail for the CreateCluster API call to AWS:
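For example, with the AWS CLI – the region below is a placeholder, and note that CloudTrail event lookup only covers the last 90 days:

```shell
aws cloudtrail lookup-events \
  --region us-east-2 \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateCluster \
  --query 'Events[].{Time:EventTime,User:Username}' \
  --output table
```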
The Support’s answer about a way to see the “root” user was the following:
At this time, the IAM entity that creates the cluster becomes the first cluster administrator. This entity is passed to
the master nodes and is not visible from the aws-auth ConfigMap. This is similar to the root user for your AWS account
in that it has the system:masters permission.