AWS: IAM users' keys rotation, EC2 IAM Roles and Jenkins

05/30/2019

Today I was checking our IAM users and “suddenly” recalled that it’s a good idea to rotate their credentials from time to time:

Well, that’s a good thing to do, but here is the question: it’s simple enough to set an expiration for keys in IAM, but what should we do with all the scripts in our Jenkins that use those IAM ACCESS/SECRET keys?

For example, our backend provisioning is done from Jenkins using the Ansible cloudformation module, which authenticates as an IAM user whose IAM policy has Allow rules for EC2, RDS, CloudFormation, etc.

Our data analytics team has its own jobs which run ETL tasks and store their results in AWS S3 buckets, using an IAM user with a policy that allows access to specific S3 buckets.

So what now – update the ACCESS/SECRET keys in all those jobs every time they are rotated in IAM?

As a solution, HashiCorp Vault could be used to grant access tokens to those scripts/jobs. But first, it takes some effort (and time) to set up Vault itself, its backups, etc., and second, it’s a fair piece of work to update the scripts/Jenkins jobs to use Vault instead of variables with ACCESS/SECRET keys.

A better solution can be to use IAM roles for EC2 instead, as any AWS SDK will look for them during authentication. For example, the boto3 documentation describes the credential lookup order (a quick check of this chain is sketched after the list):

  1. Passing credentials as parameters in the boto3.client() method
  2. Passing credentials as parameters when creating a Session object
  3. Environment variables
  4. Shared credential file (~/.aws/credentials)
  5. AWS config file (~/.aws/config)
  6. Assume Role provider
  7. Boto2 config file (/etc/boto.cfg and ~/.boto)
  8. Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
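
On an EC2 instance with a role attached, the end of this chain can be seen in action – a minimal sketch, assuming no shared credentials file is present on the instance:

# clear any explicit credentials so the CLI has to walk down the chain
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
# with no env vars and no ~/.aws/credentials, this succeeds only if
# credentials come from the instance metadata service (item 8)
aws sts get-caller-identity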

So – let’s try to:

  1. create an IAM role with the necessary policy
  2. attach it to an EC2
  3. use AWS CLI without configuration to test access

Create an IAM role

Create a role, choose the EC2 type:

Attach any policy; here is AmazonRoute53ReadOnlyAccess as an example:

Save your new role:
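
The same can be done with the AWS CLI – a sketch, using the ec2-example-role name that appears later in the instance metadata; the trust policy file is created on the spot:

# trust policy allowing EC2 to assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
aws iam create-role --role-name ec2-example-role --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name ec2-example-role --policy-arn arn:aws:iam::aws:policy/AmazonRoute53ReadOnlyAccess

# the AWS Console creates an instance profile implicitly; with the CLI it is done explicitly
aws iam create-instance-profile --instance-profile-name ec2-example-role
aws iam add-role-to-instance-profile --instance-profile-name ec2-example-role --role-name ec2-example-role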

Run EC2

Create an EC2 instance and set the IAM role created above:
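
From the CLI the same launch could look like this – a sketch, where the AMI ID is a placeholder and the key name is assumed from the .pem file used below:

aws ec2 run-instances \
    --image-id ami-0example \
    --instance-type t2.micro \
    --key-name setevoy-testing-eu-west-1 \
    --iam-instance-profile Name=ec2-example-role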

Testing

Log in to the instance:

[simterm]

$ ssh admin@<EC2 Public IP> -i setevoy-testing-eu-west-1.pem

[/simterm]

Check IAM in the instance meta-data:

[simterm]

root@ip-172-31-42-77:/home/admin# curl http://169.254.169.254/latest/meta-data/iam/info
{
  "Code" : "Success",
  "LastUpdated" : "2019-05-30T10:54:26Z",
  "InstanceProfileArn" : "arn:aws:iam::534***385:instance-profile/ec2-example-role"
...

[/simterm]
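
Besides the role info, the temporary keys issued for this role can be fetched from the same metadata service (the role name here is taken from the InstanceProfileArn above):

# list the roles available to this instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# fetch the temporary AccessKeyId/SecretAccessKey/Token for the role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ec2-example-role

These keys are short-lived and rotated by AWS itself, which is exactly what removes the manual rotation problem described above.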

Install AWS CLI:

[simterm]

root@ip-172-31-42-77:/home/admin# apt update && apt -y install awscli

[/simterm]

And get the Route53 hosted zones without configuring any ACCESS/SECRET keys:

[simterm]

root@ip-172-31-42-77:/home/admin# aws route53 list-hosted-zones --output text
HOSTEDZONES     33C2D264-***-***-3052BEA607A9    /hostedzone/Z30***LB6      example.com. 104
CONFIG  DME sites       False
...

[/simterm]

“It works!” (c).

Jenkins

Let’s go further and check whether this will work with Jenkins: first, Jenkins itself is running in a Docker container, and second, its jobs, such as Ansible tasks, are also started in Docker containers “inside” Jenkins.

Update the existing EC2 instance:

Attach the role created previously:
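
The same attachment can be done from the CLI (a sketch; the instance ID is a placeholder):

# attach the instance profile to an already running instance
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=ec2-example-role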

Create our own Docker image with the AWS CLI – a simple Dockerfile:

FROM python:3.7-stretch
RUN apt-get update -y
RUN pip install awscli

Build image:

[simterm]

root@jenkins-dev:/opt/jenkins# docker build -t setevoy/awscli .

[/simterm]

Check it:

[simterm]

root@jenkins-dev:/opt/jenkins# docker run -ti setevoy/awscli aws --version
aws-cli/1.16.168 Python/3.7.3 Linux/4.9.0-8-amd64 botocore/1.12.158

[/simterm]

Go to Jenkins and create a test job.

Here is the Jenkins pipeline used:

node {
    docker.image('setevoy/awscli:latest').inside('-v /var/run/docker.sock:/var/run/docker.sock') {
        stage('List zones') {
            sh "aws route53 list-hosted-zones --output text"
        }
    }
}

Run this job:

Great!

And let’s check whether an unauthorized API call will work.

Our policy allows AmazonRoute53ReadOnlyAccess only – let’s try to execute S3 list-buckets:

node {
    docker.image('setevoy/awscli:latest').inside('-v /var/run/docker.sock:/var/run/docker.sock') {
        stage('List buckets') {
            sh "aws s3api list-buckets"
        }
    }
}

Run:

+ aws s3api list-buckets
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied

AccessDenied – nice!

But there is one issue with using EC2 IAM roles as the authorization mechanism for Jenkins jobs: every process on that EC2 instance gets access to everything the role’s policies allow.

I.e. if somebody gets SSH access to this host, they will be able to reach any AWS resource the role permits.

On the one hand, this can be mitigated by using EC2 spot instances as Jenkins workers with dedicated, restricted policies attached – see the sketch below.
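
For example, a dedicated worker role for the analytics jobs mentioned above could be limited to a single bucket – a sketch, where the role and bucket names are hypothetical:

cat > jenkins-worker-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::analytics-etl-results/*"
  }]
}
EOF
aws iam put-role-policy --role-name jenkins-worker-role --policy-name etl-s3-access --policy-document file://jenkins-worker-policy.json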

On the other hand, if somebody gets access to your Jenkins, you’ll have big problems anyway, so do not forget to keep it upgraded and to restrict access to it via authentication and AWS Security Groups.


On April 9, 2019, security researcher Jaikey Sarraf alerted Matrix to existing vulnerabilities in Jenkins, which Matrix said it used for continuous integration. “The version of Jenkins we were using had a vulnerability (CVE-2019-1003000, CVE-2019-1003001, CVE-2019-1003002) which allowed an attacker to hijack credentials (forwarded ssh keys), giving access to our production infrastructure.”

https://www.infosecurity-magazine.com/news/matrix-compromised-through-known-1/