
Protecting your AWS from EKS with kube2iam

I really like Kubernetes; I’ve been following it almost since its inception 5 years ago and have used it successfully over the past 3+ years in several projects. It isn’t without challenges (esp. around managing state) but it is definitely getting better with each release. Moving to a new company, it is no wonder I introduced Kubernetes into our architecture from the get-go.

One thing that is different in this project, though, is that I am now running Kubernetes in the cloud (AWS for now), whereas the last two times the target was on-prem. Using Kubernetes in the cloud alleviates some of the pains we had to deal with when self-hosting it, like installation, control plane availability, etc., but it also introduces some new challenges – one such challenge is around security.

There are many ways to protect the inter-pod/service communications inside Kubernetes (maybe I’ll dedicate another post to that) – the problem here is different: it is controlling access from the different pods to other AWS assets (like RDS, S3, other EKS clusters, etc.).

One way to handle security in AWS is to associate an AWS role with an instance. That works well in the “classic” AWS setup, since different instances (or groups of instances) host different services. This is not the case when using Kubernetes: now we have multiple types of services (internal and external) running on the same node – if the node has the maximal permissions, we’re not only violating the “least privilege” principle for our own services, we’re probably also exposing our AWS resources to 3rd-party pods we’re running on the same nodes.

It seems that one possible solution is to set the node permissions to something minimal and to use AWS key pairs (access keys) for each service. This has two problems – one, the nodes need some privileges to be part of the Kubernetes cluster; two, it is a major headache to store and distribute the key pairs in a secure manner (e.g. making sure they don’t end up hard-coded in source code; making sure they are not propagated to pods as plain text; etc.).

Luckily, there’s a better approach – one that brings the IAM role-based approach from regular EC2 instances down to the pod level. There are a couple of tools that I know of that support this, namely kube2iam and kiam (you can read a nice comparison of the two here).

Though they work a little differently, they are both based on the same approach:

  • You set nodes with permission to assume (some) roles
  • You configure permissions for pods by using annotations on the Kubernetes deployment yaml
  • The tool proxies and intercepts calls to the AWS EC2 metadata API and provides temporary credentials by assuming the role in the annotation (as illustrated below)

and presto – your pod only has the privileges it was configured with
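
To make this a bit more concrete, here is a rough sketch of what a pod sees once kube2iam is intercepting the metadata endpoint. The role name and the abbreviated output are hypothetical, but the endpoint paths are the standard EC2 metadata API ones that kube2iam emulates:

# from inside an annotated pod – kube2iam answers instead of the real metadata service
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# -> my-pod-role            (the role taken from the pod's annotation)

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-pod-role
# -> JSON with temporary AccessKeyId / SecretAccessKey / Token / Expiration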

To get this magic going you need to do 3 things:

  • set up permissions (roles and policies)
  • install kube2iam
  • annotate your pod deployments

Setting up permissions

The biggest problem for me was setting the permissions right. To do that you:

1. Add a new policy with the sts:AssumeRole permission:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::your-account-id:role/prefix*"
        }
    ]
}

Note that you can (and should) limit the roles that services can assume by specifying a resource prefix.
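
If you prefer the CLI over the console, creating this policy could look something like the following (the policy name and the file name are just placeholders):

# save the JSON above as assume-role-policy.json, then:
aws iam create-policy \
  --policy-name kube2iam-assume-roles \
  --policy-document file://assume-role-policy.json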

2. Find which role is used by the worker nodes in your cluster and attach the policy from step 1 to it.
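
For example, if you are using a managed node group, you could look up the node role and attach the policy like this (cluster, node group, role and policy names below are placeholders):

# find the worker nodes' role (for a managed node group)
aws eks describe-nodegroup \
  --cluster-name my-cluster --nodegroup-name my-nodegroup \
  --query 'nodegroup.nodeRole' --output text

# attach the assume-role policy from step 1 to that role
aws iam attach-role-policy \
  --role-name my-worker-node-role \
  --policy-arn arn:aws:iam::your-account-id:policy/kube2iam-assume-roles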

3. For each role that you define and want pods to use, you also need to add a trust relationship that allows the worker nodes’ role to assume it:

{
   "Version": "2012-10-17",
   "Statement": [
     {
       "Effect": "Allow",
       "Principal": {
         "Service": "ec2.amazonaws.com"
       },
       "Action": "sts:AssumeRole"
     },
     {
       "Sid": "",
       "Effect": "Allow",
       "Principal": {
         "AWS": "arn:aws:iam::your-account-id:role/worker-node-role
       },
       "Action": "sts:AssumeRole"
     }
   ]
 }
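
Again, the CLI equivalent might look roughly like this (the role name, file name and the example S3 policy are only placeholders – attach whatever permissions your pod actually needs):

# save the trust relationship above as trust-policy.json, then:
aws iam create-role \
  --role-name my-pod-role \
  --assume-role-policy-document file://trust-policy.json

# grant the role the actual permissions the pod needs, e.g. read-only S3
aws iam attach-role-policy \
  --role-name my-pod-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess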

Installing kube2iam

The kube2iam site has instructions on installing it – but I found it was easier to install it using Helm:

helm install stable/kube2iam --name dev-kube2iam --namespace kube-system -f ./kube2iam.config.yaml

where the config file is:

aws:
  region: "your-aws-region"
 
extraArgs:
  auto-discover-base-arn: true
  auto-discover-default-role: true
 
# Won't work with Calico
host:
  iptables: true
  interface: eni+
 
rbac:
  create: true
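
Once the chart is installed, it is worth checking that the kube2iam DaemonSet has a pod running on every node. The label selector below is what the stable chart uses as far as I know, but it may differ between chart versions:

# kube2iam runs as a DaemonSet – expect one pod per node
kubectl get pods -n kube-system | grep kube2iam

# tail the pods to see the intercepted metadata calls
kubectl logs -n kube-system -l app=kube2iam --tail=20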

Setting roles for helm charts

Lastly, you need to annotate your deployments with iam.amazonaws.com/role:

apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
  labels:
    name: aws-cli
  annotations:
    iam.amazonaws.com/role: role-arn
spec:
  containers:
  - image: fstab/aws-cli
    command:
      - "/home/aws/aws/env/bin/aws"
      - "s3"
      - "ls"
      - "some-bucket"
    name: aws-cli

Note that the role-arn is only the suffix of the ARN, since kube2iam is configured with (or, as above, automatically discovers) the ARN prefix.
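
To try it out, you could save the pod above to a file (the file name and bucket are placeholders) and check that the listing succeeds using the pod’s role rather than the node’s:

kubectl apply -f aws-cli-pod.yaml

# if the annotation and role are set up correctly, the pod lists the bucket and exits
kubectl logs aws-cli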

This works well if you’re deploying your services with kubectl. We are using Helm, though, so to set the annotation with Helm you need to add it to the pod template, e.g.:

spec:
  replicas: {{ .Values.replicas }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
      annotations:
        iam.amazonaws.com/role: {{ .Values.metadata.role }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}{{ .Values.image.branch }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.imagePullPolicy }}
          envFrom:
          - configMapRef:
              name: {{ .Chart.Name }}-configmap
          ports: [ containerPort: 8000 ]
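
With that in place, the role a service assumes is just another chart value. For example (the chart path, release name and role name are placeholders, matching the .Values.metadata.role used above):

helm upgrade --install my-service ./charts/my-service \
  --set metadata.role=my-pod-role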

If you got all the way to here, it means you’ve found it interesting – that’s probably a good time to mention that if you are interested in working on similar (and more complex) problems, we are hiring for several positions (Devs, DevOps and QA automation). Our offices are in Tel-Aviv, but if you’re good we’re also open to remote work. Feel free to ping me for more details.

Published on System Code Geeks with permission by Arnon Rotem Gal Oz, partner at our SCG program. See the original article here: Protecting your AWS from EKS with kube2iam

Opinions expressed by System Code Geeks contributors are their own.
