Introduction
In this blog post, we will walk through a basic Kubernetes user auth setup that integrates with AWS IAM. It ties a user's Kubernetes access to their IAM user account. The AWS IAM Authenticator for Kubernetes runs on the Amazon EKS control plane by default.
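Concretely, kubectl presents a token that the AWS CLI generates from the caller's IAM credentials, so the identity the API server sees is the caller's IAM identity. As a quick sanity check, something along these lines shows which identity that is (cluster name and region are placeholders for your own setup):
# Which IAM identity do my credentials resolve to?
aws sts get-caller-identity
# Write a kubeconfig entry that fetches tokens via the IAM authenticator
aws eks update-kubeconfig --name <your-cluster-name> --region <your-region>
# This is the token kubectl presents to the API server
aws eks get-token --cluster-name <your-cluster-name>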
Prerequisites
Kubernetes on AWS - From Zero To Production
Kubernetes on AWS - EKS Setup with Terraform
git clone git@github.com:canida-software/k8s-on-aws.git
Add User
The authenticator retrieves its configuration from the aws-auth configmap in the kube-system namespace. The cluster creator's AWS IAM user has access to the cluster by default without any entry in the configmap. However, we need to grant explicit access to additional users by linking their IAM users to groups in the cluster (AWS Docs). First, we extract the existing aws-auth configmap from the cluster and store it in a YAML file, because it contains account-specific information that we want to keep (namely, an ARN to a role that is attached to all your nodes).
cd k8s-on-aws/authorization
kubectl get configmap aws-auth -n kube-system -o yaml > aws-auth.yaml
Afterwards, remove anything that's tied to the instantiation of the configmap: creationTimestamp, resourceVersion, selfLink, and uid.
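If you have yq (v4) installed, you can strip these fields in one go instead of editing the file by hand; this is just a convenience, the manual edit works equally well:
# Remove the instantiation-specific metadata fields in place
yq -i 'del(.metadata.creationTimestamp) | del(.metadata.resourceVersion) | del(.metadata.selfLink) | del(.metadata.uid)' aws-auth.yaml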
Then, we can add a new user by copying their IAM user ARN and tying it to a group. system:masters is a group that is hardcoded into the Kubernetes API server source code as having unrestricted rights to the Kubernetes API server. Therefore, don't just blindly assign it to users. The full aws-auth.yaml should look similar to this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::054000737513:role/general-eks-node-group-20220715203821624700000002
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::054000737513:user/nico
      username: nico
      groups:
      - system:masters
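If you don't want to hand out system:masters, a common alternative is to map the user to a custom group in mapUsers (the group name developers below is just an example, not something from the repository) and bind that group to Kubernetes RBAC yourself. A minimal sketch granting read-only access via the built-in view ClusterRole could look like this:
# Bind the custom group from aws-auth (here assumed to be "developers")
# to the built-in, read-only "view" ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developers-view
subjects:
- kind: Group
  name: developers                    # must match the group used in mapUsers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                          # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io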
If you apply the aws-auth configmap from the template repository and overwrite the mapRoles section by mistake, your nodes won't be able to talk to the EKS control plane anymore. You need to manually find the general-eks-node-group.. role in IAM, fix the error, and apply the correct configmap by hand.
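If you do end up in that situation, a rough recovery sketch (assuming a configured AWS CLI and that your node role name still contains eks-node-group) is to look up the role ARN, put it back into mapRoles, and re-apply the file:
# List candidate node group roles and their ARNs
aws iam list-roles --query "Roles[?contains(RoleName, 'eks-node-group')].Arn" --output text
# After restoring the rolearn in aws-auth.yaml, re-apply it
kubectl apply -f aws-auth.yaml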
Manage Users via git
We could just apply the above configmap using kubectl apply -f aws-auth.yaml. However, we want to manage it with Argo CD in a GitOps fashion. First, push your modified aws-auth.yaml to your repository. Then, head to your Argo CD web dashboard and create a new application. Point the application's repository source path to authorization. After creating the application, it should immediately get synced and the status should flip to healthy & synced.
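If you prefer creating the application declaratively rather than clicking through the dashboard, an Argo CD Application manifest along these lines should be roughly equivalent (the repoURL, project, and application name are placeholders for your own setup):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: authorization
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:<your-org>/<your-repo>.git   # placeholder
    targetRevision: HEAD
    path: authorization
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
  syncPolicy:
    automated:
      selfHeal: true      # revert manual changes made outside git
      prune: false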
Notice that I selected self-heal while creating the application. When someone updates the aws-auth configmap outside of the git repository, self-heal overwrites those changes and forces the aws-auth configmap back into the state that's stored in git. We want to enforce that git is the only way to (permanently) add new users and that no one can bypass it.
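You can sanity-check this behaviour by making a throwaway change to the live configmap and watching Argo CD revert it, for example (the patch below just blanks out mapUsers temporarily):
# Make a manual change outside of git ...
kubectl patch configmap aws-auth -n kube-system --type merge -p '{"data":{"mapUsers":"[]\n"}}'
# ... then check that Argo CD self-heals it back to the state stored in git
kubectl get configmap aws-auth -n kube-system -o yaml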