Introduction
In this blog post, we will walk through a basic Terraform setup to run Kubernetes on AWS. It builds on top of the managed Kubernetes offering Amazon EKS, so we do not have to bother managing Kubernetes' control plane. The Kubernetes nodes are created in our VPC's private subnets and access the internet via NAT Gateways in the public subnets, as illustrated below.
Prerequisites
- AWS CLI V2 with admin credentials.
- A VPC with public and private subnets. See the blog post below for a setup guide; a minimal Terraform sketch also follows after the prerequisites.
- terraform
- kubectl
Additionally, clone the repository that accompanies this post:
git clone git@github.com:canida-software/k8s-on-aws.git
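The VPC itself is out of scope for this post, but to make the target layout concrete, here is a minimal sketch of a matching VPC. It assumes the community terraform-aws-modules/vpc/aws module and example CIDR ranges; the setup from the linked post may look different.
# vpc.tf (sketch only, not part of this post's repo)
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = "k8s-main"
  cidr = "10.0.0.0/16"

  azs             = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  # Nodes in the private subnets reach the internet through NAT gateways in the public subnets.
  enable_nat_gateway = true
  single_nat_gateway = false
}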
If you plan to use Kubernetes and Terraform on a regular basis, the following aliases might also come in handy:
# in .bashrc or .zshrc
alias k="kubectl"
alias kn="kubectl config set-context --current --namespace"
alias tf="terraform"
Terraform Remote State
First, we will set up a Terraform backend. Backends determine where Terraform stores its state. Terraform uses this persisted state data to keep track of the resources it manages. We won't use the local .tfstate file because it contains sensitive data, e.g. database secrets, and should not be committed to version control. Instead, we will use the s3 backend to store the state on S3. Create a bucket such as canida-terraform and enable bucket versioning to be able to restore old state.
aws s3api create-bucket --bucket canida-terraform --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1
aws s3api put-bucket-versioning --bucket canida-terraform --versioning-configuration Status=Enabled
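Since the state can contain secrets, it may also be worth blocking public access to the bucket and enabling default encryption. This is optional hardening and not required by the rest of this guide:
aws s3api put-public-access-block --bucket canida-terraform --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
aws s3api put-bucket-encryption --bucket canida-terraform --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'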
Then, adapt the backend configuration in backend.tf and substitute your bucket name. You can also freely change the file name for the state or the region.
terraform {
backend "s3" {
bucket = "canida-terraform"
key = "k8s-main-eks.tfstate"
region = "eu-central-1"
}
}
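Optionally, the s3 backend also supports state locking via a DynamoDB table, which prevents concurrent terraform apply runs from corrupting the state. This is not part of the repo's configuration, but adding it would look roughly like this (the table name terraform-locks is just an example):
# create a lock table with the LockID hash key required by the s3 backend
aws dynamodb create-table --table-name terraform-locks --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --billing-mode PAY_PER_REQUEST
Then add dynamodb_table = "terraform-locks" to the backend "s3" block above.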
Cluster Setup
The next step is to set up the cluster. Please rename canida.tfvars and adapt it to your needs.
# canida.tfvars
aws_region = "eu-central-1"
cluster_name = "k8s-main"
kubernetes_version = "1.22"
vpc_id = "vpc-XXX"
private_subnets = ["subnet-XXX", "subnet-XXY", "subnet-XXZ"]
default_tags = {
  owner   = "canida"
  project = "k8s-main"
}
eks_managed_node_groups = {
  general = {
    min_size = 3
    max_size = 10
    # Due to the widespread use of autoscaling, this property is ignored after the initial deployment.
    desired_size   = 3
    instance_types = ["t3a.medium"]
    capacity_type  = "SPOT"
  }
}
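For context, the repo's cluster definition wires these variables into an EKS module. A minimal sketch of what that might look like, assuming the community terraform-aws-modules/eks/aws module (the repo's actual main.tf may differ):
# main.tf (sketch, assuming the community EKS module)
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  cluster_name    = var.cluster_name
  cluster_version = var.kubernetes_version

  vpc_id     = var.vpc_id
  subnet_ids = var.private_subnets

  eks_managed_node_groups = var.eks_managed_node_groups

  tags = var.default_tags
}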
At a minimum, you have to change the variables vpc_id and private_subnets.
git clone git@github.com:canida-software/k8s-on-aws.git
cd ./k8s-on-aws/eks
# install Terraform modules
terraform init
# setup the cluster and configure it using the tfvars file
terraform apply -var-file canida.tfvars
# point kubectl at the new cluster using the Terraform outputs
aws eks update-kubeconfig --region $(terraform output -raw aws_region) --name $(terraform output -raw cluster_id)
# check the selected Kubernetes context
kubectl config get-contexts
# verify cluster access
kubectl get nodes
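As an additional sanity check (optional, not part of the repo), you can schedule a throwaway pod on the new node group and remove it again:
kubectl run test-nginx --image=nginx --restart=Never
kubectl get pod test-nginx
kubectl delete pod test-nginx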
Teardown
Do you want to fully tear down your cluster? Just remember to provide your variables file when destroying the cluster as well.
tf destroy -var-file canida.tfvars
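One caveat worth noting, as an assumption about how you might use the cluster rather than something the repo enforces: if you later create Kubernetes Services of type LoadBalancer or other resources that provision AWS infrastructure outside of Terraform, delete them before running the destroy, otherwise orphaned load balancers can block VPC or subnet cleanup. The names my-service and my-namespace below are placeholders:
# list Services that may have provisioned AWS load balancers
kubectl get svc --all-namespaces
# delete any LoadBalancer Services before destroying the cluster, e.g.:
kubectl delete svc my-service -n my-namespace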