Kubernetes on AWS - Essential Applications to Integrate with AWS

Introduction

In this blog post, we will tackle secrets management, DNS provisioning, and load balancing. These topics are combined in a single post because I group the corresponding tools together and install them in a single step after setting up the cluster. We will install
  • external-dns to automatically provision DNS entries on Route53,
  • the AWS Load Balancer Controller to create an Application Load Balancer in AWS and route incoming traffic to our services,
  • and the external-secrets operator to create Kubernetes secrets from AWS Secrets Manager.
In the app of apps pattern, we create an Application from the web dashboard, e.g. “applications” in the illustration below. This Application points to a path in a git repository which itself contains multiple Application yaml files pointing to other paths in a git repository, e.g. “guestbook”, “helm-dependency”, and the other two apps.
[Illustration: the “applications” app referencing “guestbook”, “helm-dependency”, and two more Applications in the Argo CD dashboard]

Prerequisites

  • Kubernetes on AWS - From Zero To Production
  • Kubernetes on AWS - Continuous Deployment with Argo CD
  • Kubernetes on AWS - IAM Roles for Service Accounts via Terraform
  • git clone git@github.com:canida-software/k8s-on-aws.git
  • Route53 hosted zone for your domain to manage its DNS

Adapt Configuration

Before we deploy any tools, please go through all the files in k8s-on-aws/applications/tools, explore them, and adapt them to your setup if necessary:
argocd-apps/aws-load-balancer-controller.yaml → Modify the service account role ARN to use the role that you created via Terraform in the previous blog post.
argocd-apps/external-dns.yaml → Modify the service account role ARN to match the role that you created previously.
argocd-apps/external-secrets.yaml → Modify spec.source.repoURL to match your git repository.
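For orientation: in the first two files, the role ARN is typically passed as a Helm parameter that ends up as a service account annotation. A hypothetical excerpt; the exact structure in the repository may differ:
# Hypothetical excerpt from argocd-apps/aws-load-balancer-controller.yaml;
# the exact structure in the repository may differ.
helm:
  parameters:
  - name: serviceAccount.annotations.eks\.amazonaws\.com/role-arn
    value: arn:aws:iam::<your-account-id>:role/k8s-main/AWSLoadBalancerController  # your role ARN from Terraform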

TLS Certificate

If you have not already done so for your domain, please create a TLS certificate in the AWS Certificate Manager. I created a wildcard certificate for *.canida.io. The certificate will automatically be picked up and used by the AWS Load Balancer Controller to enable HTTPS access to your services.
💡
A wildcard certificate for *.canida.io does not cover subsubdomain.subdomain.canida.io. That would require a separate wildcard certificate for *.subdomain.canida.io.
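If you prefer the CLI over the console, a wildcard certificate can be requested as follows; with DNS validation, ACM asks you to create a validation CNAME record in your hosted zone before it issues the certificate:
aws acm request-certificate \
  --domain-name "*.canida.io" \
  --validation-method DNS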

Setup

We will deploy all the tools at once using the app of apps pattern. Open the web dashboard and create an application that deploys applications/tools/argocd-apps. The argocd-apps folder contains an Argo CD Application for each tool that we want to deploy.
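A minimal sketch of such a root Application, assuming the repository from the prerequisites as the source (adapt repoURL and targetRevision to your fork); alternatively, fill in the same values in the “New App” form of the web dashboard:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tools
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/canida-software/k8s-on-aws.git  # adapt to your fork
    targetRevision: main
    path: applications/tools/argocd-apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated: {}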
After you have deployed the tools, the corresponding applications should show up in your dashboard.
 
Additionally, we can wait a few minutes and visit cd.canida.io to verify that the Argo CD dashboard shows up. If it doesn’t, check whether the DNS entry was created and then check the load balancer controller logs.

External DNS

Documentation: https://github.com/kubernetes-sigs/external-dns
The first tool that we installed was external-dns. It integrates with DNS providers such as Route53 and creates DNS entries from our ingress resources. Check out the logs of the external-dns pod. It should create a DNS entry cd.canida.io for Argo CD based on the ingress that we deployed alongside it. The corresponding log lines look as follows:
msg="All records are already up to date"
msg="Applying provider record filter for domains: [XXX.de. .XXX.de. canida.io. .canida.io.]"
msg="Desired change: CREATE cd.canida.io A [Id: /hostedzone/XXX]"
msg="Desired change: CREATE cd.canida.io TXT [Id: /hostedzone/XXX]"
msg="Desired change: CREATE cname-cd.canida.io TXT [Id: /hostedzone/XXX]"
msg="3 record(s) in zone canida.io. [Id: /hostedzone/XXX] were successfully updated"
 
You can check out applications/tools/argocd-apps/external-dns.yaml to see the Helm parameters that we provided to external-dns. Note that this application does not refer to another Kustomize directory; instead, it refers to a Helm chart.
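For illustration, a chart-based Application looks roughly as follows; the chart repository, version, and parameter values here are assumptions rather than the exact contents of external-dns.yaml:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-dns
  namespace: argocd
spec:
  project: default
  source:
    # a Helm repository instead of a git repository
    repoURL: https://kubernetes-sigs.github.io/external-dns/
    chart: external-dns
    targetRevision: 1.13.0  # example chart version
    helm:
      parameters:
      - name: provider
        value: aws
      - name: domainFilters[0]
        value: canida.io
  destination:
    server: https://kubernetes.default.svc
    namespace: external-dns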

AWS Load Balancer Controller

The AWS Load Balancer Controller watches the ingress resources in your cluster and spawns corresponding load balancers in AWS to route traffic into your cluster. It is configured via annotations. Read more about the annotations here: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/.
The ingress below configures an internet-facing load balancer that routes traffic from ports 80/443 to the argocd-server at port 443. We configured alb.ingress.kubernetes.io/backend-protocol: HTTPS because the argocd-server service only accepts TLS-encrypted communication on port 443. The annotation alb.ingress.kubernetes.io/group.name: main specifies a group name for our ALB. Whenever we specify an Ingress resource anywhere else and use the same group.name, it will reuse the same ALB. That’s useful because an ALB costs
  • $0.027 per Application Load Balancer-hour (or partial hour)
  • $0.008 per Load Balancer Capacity Unit-hour (or partial hour)
and the default behavior is to create a separate ALB per ingress.
Internet-facing load balancers are created in public subnets, while internal load balancers are created in private subnets. For the subnet auto-discovery to work properly, the subnets must be tagged as described in the Subnet Discovery Docs. If you followed our VPC setup guide, the tags were added automatically; otherwise, you need to update the subnet tags manually.
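For reference, these are the role tags the controller looks for during subnet discovery (tag keys per the docs linked above; the value can be 1 or left empty):
# tag on public subnets (for internet-facing load balancers)
kubernetes.io/role/elb: "1"
# tag on private subnets (for internal load balancers)
kubernetes.io/role/internal-elb: "1"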
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/group.name: main
    alb.ingress.kubernetes.io/healthcheck-path: /
    # o.w. argocd container tries to redirect to https leading to a redirect loop
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
  labels:
    app: argocd
spec:
  rules:
  - host: cd.canida.io
    http:
      paths:
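      # first rule: send plain HTTP traffic to the ssl-redirect action
      # defined in the annotations above (HTTP 301 redirect to HTTPS)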
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ssl-redirect
            port:
              name: use-annotation
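      # second rule: forward traffic to the argocd-server service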
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              number: 443

External-Secrets Operator

The external-secrets operator integrates with several secret stores. We will configure it using the configuration files in applications/external-secrets-config to fetch secrets from AWS Secrets Manager. The operator then creates corresponding Kubernetes secrets that we can use in our pods.
We will deploy a ClusterSecretStore resource, which configures the external-secrets operator. It looks as follows and references the aws-secretsmanager service account created in the same Kustomize directory.
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secretsmanager
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-central-1
      auth:
        jwt:
          serviceAccountRef:
            name: aws-secretsmanager
            namespace: external-secrets
 
The service account is configured with access to our secrets via the corresponding IAM role:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-secretsmanager
  namespace: external-secrets
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::054000737513:role/k8s-main/ExternalSecrets
 
After the above is configured, we can create an ExternalSecret:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: sample-secret
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: aws-secretsmanager
    kind: ClusterSecretStore
  target:
    creationPolicy: Owner
  data:
  - secretKey: MY_SECRET_PASSWORD
    remoteRef:
      key: k8s-main/sample-secret
      property: MY_SECRETSMANAGER_PASSWORD
The resulting secret will be named sample-secret and contain a key MY_SECRET_PASSWORD holding the password stored under the key MY_SECRETSMANAGER_PASSWORD in the AWS Secrets Manager secret k8s-main/sample-secret.
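Once the operator has created the secret, it can be consumed like any other Kubernetes secret. A minimal sketch with a hypothetical pod that exposes the password as an environment variable:
apiVersion: v1
kind: Pod
metadata:
  name: sample-app  # hypothetical pod for illustration
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo $MY_SECRET_PASSWORD && sleep 3600"]
    env:
    - name: MY_SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: sample-secret      # the secret created by the operator
          key: MY_SECRET_PASSWORD  # the secretKey from the ExternalSecret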
 
The setup described above will be deployed as a separate application from applications/external-secrets-config.
 
The application will create the ClusterSecretStore and the corresponding service account to access AWS Secrets Manager. Afterwards, creating new secrets is really smooth:
  1. Create a secret in AWS Secrets Manager (see the CLI sketch after this list).
  2. Create an ExternalSecret instead of a Secret.
  3. The external-secrets operator creates and updates the Secret.
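For step 1, a secret matching the sample above can be created via the AWS CLI, e.g. (the password value is a placeholder):
aws secretsmanager create-secret \
  --name k8s-main/sample-secret \
  --secret-string '{"MY_SECRETSMANAGER_PASSWORD": "changeme"}'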

Written by

Nico Duldhardt

https://www.linkedin.com/in/nico-duldhardt/