You will likely encounter scenarios in your daily work on AWS where you have to authenticate with different EKS clusters in different AWS accounts. Today, I will share how I structure my kubeconfig to authenticate with EKS clusters across multiple AWS accounts.

Before we dive into it, the AWS documentation has an article describing the steps to authenticate with your cluster at https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html.

My preference in this scenario is to set up my own kubeconfig file, as it gives me more flexibility to configure it to my needs.

Installing & Configuring AWS CLI

First, we will configure the AWS CLI. Follow the instructions at https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html to install the latest version of the CLI.

Next, create two files in your home directory:

~# mkdir ~/.aws
~# touch ~/.aws/config
~# touch ~/.aws/credentials

Prepare your AWS IAM access key ID and secret access key for each account, and put them into ~/.aws/credentials:

[prod]
aws_access_key_id = <aws_access_key_id>
aws_secret_access_key = <aws_secret_access_key>

[dev]
aws_access_key_id = <aws_access_key_id>
aws_secret_access_key = <aws_secret_access_key>

Then set a default region for each profile in ~/.aws/config for easier usage. Note that the config file prefixes named profiles with the word profile:

[profile prod]
region=ap-southeast-1

[profile dev]
region=ap-southeast-1

These two files let you use the AWS CLI against either account; set the following environment variable to pick between the two profiles:

export AWS_PROFILE=<prod/dev>
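
To confirm each profile works, you can ask AWS who you are under it. A minimal check, assuming the prod and dev profiles configured above:

~# AWS_PROFILE=prod aws sts get-caller-identity
~# AWS_PROFILE=dev aws sts get-caller-identity

Each call should print the account ID and IAM identity of the respective account.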

Installing & Configuring kubectl

Next, we have to install kubectl to communicate with the Kubernetes API. First, we use the AWS CLI to find our cluster's Kubernetes version.

~# aws eks describe-cluster --name <cluster name> --query "cluster.version" --output text
1.23

We can then download a kubectl whose minor version is within one (+/-1) of our cluster's version. Since our cluster runs 1.23, the newest kubectl we can use is 1.24.

curl -LO https://dl.k8s.io/release/v1.24.0/bin/linux/amd64/kubectl

Reference from the Kubernetes documentation (https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/): You must use a kubectl version that is within one minor version difference of your cluster. For example, a v1.26 client can communicate with v1.25, v1.26, and v1.27 control planes. Using the latest compatible version of kubectl helps avoid unforeseen issues.
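
Downloading only fetches the binary; it still has to be made executable and placed on your PATH. A minimal sketch, assuming a Linux amd64 host with sudo access:

~# chmod +x kubectl
~# sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
~# kubectl version --client

The last command should report the v1.24.0 client we just installed.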

Next, we create the kubeconfig file.

~# mkdir ~/.kube
~# touch ~/.kube/config

Next, we gather the data needed to populate our kubeconfig file: each cluster's certificate authority data and API endpoint.

~# aws eks describe-cluster --name <cluster name> --query "cluster.certificateAuthority.data" --output text
~# aws eks describe-cluster --name <cluster name> --query "cluster.endpoint" --output text

Fill in ~/.kube/config with the following:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificate-authority-data>
    server: <cluster endpoint url>
  name: <any naming for this cluster>
- cluster:
    certificate-authority-data: <certificate-authority-data for 2nd cluster>
    server: <cluster endpoint url for 2nd cluster>
  name: <any naming for this 2nd cluster>
contexts:
- context:
    cluster: <any naming for this cluster>
    user: <user for cluster 1>
  name: <any naming for this cluster>
- context:
    cluster: <any naming for this 2nd cluster>
    user: <user for cluster 2>
  name: <any naming for this 2nd cluster>
current-context: <any context you want to use; we will switch between them later>
kind: Config
preferences: {}
users:
- name: <user for cluster 1>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - <your cluster region>
      - eks
      - get-token
      - --cluster-name
      - <your actual cluster name>
      command: aws
      env:
      - name: AWS_PROFILE
        value: <profile configured earlier in the AWS CLI, e.g. prod>
      interactiveMode: IfAvailable
      provideClusterInfo: false
- name: <user for cluster 2>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - <your cluster region>
      - eks
      - get-token
      - --cluster-name
      - <your actual 2nd cluster name>
      command: aws
      env:
      - name: AWS_PROFILE
        value: <profile configured earlier in the AWS CLI, e.g. dev>
      interactiveMode: IfAvailable
      provideClusterInfo: false

Notice from the kubeconfig file that each user entry sets the AWS_PROFILE environment variable for its exec command, so whichever context we switch to, kubectl automatically authenticates against the matching AWS account.
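
You can also exercise the exec credential command by hand to check that authentication works before involving kubectl. A quick sketch, assuming the prod profile and a cluster named my-prod-cluster (hypothetical names):

~# AWS_PROFILE=prod aws --region ap-southeast-1 eks get-token --cluster-name my-prod-cluster

This should print an ExecCredential JSON object containing a bearer token, which is exactly what kubectl consumes.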

We can view all the available contexts with the following command:

~# kubectl config get-contexts

The above command will list all the available contexts and let you know which context you're currently on.

This setup then allows us to switch contexts easily with the following command:

~# kubectl config use-context <context name>
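
For example, assuming you named your contexts prod and dev (hypothetical names), a full switch-and-verify looks like this:

~# kubectl config use-context prod
~# kubectl get nodes

If the node list comes back, kubectl authenticated against the prod account without you touching any AWS environment variables.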

That's all! Try it today!