Kubernetes Cluster with Amazon EKS

What is Amazon EKS?

Amazon EKS is a managed service for running Kubernetes on AWS without having to install, operate, or maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.


The following need to be installed before setting up the cluster:

  1. AWS CLI
  2. kubectl
  3. aws-iam-authenticator
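A quick way to confirm all three are on your PATH before proceeding (the loop below only reports each tool as found or missing; install commands vary by platform):

```shell
# Report whether each prerequisite tool is installed and on the PATH.
missing=""
for tool in aws kubectl aws-iam-authenticator; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    missing="$missing $tool"
  fi
done
```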

Create role for EKS

Set up a new IAM role with EKS permissions.

Open the IAM console, select Roles on the left and then click the Create Role button at the top of the page.

From the list of AWS services, select EKS and then Next: Permissions at the bottom of the page.


Click on Next.


Enter a name for the role and click Create.


Note down the Role ARN.
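The same role can also be created from the CLI. This is a sketch, not the console steps above: the role name eksServiceRole matches the example ARN used later in this guide, the trust policy lets the EKS service assume the role, and the attached managed policy is AmazonEKSClusterPolicy.

```shell
# Write a trust policy that allows the EKS service to assume the role.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Guarded: only runs when the AWS CLI is installed and has working credentials.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws iam create-role \
    --role-name eksServiceRole \
    --assume-role-policy-document file://trust-policy.json
  aws iam attach-role-policy \
    --role-name eksServiceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
fi
```

Note the Role ARN printed by create-role; it is the same value you would copy from the console.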

Creating a VPC  

Go to CloudFormation, and click the Create new stack button.

On the Select template page, enter the URL of the CloudFormation YAML:


Give the VPC a name and click Next.

On the next page, click Next again.


Click the Create button to create the VPC.

Note the values created (SecurityGroups, VpcId, and SubnetIds). You can see these under the Outputs tab of the CloudFormation stack.



You can also configure private access to the API server endpoint, so that it is not reachable from outside the VPC.

Ensure the enableDnsHostnames and enableDnsSupport fields are set to true, otherwise routing to the API server won't work.
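You can verify both attributes from the CLI. The VPC ID below is the example value used later in this guide; substitute your own from the stack Outputs:

```shell
# Example VpcId from this guide -- substitute the one from your stack Outputs.
VPC_ID="vpc-0d6a3265e074a929b"

for attr in enableDnsSupport enableDnsHostnames; do
  echo "checking $attr on $VPC_ID"
  # Only attempt the call when the AWS CLI is installed and has credentials.
  if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
    # Prints e.g. {"EnableDnsSupport": {"Value": true}}
    aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute "$attr"
  fi
done
```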

Create EKS cluster

Use the AWS CLI to create the Kubernetes cluster. We will use the following command:

aws eks --region <region> create-cluster --name <clusterName> --role-arn <EKS-role-ARN> --resources-vpc-config subnetIds=<subnet-id-1>,<subnet-id-2>,<subnet-id-3>,securityGroupIds=<security-group-id>


This is an example of what this command will look like:

aws eks --region us-east-1 create-cluster --name demo --role-arn arn:aws:iam::011173820421:role/eksServiceRole --resources-vpc-config subnetIds=subnet-03c954ee389d8f0fd,securityGroupIds=sg-0f45598b6f9aa110a

You should see the following output:

{
    "cluster": {
        "status": "CREATING",
        "name": "demo",
        "certificateAuthority": {},
        "roleArn": "arn:aws:iam::011173820421:role/eksServiceRole",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-03c954ee389d8f0fd"
            ],
            "vpcId": "vpc-0d6a3265e074a929b",
            "securityGroupIds": [
                "sg-0f45598b6f9aa110a"
            ]
        },
        "version": "1.11",
        "arn": "arn:aws:eks:us-east-1:011173820421:cluster/demo",
        "platformVersion": "eks.1",
        "createdAt": 1550401288.382
    }
}

It takes a few minutes to create. Poll the status with:

aws eks --region us-east-1 describe-cluster --name demo --query cluster.status




Open the Clusters page in the EKS console:


Wait till the status changes to "ACTIVE", then update the kubeconfig file so kubectl can communicate with the cluster.
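Rather than refreshing the console, the AWS CLI can block in the terminal until the cluster reaches ACTIVE (region and cluster name below are the example values used in this guide):

```shell
REGION="us-east-1"
CLUSTER_NAME="demo"
echo "waiting for cluster $CLUSTER_NAME in $REGION to become ACTIVE"
# `aws eks wait cluster-active` polls describe-cluster until the status is ACTIVE.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws eks wait cluster-active --region "$REGION" --name "$CLUSTER_NAME"
fi
```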

Use the AWS CLI update-kubeconfig command (substituting your own region and cluster name):

aws eks --region us-east-1 update-kubeconfig --name demo


Added new context arn:aws:eks:us-east-1:011173820421:cluster/demo to /Users/Daniel/.kube/config

Test it with the kubectl get svc command:

kubectl get svc

NAME         TYPE        EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   <none>        443/TCP   2m

Click the cluster in the EKS Console to review configurations:


Launch worker nodes

Use a CloudFormation template to launch them.

Go to CloudFormation, click Create Stack, and use the template URL:


Click Next, enter the following details:

  1. ClusterName – the name of your cluster.
  2. ClusterControlPlaneSecurityGroup – the security group used when creating the cluster.
  3. NodeGroupName – a name for your node group.
  4. NodeAutoScalingGroupMinSize – keep the default.
  5. NodeAutoScalingGroupDesiredCapacity – keep the default.
  6. NodeAutoScalingGroupMaxSize – keep the default.
  7. NodeInstanceType – keep the default.
  8. NodeImageId – the worker node AMI ID for your region.
  9. KeyName – an Amazon EC2 SSH key pair.
  10. BootstrapArguments – leave blank.
  11. VpcId – the ID of the VPC we created.
  12. Subnets – select the subnets you created.
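The same stack can also be launched from the CLI. This is a sketch: the stack name and node group name are illustrative, TEMPLATE_URL and KEY_NAME are placeholders for the worker-node template URL referenced above and your EC2 key pair, and the VpcId is the example value from this guide.

```shell
# Placeholders -- fill in before running.
TEMPLATE_URL="<worker-node-template-url>"
KEY_NAME="<your-ec2-key-pair>"

# Guarded: runs only once the placeholders are replaced and the CLI is configured.
# CAPABILITY_IAM is required because the template creates the node instance role.
if command -v aws >/dev/null 2>&1 && [ "$TEMPLATE_URL" != "<worker-node-template-url>" ]; then
  aws cloudformation create-stack \
    --stack-name demo-worker-nodes \
    --template-url "$TEMPLATE_URL" \
    --capabilities CAPABILITY_IAM \
    --parameters \
      ParameterKey=ClusterName,ParameterValue=demo \
      ParameterKey=NodeGroupName,ParameterValue=demo-node-group \
      ParameterKey=KeyName,ParameterValue="$KEY_NAME" \
      ParameterKey=VpcId,ParameterValue=vpc-0d6a3265e074a929b
fi
```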

On the Review page, select the checkbox at the bottom and click Create.

The worker nodes will now be created.

Open the Outputs tab and note the NodeInstanceRole value:


Download the AWS authenticator configuration map:

curl -O

Edit the file, setting rolearn to the NodeInstanceRole created by the stack:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Save and apply the configuration:

kubectl apply -f aws-auth-cm.yaml


configmap/aws-auth created
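To double-check that the ConfigMap landed in the namespace the node bootstrap process reads from (kubectl must be pointing at the new cluster):

```shell
CONFIGMAP="aws-auth"
NAMESPACE="kube-system"
echo "inspecting ConfigMap $CONFIGMAP in namespace $NAMESPACE"
# Shows the mapRoles entry you just applied.
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n "$NAMESPACE" get configmap "$CONFIGMAP" -o yaml || echo "could not reach the cluster"
fi
```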

Check the status of your worker nodes:

kubectl get nodes --watch

NAME                              STATUS   ROLES    AGE         VERSION
ip-192-168-245-194.ec2.internal   Ready    <none>   <invalid>   v1.11.5
ip-192-168-99-231.ec2.internal    Ready    <none>   <invalid>   v1.11.5
ip-192-168-140-20.ec2.internal    Ready    <none>   <invalid>   v1.11.5


The cluster is now set up and ready.

If you need help configuring this cluster, feel free to email us or call us at 810 214 2572.