Set up a Kubernetes cluster on AWS using Terraform
Introduction
In this blog post, we will securely set up a managed Kubernetes cluster (EKS) on AWS using Terraform and deploy a Hello World web application. Along the way, we will cover some basic concepts.
Kubernetes concepts
I will try to cover the basic concepts required for this tutorial in layman's terms.
Pods:
Pods are the smallest deployable unit of computing in Kubernetes. That statement probably sounds more sophisticated than it needs to be.
For now, think of a Pod as a Docker container that runs on a compute node. To be precise, a Pod is a wrapper around one or more containers, but for this tutorial you don't need to know why that wrapper exists.
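Once a cluster is up (we will build one below), you can see this relationship directly: listing Pods with the wide output shows which compute node each Pod was scheduled onto.

kubectl get pods --all-namespaces -o wide   # the NODE column shows where each Pod is running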
Control Plane components:
They are mainly responsible for scheduling the pods on different nodes.
Imagine them as routers, responsible for routing requests to the compute nodes. These components might run on a dedicated node separate from the compute nodes, or they can be co-located with the Node components on the compute nodes for high availability.
Node components:
They are mainly responsible for running the Pods. These components run on every compute node.
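On EKS, the control plane components are managed by AWS, while some of the node components (such as kube-proxy and the VPC CNI) run as ordinary Pods on every compute node. Once the cluster from this post is up, you can see them in the kube-system namespace:

kubectl get pods -n kube-system -o wide   # kube-proxy and aws-node (the VPC CNI) appear once per node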
Pre-requisites
Terraform: Follow this guide to install the Terraform CLI.
https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli.
Also, we use Terraform Cloud in this example to manage the Terraform state. You can read more about it here.
https://developer.hashicorp.com/terraform/cloud-docs/workspaces/state
AWS CLI: Follow this guide to install the AWS CLI.
kubectl: This is the Kubernetes client CLI. Follow the guide below.
https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
IAM permissions: Make sure the Terraform Cloud user has the IAM permissions required to create an EKS cluster and a VPC. A quick sanity check of all the tools is shown after this list.
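Before moving on, it helps to confirm that each tool is installed and authenticated. A quick check (terraform login opens a browser to create a Terraform Cloud API token):

terraform version
terraform login                 # authenticate the Terraform CLI with Terraform Cloud
aws --version
aws sts get-caller-identity     # shows which IAM identity the AWS CLI will use
kubectl version --client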
Getting started
Create a working directory
mkdir -p kubernetes-hello-world && cd kubernetes-hello-world

(We will run terraform init once the provider configuration is in place.)

Create a variables.tf file.
Here we are just declaring two variables, region and cluster_name. For this example, we will use us-east-1 as the region and test as the cluster name.
variable "region" { description = "AWS region" type = string default = "us-east-1" } variable "cluster_name" { type = string default = "test" description = "Cluster name" }Create a provider.tf file and add the code in the below code block.
We need to add the AWS provider for Terraform. For this example, I have set up a Terraform Cloud account and created an organization named test and a project named test within it.

# provider.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.29.0"
    }
  }

  cloud {
    organization = "test"
    workspaces {
      project = "test"
      tags    = ["test"]
    }
  }
}

provider "aws" {
  region = var.region
}

Create a vpc.tf file and add the code in the below code block.
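With the provider and the Terraform Cloud backend declared, the working directory can be initialized. This assumes you have already run terraform login (see the prerequisites); terraform init is safe to re-run, and we will run it again after adding the modules below.

terraform init   # downloads the AWS provider and connects the directory to Terraform Cloud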
To create a cluster, we first need a VPC, and its subnets must carry the tags EKS expects. In this example, we create three public and three private subnets across three availability zones.
# vpc.tf
data "aws_availability_zones" "available" {
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"

  name = "eks-vpc"
  cidr = "10.0.0.0/16"

  azs             = slice(data.aws_availability_zones.available.names, 0, 3)
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  # The tags below are important: EKS uses them to discover
  # which subnets load balancers can be placed in.
  public_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                    = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"           = 1
  }
}

Create a main.tf file and add the code in the below code block.
Here, we create an EKS cluster inside the VPC from the previous step, with one managed node group containing a single t2.medium node.
# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = var.cluster_name
  cluster_version = "1.29"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  cluster_endpoint_public_access           = true
  enable_cluster_creator_admin_permissions = true

  cluster_addons = {
    coredns = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
  }

  eks_managed_node_groups = {
    first = {
      desired_size   = 1
      max_size       = 1
      min_size       = 1
      instance_types = ["t2.medium"]
    }
  }
}
Create an outputs.tf file with the below contents.
output "cluster_endpoint" { description = "Endpoint for EKS control plane" value = module.eks.cluster_endpoint } output "cluster_name" { description = "Kubernetes Cluster Name" value = module.eks.cluster_id }
Plan and apply the changes.
Running terraform apply creates the VPC and EKS cluster described in the previous steps.
terraform plan
terraform apply
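Creating the cluster usually takes around 10-15 minutes. Once the apply finishes, the values declared in outputs.tf can be read back at any time:

terraform output                    # prints all outputs
terraform output cluster_endpoint   # prints a single output, here the API server endpoint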
If the above commands complete successfully, you should be able to see the cluster and the node group in it.
aws eks list-clusters --region us-east-1
aws eks list-nodegroups --cluster-name test --region us-east-1
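For a bit more detail than the list commands provide, for example to confirm the cluster has finished creating, you can describe it (the --query expression below is just one way to pull out the status field):

aws eks describe-cluster --name test --region us-east-1 --query "cluster.status" --output text   # should print ACTIVE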
To access the cluster, you need to set up the Kubernetes credentials on your system so that Kubectl can authenticate while communicating with the Kubernetes cluster.
To do this, you need to update or create the kubeconfig file.

aws eks update-kubeconfig --region us-east-1 --name test
If the above command worked, the commands below should run successfully.
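For example, you can confirm that kubectl is now pointing at the new cluster:

kubectl config current-context   # should show the ARN of the EKS cluster named "test"
kubectl cluster-info             # prints the control plane endpoint kubectl talks to

Listing the nodes should then show the single t2.medium node from the node group.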
kubectl get nodes

Create a pod file by the name hello-world-pod.yaml.
Deploying this Pod runs a container with a simple web application that listens on port 8081.
apiVersion: v1
kind: Pod
metadata:
  name: helloworld
spec:
  containers:
    - name: helloworld
      image: testcontainers/helloworld
      ports:
        - containerPort: 8081

Deploy the pod into the cluster.
kubectl apply -f hello-world-pod.yaml

Check if the pod is running.
kubectl get pods

You must do port forwarding to access the web application on your local system.
kubectl port-forward helloworld 8081:8081

Finally, navigate to http://localhost:8081.
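While the port-forward is running, you can also check the application from a second terminal and inspect the Pod if anything looks wrong:

curl http://localhost:8081        # should return the hello-world page
kubectl describe pod helloworld   # events, image pull status, restarts
kubectl logs helloworld           # application logs from the container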

