
Simplifying AWS EKS Deployment with Cluster.dev

13 Feb 2024
Author: Alexander Obregon, Software Engineer, fervent coder & writer


When it comes to container orchestration, setting up a managed Kubernetes service like Amazon’s EKS (Elastic Kubernetes Service) requires a solid understanding of AWS services and Kubernetes configuration. For many developers, particularly those without extensive Ops experience, this complexity presents a steep learning curve. Cluster.dev is an open-source cloud infrastructure management tool that greatly simplifies this process, allowing users to deploy and manage Kubernetes clusters with minimal overhead. Cluster.dev is designed to make the creation of cloud infrastructure simple, using a declarative approach. It features a strong templating system that automates the provisioning of resources, including networks, IAM roles, and Kubernetes clusters, taking on the majority of these complexities. This lets developers focus on the core functionality of their applications rather than on time-consuming cloud service configuration.

The tool integrates smoothly with the existing ecosystem by leveraging well-established technologies such as Terraform, Helm, and kubectl internally. This means that while Cluster.dev abstracts much of the complexity, it retains the flexibility and power of these technologies. The goal of this guide is to walk you through using Cluster.dev to set up an EKS cluster with the provided AWS-EKS stack template, focusing on how Cluster.dev facilitates this otherwise complex task.

By following the steps in this guide, you’ll successfully establish the following components within your AWS cloud infrastructure:

  • EKS Cluster: This forms the core of your Kubernetes environment hosted on AWS.
  • cert-manager: An add-on for the EKS cluster that automates the management and issuance of TLS certificates.
  • ingress-nginx: An ingress controller for Kubernetes that manages external access to HTTP and HTTPS routes.
  • external-dns: This component links Kubernetes Services and Ingresses with DNS providers to manage DNS records dynamically.
  • argocd: A Kubernetes add-on that serves as a declarative GitOps continuous delivery tool.
  • AWS IAM Roles for EKS: Specifically, roles for cert-manager and external-dns that utilize IAM roles for service accounts (IRSA) to provide the required AWS permissions.
  • Route 53 Zone (optional): A hosted zone under the cluster.dev domain for DNS management, created only if you opt to use that domain.
  • VPC for EKS Cluster (optional): A new VPC is created if no vpc_id is specified, ensuring your EKS cluster has a secure and isolated network environment.


Before getting started and deploying an EKS cluster with Cluster.dev, it is important to have a well-prepared environment. Here are the required prerequisites:

AWS Account

First and foremost, an active AWS account is required. Since Cluster.dev will be orchestrating resources within AWS, you need the ability to create and manage AWS services. If you haven’t already, you should sign up for an AWS account on the Amazon Web Services website.

Terraform Installation

Cluster.dev uses Terraform under the hood for provisioning cloud resources. Terraform is an open-source infrastructure-as-code tool that provides a consistent CLI workflow to manage hundreds of cloud services. You will need Terraform version 1.4 or higher installed, which is required for compatibility with the templates Cluster.dev provides. You can download Terraform from the official HashiCorp website.

AWS CLI Installation

The AWS Command Line Interface (CLI) is a tool that provides you with the ability to interact with AWS services. With AWS CLI, you can easily control multiple AWS services directly from the command line and automate them through scripts.

Installation on Windows:

For Windows users, install or update to the latest version of the AWS CLI using the official installer from AWS. To confirm the installation, open the Command Prompt and enter aws --version. You should see the AWS CLI version displayed, confirming that it’s installed correctly.

Installation on macOS:

Mac users can also install the AWS CLI using the bundled installer. Here’s how to do it:

  • Download the AWS CLI bundled installer using the curl command.
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
  • Run the downloaded package with the following command and follow the prompts.
sudo installer -pkg ./AWSCLIV2.pkg -target /
  • Once installed, you can verify the installation by typing aws --version in your terminal, which will return the version number of the AWS CLI.

kubectl Installation

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You will need kubectl to interact with your new EKS cluster, deploy applications, inspect and manage cluster resources, and view logs.

Install kubectl binary with curl on Windows:

Use the curl command to download a kubectl release (replace v1.29.0 with the version you need) with the following command line:

curl.exe -LO "https://dl.k8s.io/release/v1.29.0/bin/windows/amd64/kubectl.exe"

Install kubectl binary with curl on macOS:

Use the curl command to download the latest kubectl release with the following command line:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"

Cluster.dev Client Installation

The Cluster.dev client (cdev) is a command-line interface that allows you to interact with Cluster.dev services, enabling you to create, manage, and scale cloud infrastructure easily. The client provides a seamless experience for deploying infrastructure as code, integrating with cloud providers like AWS.

Installation on Windows and macOS:

To install the Cluster.dev client on your system, you can use a simple installer script that downloads the latest version of the client and installs it locally.

  • Fetch the installation script and execute it with the following command:
curl -fsSL https://raw.githubusercontent.com/shalb/cluster.dev/master/scripts/get_cdev.sh | sh

This script detects your operating system, downloads the correct version of the client, performs the installation, and prints instructions on how to proceed.

Having all the prerequisites in place is the first step towards a successful EKS deployment with Cluster.dev. These initial steps equip you with the necessary tools and access, setting the stage for a hassle-free setup. With these components ready, you’re prepared to go deeper into the Cluster.dev workflow and begin building your Kubernetes infrastructure on AWS.

Configuring AWS Access for Cluster.dev

To manage resources effectively on AWS using Cluster.dev, you need to configure access credentials. Cluster.dev uses these credentials to interact with AWS services, create resources, and manage the infrastructure on your behalf. There are two primary methods to configure AWS access for Cluster.dev: environment variables or the AWS shared credentials file.

Using Environment Variables

Environment variables are a common way to provide AWS credentials to applications running on your system. They are easy to set up and can be used in many different development environments. To configure your AWS access using environment variables, you need to set the following:

    1. AWS_ACCESS_KEY_ID: Your AWS access key.
    2. AWS_SECRET_ACCESS_KEY: Your AWS secret key.
    3. AWS_DEFAULT_REGION: The default AWS region where your resources will be created.

For example, you can open a terminal window and enter the following commands:

export AWS_ACCESS_KEY_ID="your_access_key_here"
export AWS_SECRET_ACCESS_KEY="your_secret_key_here"
export AWS_DEFAULT_REGION="desired_aws_region"

Make sure to replace your_access_key_here, your_secret_key_here, and desired_aws_region with your actual AWS access key, secret key, and the AWS region you intend to use, respectively.
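As a quick sanity check before running any deployment commands, you can verify that all three variables are present in the current shell. The sketch below is generic POSIX shell; the function name check_aws_env is just an illustration, not part of any tool:

```shell
# Report any required AWS environment variables that are missing,
# returning a non-zero status if at least one is unset or empty.
check_aws_env() {
  missing=0
  for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION; do
    # Indirect expansion via eval keeps this portable across POSIX shells.
    eval "value=\${$var:-}"
    if [ -z "$value" ]; then
      echo "missing: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Running such a check before invoking cdev surfaces credential problems early instead of partway through a deployment.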

Using the AWS Shared Credentials File

The AWS shared credentials file is a standardized way of managing credentials for various AWS tools and SDKs. Its default location is ~/.aws/credentials on Linux and macOS, or C:\Users\USERNAME\.aws\credentials on Windows. This file allows you to define multiple profiles with different sets of credentials, and Cluster.dev can use these profiles to authenticate with AWS.

Here is an example of what the AWS shared credentials file might look like:

[cluster-dev]
aws_access_key_id = your_access_key_here
aws_secret_access_key = your_secret_key_here

And the corresponding AWS shared configuration file located at ~/.aws/config could be:

[profile cluster-dev]
region = desired_aws_region

After configuring the credentials file, tell the AWS CLI and other AWS tools which profile to use by setting the AWS_PROFILE environment variable:

export AWS_PROFILE=cluster-dev

It’s important to note that the credentials must have the necessary permissions to manage AWS resources. Cluster.dev requires an IAM user with administrative permissions to provision resources effectively.
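For reference, “administrative permissions” in IAM terms means a policy equivalent to the AWS-managed AdministratorAccess policy, whose policy document boils down to the fragment below. In practice you would attach the managed policy (arn:aws:iam::aws:policy/AdministratorAccess) to the user rather than author this yourself:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
```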

With AWS access properly configured, you’ve laid a secure foundation for Cluster.dev to interact with AWS services. This is an important step, enabling Cluster.dev to deploy and manage the EKS cluster and its resources efficiently. As you proceed with the guide, remember that these credentials are the bridge between Cluster.dev and AWS, enabling smooth communication and operations.

Creating an S3 Bucket for Cluster.dev States

For Cluster.dev to operate effectively, it requires a dedicated S3 bucket to store the state of your infrastructure. This state is a critical component, as it tracks the configuration and status of all the resources Cluster.dev manages in your AWS environment. Here’s how you can create an S3 bucket for this purpose:

Why S3 for State Management?

Amazon S3 provides a reliable and secure storage solution that Cluster.dev can use to keep state files. These files are used by Terraform to map real-world resources to your configuration, track metadata, and improve performance for large infrastructures.

Creating the S3 Bucket

To create an S3 bucket, use the AWS CLI tool with the mb (make bucket) command. This is a simple yet powerful command that provisions a new S3 bucket in your specified region. The command format is:

aws s3 mb s3://<bucket-name>

Replace <bucket-name> with a unique name for your S3 bucket. Bucket names must be globally unique and not currently in use by any other AWS account.
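Since an invalid or already-taken name is the most common reason mb fails, it can help to sanity-check a candidate name locally first. The sketch below covers only the core S3 naming rules (3–63 characters; lowercase letters, digits, and hyphens; must not start or end with a hyphen) — the full rules also cover dots and IP-address-like names:

```shell
# Return 0 if the given name satisfies the basic S3 bucket naming rules.
valid_bucket_name() {
  name=$1
  # Reject any character outside lowercase letters, digits, and hyphens.
  case "$name" in *[!a-z0-9-]*) return 1 ;; esac
  # Enforce the 3-63 character length limit.
  [ "${#name}" -ge 3 ] && [ "${#name}" -le 63 ] || return 1
  # A name cannot begin or end with a hyphen.
  case "$name" in -*|*-) return 1 ;; esac
  return 0
}
```

A guard like valid_bucket_name "cdev-states" before the mb command catches typos without a round-trip to AWS.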

Here’s an example command to create a bucket named cdev-states:

aws s3 mb s3://cdev-states

When running this command, you should receive a message indicating that the bucket has been created successfully.

Setting Bucket Policies (Optional)

After creating your bucket, you can also configure additional settings, such as bucket policies, to define permissions and secure access to your S3 bucket. This can be done through the AWS Management Console or by using the AWS CLI to apply a policy JSON file.
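As an illustration, a policy that limits access to the state bucket to a single IAM user might look like the sketch below. The account ID (123456789012) and user name (cdev-admin) are placeholders; you would save the document as policy.json and apply it with aws s3api put-bucket-policy --bucket cdev-states --policy file://policy.json:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StateBucketAccess",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/cdev-admin" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::cdev-states",
        "arn:aws:s3:::cdev-states/*"
      ]
    }
  ]
}
```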

Versioning (Optional)

Enabling versioning on your S3 bucket is recommended, as it preserves previous versions of state files, adding another layer of protection for your state data. To enable versioning, you can use the following command:

aws s3api put-bucket-versioning --bucket cdev-states --versioning-configuration Status=Enabled

This command sets the versioning status to Enabled for the cdev-states bucket, following best practices for state management.

By following these steps, you have successfully created an S3 bucket for Cluster.dev to store its state. This bucket is a foundational element supporting the deployment and management of your cloud resources with Cluster.dev, giving you a robust and secure infrastructure-as-code experience.

DNS Configuration for the EKS Cluster

The Domain Name System (DNS) plays an important role in how users access your EKS cluster. It translates user-friendly domain names into IP addresses that computers use to identify each other on the network. For an EKS cluster, AWS Route 53 acts as the scalable DNS service that routes user requests to the cluster’s endpoints.

Understanding Route 53 Hosted Zones

Hosted zones in AWS Route 53 are containers that hold information about how you want to route traffic for a domain (such as example.com) and its subdomains (such as dev.example.com). For the AWS-EKS stack template provided by Cluster.dev, you’ll need to define a Route 53 hosted zone for your cluster.

Options for Setting Up a Hosted Zone

  • Existing Hosted Zones: If you already have a hosted zone in Route 53, you can use it to route traffic to your EKS cluster. This is a good option if you’re expanding or updating existing infrastructure.
  • Creating a New Hosted Zone: If you don’t have a hosted zone or need a new one, you can create one through the AWS Management Console or using the AWS CLI. Here’s how to create a new hosted zone using the CLI:
aws route53 create-hosted-zone --name example.com --caller-reference MyHostedZone --hosted-zone-config Comment="My hosted zone for EKS."

Replace example.com with your domain name and adjust the caller-reference and Comment values to suit your needs.

  • Using the “cluster.dev” Domain: Cluster.dev offers a streamlined way to delegate a subdomain under the “cluster.dev” domain. This can be a practical choice for development and staging environments, or if you don’t own a domain.

Verifying DNS Configuration

Once you’ve set up or identified your hosted zone, ensure that it’s configured correctly to point to the EKS cluster. Use the AWS Route 53 console or CLI to check the DNS records. The correct configuration will resolve your domain to the EKS cluster endpoints.

By properly configuring DNS with AWS Route 53, you ensure reliable and accessible connections to your EKS cluster. Cluster.dev leverages this configuration to provide a seamless deployment and management experience, making it straightforward to link your domain to your cloud resources.

Creating the EKS Cluster Project with Cluster.dev

The creation of an EKS cluster on AWS is greatly facilitated by Cluster.dev’s automated processes, which involve setting up a project that acts as the blueprint for your infrastructure. Here is a detailed explanation of how to create an EKS cluster project using Cluster.dev.

Setting Up the Project Directory

Start by creating a directory locally, where your project’s configuration files will be located. This directory will serve as the workspace for your project. Navigate to your chosen directory in the terminal before proceeding with the next steps.

Initializing the Project

Cluster.dev simplifies project initialization by providing ready-to-use templates. To create a new EKS project, execute the following command within your project directory:

cdev project create

This command fetches the AWS-EKS stack template and sets up your project structure.

Customizing the Configuration Files

After initializing the project, you will have several YAML configuration files in your directory, including project.yaml, backend.yaml, and stack.yaml. These files are critical as they define your project’s global variables, backend configuration for state storage, and stack configuration respectively.

  • project.yaml: This file contains global project variables such as the organization name, region, and state bucket name. It represents the overall settings of your project.
  • backend.yaml: Configures the backend for Cluster.dev and Terraform states. If you’re using an S3 bucket for state storage, this file includes the bucket name and related settings.
  • stack.yaml: Tells Cluster.dev which template to use and what variables to apply. It’s where you set your environment-specific variables and configurations.
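As an illustration only — the exact keys come from the generated sample files and may differ between Cluster.dev versions — a backend.yaml pointing at the S3 bucket created earlier might look roughly like this:

```yaml
# Illustrative sketch - keep the structure of the generated backend.yaml.
name: aws-backend
kind: Backend
provider: s3
spec:
  bucket: cdev-states     # bucket created for Cluster.dev states
  region: eu-central-1    # region where the bucket lives
```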

Review and edit these files to match your project’s requirements. For example, in stack.yaml, you might define the desired instance types, the number of nodes, and other EKS-specific settings.
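To make that concrete, a stack.yaml for the AWS-EKS template might look roughly like the sketch below. The variable names here are illustrative assumptions, not a definitive schema — always start from the sample stack.yaml generated in your project and consult the template’s documentation for the supported variables:

```yaml
# Illustrative sketch - variable names are assumptions, not a definitive schema.
name: eks-demo
template: "https://github.com/shalb/cdev-aws-eks?ref=main"  # template source (assumed ref)
kind: Stack
backend: aws-backend
variables:
  region: eu-central-1
  domain: cluster.dev        # or your own Route 53 zone
  instance_type: "t3.medium" # worker node size
  eks_version: "1.27"
```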

Applying Variables and Exports

Ensure that you replace placeholder values in your configuration files with actual values for your AWS setup. This includes the correct project name, AWS region, and any other variables relevant to your EKS cluster. For example, in project.yaml:

  organization: shalb
  region: eu-central-1
  state_bucket_name: cdev-states

Committing the Project Files

Once you’ve customized the configuration files, commit them to your version control system. This practice is essential for collaboration and maintaining a history of your infrastructure’s state.

With Cluster.dev, creating an EKS cluster project is a streamlined process: initialize a project from a template, customize the configurations, and apply the necessary variables. By following these steps, you set the stage for a successful EKS deployment on AWS, utilizing Cluster.dev’s powerful automation capabilities.

Executing Commands to Launch the EKS Cluster

After setting up your project directory and customizing the necessary configuration files, the next step is to execute specific commands that will build and launch your EKS cluster on AWS.

Building the Project with cdev plan

The cdev plan command previews the actions Cluster.dev will perform. This command does not make any changes to your actual infrastructure; instead, it outputs what will be created, updated, or destroyed. This step is crucial for verifying that your configurations are correct before proceeding with the actual deployment.

In your terminal, within your project directory, run:

cdev plan

Review the output carefully to ensure all planned actions align with your infrastructure goals.

Applying the Configuration with cdev apply

Before running cdev apply, it’s important to double-check your stack.yaml file for any placeholders or commented fields. Ensure that all values, including VPC and subnet IDs if using existing ones, are correctly set.

When you’re ready to deploy, execute:

cdev apply

For a more detailed output, especially useful for troubleshooting or getting more insight into the deployment process, run the command in debug mode:

cdev apply -l debug

Post-Deployment Steps

Upon the successful execution of cdev apply, Cluster.dev will output important information, such as the ArgoCD URL for your cluster. This URL is essential for accessing the ArgoCD dashboard, which provides a graphical interface for managing Kubernetes applications.

To access the ArgoCD UI, use the “admin” login along with the bcrypt-hashed password specified in your stack.yaml. This is also a good time to verify that all resources defined in the stack template have been deployed correctly.

Additionally, you will receive a command to retrieve the kubeconfig file. This file is necessary for interacting with your Kubernetes cluster using kubectl. Follow the given instructions to set up your kubeconfig.

Cleaning Up Resources with cdev destroy

If at any point you need to tear down the cluster and all associated resources, Cluster.dev provides a straightforward command:

cdev destroy

This command should be used with caution, as it will remove all infrastructure components associated with the project.

Launching an EKS cluster on AWS with Cluster.dev involves careful planning and verification at each step. By executing cdev plan and then cdev apply, you transition from planning to actual deployment, culminating in a fully functional EKS cluster. Always remember to use debug mode for detailed logging and to confirm all actions before applying changes to your live environment.


In wrapping up this guide on deploying an EKS cluster on AWS using Cluster.dev, it’s clear that the path to a robust, scalable Kubernetes environment doesn’t have to be fraught with complexity. Cluster.dev stands out in the often turbulent seas of cloud infrastructure management, providing a streamlined, user-friendly approach to what has traditionally been a complex process.

From setting up essential prerequisites to executing finely tuned commands that breathe life into your infrastructure, every step has been designed to minimize friction and enhance productivity. The creation of an S3 bucket for state management, the configuration of DNS with AWS Route 53, and the initialization of a Cluster.dev project have been transformed from potentially cumbersome tasks into straightforward steps, thanks to the abstraction and automation Cluster.dev offers.

This guide has demonstrated that, regardless of your expertise in Kubernetes or AWS, Cluster.dev equips you with the tools and processes to deploy an EKS cluster with confidence and clarity. By abstracting the underlying complexities and exposing a clear, concise interface, Cluster.dev lets developers and DevOps professionals spend more time on their core applications and less on the intricacies of cloud infrastructure.
