diff --git a/Assets/ansible-master-installation.png b/Assets/ansible-master-installation.png
new file mode 100644
index 000000000..698adca35
Binary files /dev/null and b/Assets/ansible-master-installation.png differ
diff --git a/Assets/ansible-worker-installation.png b/Assets/ansible-worker-installation.png
new file mode 100644
index 000000000..0dcd61268
Binary files /dev/null and b/Assets/ansible-worker-installation.png differ
diff --git a/Assets/terraform-provisioning.png b/Assets/terraform-provisioning.png
new file mode 100644
index 000000000..cec73a508
Binary files /dev/null and b/Assets/terraform-provisioning.png differ
diff --git a/README.md b/README.md
index 5a83b834c..b51808632 100644
--- a/README.md
+++ b/README.md
@@ -24,6 +24,8 @@ WanderLust is a simple MERN travel blog website ✈ This project is aimed to hel
- ArgoCD (CD)
- Redis (Caching)
- AWS EKS (Kubernetes)
+- Terraform (Infrastructure Provisioning)
+- Ansible (Configuration Management)
- Helm (Monitoring using grafana and prometheus)
### How pipeline will look after deployment:
@@ -42,12 +44,15 @@ WanderLust is a simple MERN travel blog website ✈ This project is aimed to hel
| Tech stack | Installation |
| -------- | ------- |
-| Jenkins Master | Install and configure Jenkins |
-| eksctl | Install eksctl |
+| Jenkins Master | Create a Jenkins Master EC2 instance |
+| Jenkins-Worker | Create a Jenkins-Worker EC2 instance |
+| Bastion-Host | Create a Bastion Host to provision the EKS cluster using Terraform and configure the master and worker nodes using Ansible |
+| Terraform | Provision EKS Cluster with Terraform |
+| Ansible | Perform Configuration Management on Jenkins Master and Jenkins Worker |
+| Jenkins Worker Setup | Set up the Jenkins Worker as a node to run jobs from the Jenkins Master |
| Argocd | Install and configure ArgoCD |
-| Jenkins-Worker Setup | Install and configure Jenkins Worker Node |
| OWASP setup | Install and configure OWASP |
-| SonarQube | Install and configure SonarQube |
+| SonarQube | Configure SonarQube |
| Email Notification Setup | Email notification setup |
| Monitoring | Prometheus and grafana setup using helm charts
| Clean Up | Clean up |
@@ -60,128 +65,156 @@ WanderLust is a simple MERN travel blog website ✈ This project is aimed to hel
sudo su
```
> [!Note]
-> This project will be implemented on North California region (us-west-1).
+> This project will be implemented on Mumbai region (ap-south-1).
-- Create 1 Master machine on AWS with 2CPU, 8GB of RAM (t2.large) and 29 GB of storage and install Docker on it.
+
+- Create 1 Master machine on AWS with 2CPU, 8GB of RAM (t2.large) and 29 GB of storage.
#
- Open the below ports in security group of master machine and also attach same security group to Jenkins worker node (We will create worker node shortly)

> [!Note]
-> We are creating this master machine because we will configure Jenkins master, eksctl, EKS cluster creation from here.
+> We are creating this master machine because we will configure Jenkins master on this machine.
+
+- Create 1 Jenkins Worker Instance on AWS with 2CPU, 8GB of RAM (t2.large) and 29 GB of storage.
+
+> [!Note]
+> We are creating this worker machine because we will run Jenkins jobs on this machine.
+
+After creating these two EC2 instances, we will configure the Jenkins master and worker using Ansible, and provision the EKS cluster on AWS using Terraform.
+
+- Create 1 Bastion Machine on AWS with 1 CPU, 2GB of RAM (t2.small) and 8GB of storage.
+
+> [!Note]
+> We are creating this bastion machine because we will use it to provision the EKS cluster using Terraform and configure the master and worker nodes using Ansible.
-Install & Configure Docker by using below command, "NewGrp docker" will refresh the group config hence no need to restart the EC2 machine.
+Install Terraform, Ansible, and the AWS CLI on the bastion machine:
+1. Install AWS CLI
```bash
-apt-get update
+#Installing AWS CLI
+sudo apt update
+curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
+sudo apt install unzip
+unzip awscliv2.zip
+sudo ./aws/install
```
+
+- Configure AWS CLI with Access key
```bash
-apt-get install docker.io -y
-usermod -aG docker ubuntu && newgrp docker
+aws configure
```
-#
-- Install and configure Jenkins (Master machine)
+- Enter the AWS credentials of the IAM user:
+  - Access Key
+  - Secret Key
+  - Default region
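+
+To confirm the CLI picked up the credentials correctly, you can ask AWS to identify the caller (a quick sanity check; it requires the keys entered above to be valid):
+
+```shell
+# Verify the configured credentials by asking STS who we are
+aws sts get-caller-identity
+```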
+
+
+2. Install Terraform
```bash
-sudo apt update -y
-sudo apt install fontconfig openjdk-17-jre -y
+sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
+wget -O- https://apt.releases.hashicorp.com/gpg | \
+gpg --dearmor | \
+sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
+gpg --no-default-keyring \
+--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
+--fingerprint
+echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
+https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
+sudo tee /etc/apt/sources.list.d/hashicorp.list
+sudo apt update
+sudo apt-get install terraform -y
+```
-sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
- https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
-
-echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
- https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
- /etc/apt/sources.list.d/jenkins.list > /dev/null
-
-sudo apt-get update -y
-sudo apt-get install jenkins -y
+3. Install Ansible
+```bash
+#Installing Ansible
+sudo apt update
+sudo apt install software-properties-common -y
+sudo apt-add-repository --yes --update ppa:ansible/ansible
+sudo apt install ansible -y
```
-- Now, access Jenkins Master on the browser on port 8080 and configure it.
-#
-- Create EKS Cluster on AWS (Master machine)
- - IAM user with **access keys and secret access keys**
- - AWSCLI should be configured (Setup AWSCLI)
- ```bash
- curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
- sudo apt install unzip
- unzip awscliv2.zip
- sudo ./aws/install
- aws configure
- ```
- - Install **kubectl** (Master machine)(Setup kubectl )
- ```bash
- curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
- chmod +x ./kubectl
- sudo mv ./kubectl /usr/local/bin
- kubectl version --short --client
- ```
+After installing these tools, we will use them to provision the EKS cluster and configure the master and worker nodes.
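+
+Before proceeding, it is worth confirming that all three tools are on the PATH (exact version numbers will vary):
+
+```shell
+# Confirm each tool is installed and reachable
+aws --version
+terraform -version
+ansible --version
+```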
+
+- Clone this repository on the VM to get access to the Terraform files and Ansible configurations.
+```bash
+git clone https://github.com/rcheeez/Wanderlust-Mega-Project.git
+```
+
+- Run Terraform to provision the EKS cluster
+
+```bash
+cd terraform
+terraform init
+terraform plan
+terraform apply
+```
+
+Running these commands will provision the EKS cluster on AWS.
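+
+Once the apply completes, a sketch of pointing kubectl at the new cluster, assuming the default region and cluster name from `terraform/variables.tf`:
+
+```shell
+# Update kubeconfig so kubectl can talk to the newly provisioned cluster
+aws eks update-kubeconfig --region ap-south-1 --name wanderlust-eks-cluster
+
+# List the worker nodes to confirm the node group joined the cluster
+kubectl get nodes
+```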
- - Install **eksctl** (Master machine) (Setup eksctl)
- ```bash
- curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
- sudo mv /tmp/eksctl /usr/local/bin
- eksctl version
- ```
-
- - Create EKS Cluster (Master machine)
- ```bash
- eksctl create cluster --name=wanderlust \
- --region=us-west-1 \
- --version=1.30 \
- --without-nodegroup
- ```
- - Associate IAM OIDC Provider (Master machine)
- ```bash
- eksctl utils associate-iam-oidc-provider \
- --region us-west-1 \
- --cluster wanderlust \
- --approve
- ```
- - Create Nodegroup (Master machine)
- ```bash
- eksctl create nodegroup --cluster=wanderlust \
- --region=us-west-1 \
- --name=wanderlust \
- --node-type=t2.large \
- --nodes=2 \
- --nodes-min=2 \
- --nodes-max=2 \
- --node-volume-size=29 \
- --ssh-access \
- --ssh-public-key=eks-nodegroup-key
- ```
> [!Note]
-> Make sure the ssh-public-key "eks-nodegroup-key is available in your aws account"
-#
-- Setting up jenkins worker node
- - Create a new EC2 instance (Jenkins Worker) with 2CPU, 8GB of RAM (t2.large) and 29 GB of storage and install java on it
- ```bash
- sudo apt update -y
- sudo apt install fontconfig openjdk-17-jre -y
- ```
- - Create an IAM role with administrator access attach it to the jenkins worker node Select Jenkins worker node EC2 instance --> Actions --> Security --> Modify IAM role
- 
+> Make sure the SSH key pair "ec2-key-pair" is available in your AWS account.
- - Configure AWSCLI (Setup AWSCLI)
- ```bash
- sudo su
- ```
- ```bash
- curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
- sudo apt install unzip
- unzip awscliv2.zip
- sudo ./aws/install
- aws configure
- ```
-#
- - generate ssh keys (Master machine) to setup jenkins master-slave
- ```bash
- ssh-keygen
- ```
- 
-#
- - Now move to directory where your ssh keys are generated and copy the content of public key and paste to authorized_keys file of the Jenkins worker node.
-#
+
+
+
+# Run Ansible to configure master and worker nodes
+
+Let's first configure the Ansible inventory file so it points to the master and worker nodes, using their IP addresses.
+
+You can locate this file at `/etc/ansible/hosts`.
+
+Update it with the following entries:
+
+```ini
+[master]
+master_server ansible_host=
+
+[agent]
+worker_server ansible_host=
+
+[all:vars]
+ansible_python_interpreter=/usr/bin/python3
+ansible_user=ubuntu
+ansible_ssh_private_key_file=/home/ubuntu/keys/ #ec2-key-pair.pem
+```
+
+> [!Note]
+> Restrict the permissions on the private key (.pem) file, or SSH will refuse to use it.
+
+```bash
+sudo chmod 600 /home/ubuntu/keys/*.pem
+```
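+
+Before running the playbooks, it's worth checking that Ansible can actually reach both hosts over SSH; a minimal connectivity test:
+
+```shell
+# Ping every host in the inventory over SSH using Ansible's ping module
+ansible -i /etc/ansible/hosts all -m ping
+```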
+
+Now, let's run the playbook to configure the master and worker nodes.
+
+- Run this command first to install the community.docker collection so that Ansible can also manage Docker containers on the VMs.
+
+```bash
+ansible-galaxy collection install community.docker
+```
+
+- Now run the playbooks to apply the configurations to the VMs:
+```bash
+ansible-playbook -i /etc/ansible/hosts master_server_play.yml # to run the configurations in the jenkins master
+ansible-playbook -i /etc/ansible/hosts agent_server_play.yml # to run the configurations in the jenkins worker
+```
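+
+If you want to preview what a playbook would change before touching the instances, Ansible's check mode gives a dry run:
+
+```shell
+# Dry run: report the changes the playbook would make without applying them
+ansible-playbook -i /etc/ansible/hosts master_server_play.yml --check
+```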
+Jenkins Master Configuration
+
+![Jenkins Master Configuration](Assets/ansible-master-installation.png)
+
+Jenkins Worker Configuration
+
+![Jenkins Worker Configuration](Assets/ansible-worker-installation.png)
+
+> [!Note]
+> Make sure the master and worker nodes are up and running before running the playbook. Also, make sure the ssh-public-key "ec2-key-pair" is available in your aws account.
+
+This will apply all the configuration to those two EC2 instances.
+
+After that, let's set up the worker instance as a Jenkins agent to run jobs.
+
+# Jenkins Worker Setup in Jenkins
- Now, go to the jenkins master and navigate to Manage jenkins --> Nodes, and click on Add node
- name: Node
- type: permanent agent
@@ -198,29 +231,9 @@ sudo apt-get install jenkins -y
- And your jenkins worker node is added

-#
-- Install docker (Jenkins Worker)
-
-```bash
-apt install docker.io -y
-usermod -aG docker ubuntu && newgrp docker
-```
-#
-- Install and configure SonarQube (Master machine)
-```bash
-docker run -itd --name SonarQube-Server -p 9000:9000 sonarqube:lts-community
-```
+After setting up the EKS cluster infrastructure on AWS using Terraform, we can configure Argo CD by creating a namespace, applying the manifests, and changing the service type to NodePort.
#
-- Install Trivy (Jenkins Worker)
-```bash
-sudo apt-get install wget apt-transport-https gnupg lsb-release -y
-wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
-echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
-sudo apt-get update -y
-sudo apt-get install trivy -y
-```
-#
-- Install and Configure ArgoCD (Master Machine)
+- Configure ArgoCD
- Create argocd namespace
```bash
kubectl create namespace argocd
@@ -234,13 +247,7 @@ sudo apt-get install trivy -y
watch kubectl get pods -n argocd
```
- Install argocd CLI
- ```bash
- curl --silent --location -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v2.4.7/argocd-linux-amd64
- ```
- - Provide executable permission
- ```bash
- chmod +x /usr/local/bin/argocd
- ```
+ This has already been installed, with executable permissions, on the Jenkins master machine by the Ansible playbook.
- Check argocd services
```bash
kubectl get svc -n argocd
diff --git a/ansible/agent_server_play.yml b/ansible/agent_server_play.yml
new file mode 100644
index 000000000..6531e2124
--- /dev/null
+++ b/ansible/agent_server_play.yml
@@ -0,0 +1,99 @@
+-
+ name: Install and Configure Java, Docker, AWS CLI, and Trivy on worker instances
+ hosts: agent
+ become: yes
+ tasks:
+ - name: Update Apt Cache
+ apt:
+ update_cache: yes
+
+ - name: Install Java 17
+ apt:
+ name: openjdk-17-jdk-headless
+ state: present
+
+ - name: Install Docker
+ apt:
+ name: docker.io
+ state: present
+
+ - name: Ensure Docker socket is accessible
+ file:
+ path: /var/run/docker.sock
+ mode: '0777'
+ state: touch
+
+ - name: Install Docker Compose
+ apt:
+ name: docker-compose
+ state: present
+
+ - name: Download AWS CLI Installer
+ get_url:
+ url: https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
+ dest: /tmp/awscliv2.zip
+
+ - name: Install Zip and Unzip
+ apt:
+ name:
+ - zip
+ - unzip
+ state: present
+
+ - name: Unzip AWS CLI Installer
+ unarchive:
+ src: /tmp/awscliv2.zip
+ dest: /tmp
+ remote_src: yes
+
+ - name: Install AWS CLI
+ command: /tmp/aws/install
+
+ - name: Configure AWS CLI (Manual Step)
+ debug:
+ msg: "Run 'aws configure' manually to set up AWS credentials."
+
+ - name: Download Kubectl from AWS
+ get_url:
+ url: https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
+ dest: /tmp/kubectl
+ mode: '0755'
+
+ - name: Move Kubectl to /usr/local/bin
+ command: mv /tmp/kubectl /usr/local/bin/kubectl
+
+ - name: Verify Kubectl Installation
+ command: kubectl version --short --client
+ register: kubectl_version
+ changed_when: false
+
+ - name: Display Kubectl version
+ debug:
+ var: kubectl_version.stdout
+
+ - name: Install Dependencies for Trivy
+ apt:
+ name:
+ - wget
+ - apt-transport-https
+ - gnupg
+ - lsb-release
+ state: present
+
+ - name: Add Trivy GPG
+ shell: |
+ wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
+
+ - name: Add Trivy Repository
+ shell: |
+ echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
+
+ - name: Install Trivy
+ apt:
+ name: trivy
+ state: present
+ update_cache: yes
+
+ - name: Print Completed Message
+ debug:
+ msg: "Java, Docker, AWS CLI, and Trivy have been installed and configured on the worker instance."
\ No newline at end of file
diff --git a/ansible/master_server_play.yml b/ansible/master_server_play.yml
new file mode 100644
index 000000000..07256de32
--- /dev/null
+++ b/ansible/master_server_play.yml
@@ -0,0 +1,115 @@
+-
+ name: Install and configure Jenkins, Docker, AWS CLI, kubectl, SonarQube, and ArgoCD on AWS Master Node
+ hosts: master
+ become: yes
+ tasks:
+ - name: Update Apt Cache
+ apt:
+ update_cache: yes
+ state: present
+
+ - name: Install Java 17
+ apt:
+ name: openjdk-17-jdk-headless
+ state: present
+
+ - name: Download Jenkins Repository Key
+ get_url:
+ url: https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
+ dest: /usr/share/keyrings/jenkins-keyring.asc
+
+ - name: Add Jenkins Repository
+ apt_repository:
+ repo: "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/"
+ filename: jenkins.list
+ state: present
+
+ - name: Install Jenkins
+ apt:
+ name: jenkins
+ state: present
+ update_cache: yes
+
+ - name: Enable and Start Jenkins
+ service:
+ name: jenkins
+ state: started
+ enabled: yes
+
+ - name: Install Docker
+ apt:
+ name: docker.io
+ state: present
+
+ - name: Ensure Docker socket is Accessible
+ file:
+ path: /var/run/docker.sock
+ state: touch
+ mode: '0777'
+
+ - name: Install Docker Compose
+ apt:
+ name: docker-compose
+ state: present
+
+ - name: Download AWS CLI Installer
+ get_url:
+ url: https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
+ dest: /tmp/awscliv2.zip
+
+ - name: Install Zip and Unzip
+ apt:
+ name:
+ - zip
+ - unzip
+ state: present
+
+ - name: Unzip AWS CLI Installer
+ unarchive:
+ src: /tmp/awscliv2.zip
+ dest: /tmp
+ remote_src: yes
+
+ - name: Install AWS CLI
+ command: /tmp/aws/install
+
+ - name: Configure AWS CLI (Manual Step)
+ debug:
+ msg: "Run 'aws configure' manually to set up AWS credentials."
+
+ - name: Download Kubectl from AWS
+ get_url:
+ url: https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
+ dest: /tmp/kubectl
+ mode: '0755'
+
+ - name: Move Kubectl to /usr/local/bin
+ command: mv /tmp/kubectl /usr/local/bin/kubectl
+
+ - name: Verify Kubectl Installation
+ command: kubectl version --short --client
+ register: kubectl_version
+ changed_when: false
+
+ - name: Display Kubectl version
+ debug:
+ var: kubectl_version.stdout
+
+ - name: Install SonarQube using Docker Container
+ docker_container:
+ name: sonarqube
+ image: sonarqube:lts-community
+ ports:
+ - "9000:9000"
+ state: started
+ restart_policy: always
+
+ - name: Download ArgoCD CLI
+ get_url:
+ url: https://github.com/argoproj/argo-cd/releases/download/v2.4.7/argocd-linux-amd64
+ dest: /usr/local/bin/argocd
+ mode: '0755'
+
+ - name: Print Completion Message
+ debug:
+ msg: "Jenkins, Docker, AWS CLI, kubectl, SonarQube, and ArgoCD have been installed and configured on the AWS Master Node."
\ No newline at end of file
diff --git a/frontend/package-lock.json b/frontend/package-lock.json
index cb6707954..62daabbeb 100644
--- a/frontend/package-lock.json
+++ b/frontend/package-lock.json
@@ -12,6 +12,7 @@
"axios": "^1.6.1",
"class-variance-authority": "^0.7.0",
"clsx": "^2.0.0",
+ "frontend": "file:",
"lucide-react": "^0.292.0",
"react": "^18.2.0",
"react-dom": "^18.2.0",
@@ -5520,6 +5521,10 @@
"url": "https://github.com/sponsors/rawify"
}
},
+ "node_modules/frontend": {
+ "resolved": "",
+ "link": true
+ },
"node_modules/fs.realpath": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz",
diff --git a/frontend/package.json b/frontend/package.json
index 4b7f5a4b4..8be65be6e 100644
--- a/frontend/package.json
+++ b/frontend/package.json
@@ -16,6 +16,7 @@
"axios": "^1.6.1",
"class-variance-authority": "^0.7.0",
"clsx": "^2.0.0",
+ "frontend": "file:",
"lucide-react": "^0.292.0",
"react": "^18.2.0",
"react-dom": "^18.2.0",
diff --git a/terraform/main.tf b/terraform/main.tf
new file mode 100644
index 000000000..2c4fe4cdc
--- /dev/null
+++ b/terraform/main.tf
@@ -0,0 +1,201 @@
+resource "aws_default_vpc" "default_vpc" {
+
+}
+
+resource "aws_default_subnet" "default_subnet_1" {
+ availability_zone = "ap-south-1a"
+}
+
+resource "aws_default_subnet" "default_subnet_2" {
+ availability_zone = "ap-south-1b"
+}
+
+resource "aws_security_group" "cluster_sg" {
+ name = "cluster-sg"
+ description = "Security group for meet application"
+ vpc_id = aws_default_vpc.default_vpc.id
+
+ ingress {
+ from_port = 22
+ to_port = 22
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ ingress {
+ from_port = 80
+ to_port = 80
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ ingress {
+ from_port = 443
+ to_port = 443
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ ingress {
+ from_port = 10250
+ to_port = 10250
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ ingress {
+ from_port = 30000
+ to_port = 32767
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ egress {
+ from_port = 0
+ to_port = 0
+ protocol = "-1"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ tags = {
+ Name = "cluster_sg"
+ }
+}
+
+resource "aws_security_group" "eks_worker_sg" {
+ name = "eks-worker-sg"
+ description = "Security group for EKS worker nodes"
+ vpc_id = aws_default_vpc.default_vpc.id
+
+ ingress {
+ from_port = 10250
+ to_port = 10250
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ ingress {
+ from_port = 5000
+ to_port = 5000
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ ingress {
+ from_port = 22
+ to_port = 22
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ ingress {
+ from_port = 443
+ to_port = 443
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ egress {
+ from_port = 0
+ to_port = 0
+ protocol = "-1"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+}
+
+resource "aws_iam_role" "eks_cluster_role" {
+ name = "eks-cluster-role"
+ assume_role_policy = jsonencode({
+ Version = "2012-10-17",
+ Statement = [
+ {
+ Effect = "Allow",
+ Principal = {
+ Service = "eks.amazonaws.com"
+ },
+ Action = "sts:AssumeRole"
+ }
+ ]
+ })
+}
+resource "aws_iam_role" "eks_node_role" {
+ name = "eks-node-role"
+ assume_role_policy = jsonencode({
+ Version = "2012-10-17",
+ Statement = [
+ {
+ Effect = "Allow",
+ Principal = {
+ Service = "ec2.amazonaws.com"
+ },
+ Action = "sts:AssumeRole"
+ }
+ ]
+ })
+}
+
+resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
+ role = aws_iam_role.eks_cluster_role.name
+ policy_arn = var.eks_cluster_policy_arn
+}
+
+resource "aws_eks_cluster" "eks_cluster" {
+ name = var.eks_cluster_name
+ role_arn = aws_iam_role.eks_cluster_role.arn
+
+ vpc_config {
+ subnet_ids = [aws_default_subnet.default_subnet_1.id, aws_default_subnet.default_subnet_2.id]
+ security_group_ids = [aws_security_group.cluster_sg.id]
+ }
+
+ depends_on = [
+ aws_iam_role_policy_attachment.eks_cluster_policy
+ ]
+}
+
+resource "aws_iam_role_policy_attachment" "eks_worker_node_policy" {
+ role = aws_iam_role.eks_node_role.name
+ policy_arn = var.eks_node_policy_arn
+}
+
+resource "aws_iam_role_policy_attachment" "eks_cni_policy" {
+ role = aws_iam_role.eks_node_role.name
+ policy_arn = var.eks_cni_policy_arn
+}
+
+resource "aws_iam_role_policy_attachment" "eks_ecr_read_only_policy" {
+ role = aws_iam_role.eks_node_role.name
+ policy_arn = var.eks_ecr_read_only_policy_arn
+}
+
+resource "aws_eks_node_group" "eks_node_group" {
+ cluster_name = aws_eks_cluster.eks_cluster.name
+ node_group_name = var.eks_node_group_name
+ node_role_arn = aws_iam_role.eks_node_role.arn
+ subnet_ids = [aws_default_subnet.default_subnet_1.id, aws_default_subnet.default_subnet_2.id]
+
+ scaling_config {
+ desired_size = 2
+ max_size = 2
+ min_size = 2
+ }
+
+ remote_access {
+ ec2_ssh_key = var.ec2_ssh_key_name
+ source_security_group_ids = [aws_security_group.eks_worker_sg.id]
+ }
+
+ instance_types = [var.eks_node_instance_type]
+
+ tags = {
+ Name = var.eks_node_group_name
+ }
+
+ depends_on = [
+ aws_security_group.eks_worker_sg,
+ aws_iam_role_policy_attachment.eks_worker_node_policy,
+ aws_iam_role_policy_attachment.eks_cni_policy,
+ aws_iam_role_policy_attachment.eks_ecr_read_only_policy,
+ aws_eks_cluster.eks_cluster
+ ]
+}
\ No newline at end of file
diff --git a/terraform/providers.tf b/terraform/providers.tf
new file mode 100644
index 000000000..11227067a
--- /dev/null
+++ b/terraform/providers.tf
@@ -0,0 +1,12 @@
+terraform {
+ required_providers {
+ aws = {
+ source = "hashicorp/aws"
+ version = "~> 5.0"
+ }
+ }
+}
+
+provider "aws" {
+ region = var.aws_region
+}
\ No newline at end of file
diff --git a/terraform/variables.tf b/terraform/variables.tf
new file mode 100644
index 000000000..3b598a4dc
--- /dev/null
+++ b/terraform/variables.tf
@@ -0,0 +1,60 @@
+variable "aws_region" {
+ description = "The AWS region to deploy resources"
+ type = string
+ default = "ap-south-1" # Update as needed
+}
+
+variable "ec2_ssh_key_name" {
+ description = "Name of the SSH key pair for EC2 instances"
+ type = string
+ default = "ec2-key-pair"
+}
+
+variable "ami_id" {
+ description = "AMI ID for the EC2 instances"
+ type = string
+ default = "ami-00bb6a80f01f03502" # Update as needed
+}
+
+variable "eks_cluster_name" {
+ description = "Name of the EKS cluster"
+ type = string
+ default = "wanderlust-eks-cluster"
+}
+
+variable "eks_node_group_name" {
+ description = "Name of the EKS node group"
+ type = string
+ default = "wanderlust-node-group"
+}
+
+variable "eks_node_instance_type" {
+ description = "Instance type for the EKS node group"
+ type = string
+ default = "t2.medium"
+}
+
+variable "eks_cluster_policy_arn" {
+ description = "ARN of the EKS cluster policy"
+ type = string
+ default = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
+}
+
+variable "eks_node_policy_arn" {
+ description = "ARN of the EKS node policy"
+ type = string
+ default = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
+}
+
+
+variable "eks_cni_policy_arn" {
+ description = "ARN of the EKS CNI policy"
+ type = string
+ default = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
+}
+
+variable "eks_ecr_read_only_policy_arn" {
+ description = "ARN of the ECR read only policy"
+ type = string
+ default = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
+}