This project is a full-stack application using Docker to containerize and manage the components. It includes a Vite + React frontend, a Node.js API server, and a MySQL database. The setup is designed for both development and production environments, with detailed steps to deploy the application on AWS Elastic Beanstalk.
- `client`: Frontend client with Vite + React.
- `server`: Backend API server using Node.js.
- `nginx`: Web server and reverse proxy using Nginx.
- `database`: MySQL database configuration.
- Docker
- Node.js
- npm
- git
Create the main project directory and navigate into it:
mkdir my-project
cd my-project
Within this directory, we will create three subdirectories for our client, API server, and Nginx web server:
mkdir client
mkdir server
mkdir nginx
- Create the Vite + React application:
npm create vite@latest client -- --template react
cd client
- Install dependencies:
npm install
npm install axios --save
npm install --save-dev vitest jsdom @testing-library/jest-dom @testing-library/react @testing-library/user-event
- Configure the development server in `vite.config.js` by adding the `server` block to the config:
export default defineConfig({
// other configurations
server: {
host: '0.0.0.0',
port: 3000
},
});
- In `App.jsx`, replace the `App()` function with the following code:
import { useState } from 'react';

function App() {
const [response, setResponse] = useState([
{ id: 1, data: "default data #1" },
{ id: 2, data: "default data #2" },
{ id: 3, data: "default data #3" }
]);
const [counter, setCounter] = useState(0);
return (
<>
<button onClick={() => setCounter(count => count + 1)}>
{response[counter % response.length].data}
</button>
</>
);
}
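The button text cycles through the seeded responses via modular indexing; the logic can be checked in isolation (a plain-JS sketch — `dataAt` is a hypothetical helper mirroring the component, not part of it):

```javascript
// The App button renders response[counter % response.length].data;
// the modulo wraps the click counter back to the first item.
const response = [
  { id: 1, data: 'default data #1' },
  { id: 2, data: 'default data #2' },
  { id: 3, data: 'default data #3' },
];

// dataAt is a hypothetical helper replicating the component's indexing
const dataAt = (counter) => response[counter % response.length].data;

console.log(dataAt(0)); // 'default data #1'
console.log(dataAt(4)); // 'default data #2' (4 % 3 === 1)
```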
- Set up the testing environment (Vite + React does not ship with a default testing setup):
- Inside `package.json`, add this entry to the `scripts` section:
"test": "vitest run"
- Create a directory called `tests` inside the `client` directory.
- Inside `tests`, create a file called `setup.js`.
- Add the following code into the file:
import { afterEach } from 'vitest'
import { cleanup } from '@testing-library/react'
import '@testing-library/jest-dom/vitest'
afterEach(() => {
cleanup();
})
- Add the `test` block to `vite.config.js`:
export default defineConfig({
// other configurations
test: {
environment: 'jsdom',
globals: true,
setupFiles: './tests/setup.js'
}
});
- Inside the client `src` directory, add a file called `App.test.jsx`.
- Add the following code into the file:
import { render, screen } from '@testing-library/react'
import App from './App'
describe('App', () => {
it('renders the App component', () => {
render(<App />)
screen.debug();
})
})
- In your terminal, while inside the `client` directory, run the tests:
npm run test
- Navigate to the server directory and initialize npm:
cd ../server
npm init -y
- Install the required packages:
npm install express body-parser cors mysql2 nodemon
- Include necessary files inside server directory:
- Create `keys.js` and add the following code into the file:
module.exports = {
mysqlUser: process.env.MYSQLUSER,
mysqlHost: process.env.MYSQLHOST,
mysqlDatabase: process.env.MYSQLDATABASE,
mysqlPassword: process.env.MYSQLPASSWORD,
mysqlPort: process.env.MYSQLPORT,
}
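Since `keys.js` only reads environment variables, a quick startup check can surface a missing one early. An optional sketch (an addition, not part of the original setup; the names match those read in `keys.js`):

```javascript
// Optional sanity check: warn about any MySQL env vars that are unset.
// The variable names mirror the ones read in keys.js.
const required = ['MYSQLUSER', 'MYSQLHOST', 'MYSQLDATABASE', 'MYSQLPASSWORD', 'MYSQLPORT'];
const missing = required.filter((name) => !process.env[name]);

if (missing.length > 0) {
  console.warn('Missing MySQL environment variables: ' + missing.join(', '));
}
```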
- Create `index.js` and add the following code into the file:
const keys = require('./keys');
// Express App Setup
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const app = express();
app.use(cors());
app.use(bodyParser.json());
// MySQL Client Setup
const mysql = require('mysql2/promise');
const pool = mysql.createPool({
host: keys.mysqlHost,
user: keys.mysqlUser,
password: keys.mysqlPassword,
database: keys.mysqlDatabase,
port: keys.mysqlPort,
});
// MySQL pool connection test (mysql2/promise uses promises, not callbacks)
pool
  .getConnection()
  .then((connection) => {
    console.log('Connected to MySQL');
    connection.release();
  })
  .catch((err) => {
    console.log('Error connecting to MySQL: ', err);
  });
// MySQL low-level migration
pool
.query(
`CREATE TABLE IF NOT EXISTS tbl_test (
id INT AUTO_INCREMENT PRIMARY KEY,
data VARCHAR(255));`
)
// note: the arrow function defers the insert until the table exists;
// these seed rows are re-inserted on every server start
.then(() =>
  pool.query(
    `INSERT INTO tbl_test (data) VALUES ('data #1'), ('data #2'), ('data #3');`
  )
)
.catch((err) => {
console.log('Error creating table: ', err);
});
// ***********************
// QUERIES AND ROUTES HERE
// ***********************
// 5000 is default, you may change as needed
const PORT = process.env.PORT || 5000;
app.listen(PORT, () => {
console.log(`Server listening on port ${PORT}`);
});
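The `QUERIES AND ROUTES HERE` placeholder above is where your endpoints go. One hedged sketch of a read route against the `tbl_test` table (the `getValues` helper is hypothetical, split out purely for illustration):

```javascript
// Illustrative route logic for the QUERIES AND ROUTES HERE section
// (an assumption, not part of the original code). getValues reads
// every row from tbl_test using a mysql2/promise pool.
async function getValues(pool) {
  // mysql2/promise resolves to a [rows, fields] pair
  const [rows] = await pool.query('SELECT * FROM tbl_test;');
  return rows;
}

// Wire it up in index.js like so:
// app.get('/values', async (req, res) => {
//   res.send(await getValues(pool));
// });
```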
- Add scripts to package.json for development and start:
"scripts": {
// other scripts
"dev": "nodemon index.js",
"start": "node index.js"
}
- Navigate to the `nginx` directory.
- Create a file called `default.conf`, then add the following code into the file:
upstream client {
server client:3000;
}
upstream api {
server api:5000;
}
server {
listen 80;
location / {
proxy_pass http://client;
}
location /api {
rewrite ^/api(/.*)$ $1 break;
proxy_pass http://api;
}
}
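The `rewrite` line strips the `/api` prefix before proxying, so the Express server sees `/values` rather than `/api/values`. The same transformation, sketched in JavaScript:

```javascript
// Equivalent of the nginx directive: rewrite ^/api(/.*)$ $1 break;
// Captures everything after /api and forwards only that path upstream.
const stripApiPrefix = (path) => path.replace(/^\/api(\/.*)$/, '$1');

console.log(stripApiPrefix('/api/values')); // '/values'
console.log(stripApiPrefix('/other'));      // '/other' (non-matching paths pass through)
```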
- Inside the `client` directory, create `Dockerfile.dev` and add the following code into the file:
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
- Inside the `server` directory, do the same (create a `Dockerfile.dev`) with this code:
FROM node:alpine
WORKDIR '/usr/src/app'
COPY package.json .
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm", "run", "dev"]
- Inside the `nginx` directory, do the same with this code:
FROM nginx
COPY default.conf /etc/nginx/conf.d/default.conf
- Inside your root directory, create `docker-compose-dev.yml` and add the following code into the file:
version: "3"
services:
client:
build:
context: ./client
dockerfile: Dockerfile.dev
volumes:
- ./client:/app
- /app/node_modules
ports:
- "3000:3000"
environment:
- CHOKIDAR_USEPOLLING=true
nginx:
restart: always
build:
context: ./nginx
dockerfile: Dockerfile.dev
ports:
- "4000:80"
depends_on:
- client
- api
api:
build:
context: ./server
dockerfile: Dockerfile.dev
volumes:
- ./server:/usr/src/app
- /usr/src/app/node_modules
ports:
- "5000:5000"
environment:
- CHOKIDAR_USEPOLLING=true
- MYSQLUSER=root
- MYSQLHOST=database
- MYSQLDATABASE=mydb
- MYSQLPASSWORD=password
- MYSQLPORT=3306
depends_on:
- database
database:
image: mysql:5.7
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=mydb
ports:
- "3306:3306"
volumes:
- db-data:/var/lib/mysql
volumes:
db-data:
- Run the following in your terminal from the root directory to see if everything is working (expect some initial errors while the database starts up):
docker-compose -f docker-compose-dev.yml up --build
- Create a GitHub Repository and push your current working code into the remote.
- For the `client`:
- Create an `nginx` directory inside the `client` directory and create a `default.conf` file inside it.
- Add the following code into the file:
server {
listen 3000;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
}
- Create a `Dockerfile` inside the `client` directory.
- Add the following code into the file:
FROM node:alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/dist /usr/share/nginx/html
- For the `server`:
- Create a `Dockerfile` inside the `server` directory.
- Add the following code into the file:
FROM node:alpine
WORKDIR '/usr/src/app'
COPY package.json .
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm", "run", "start"]
- For `nginx`:
- Create a `Dockerfile` inside the `nginx` directory.
- Add the following code into the file:
FROM nginx
COPY default.conf /etc/nginx/conf.d/default.conf
- Inside your root directory, create a `.travis.yml` file and add the following code:
sudo: required
services:
- docker
before_install:
- docker build -t pgsoncada/vite-test -f ./client/Dockerfile.dev ./client
script:
- docker run pgsoncada/vite-test npm test
after_success:
- docker build -t pgsoncada/practice-client ./client
- docker build -t pgsoncada/practice-nginx ./nginx
- docker build -t pgsoncada/practice-server ./server
- echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
- docker push pgsoncada/practice-client
- docker push pgsoncada/practice-nginx
- docker push pgsoncada/practice-server
- Go to your Travis account and add your current working repository to your Travis repositories.
- Once added, go to `More options` and head over to `Environment Variables`.
- Set 2 environment variables as follows:
  - DOCKER_PASSWORD = $your_docker_hub_password
  - DOCKER_ID = $your_docker_hub_username
- Commit your current directory, push to your working repository, and check whether your images have been pushed to your Docker Hub repository.
🚩 If this is your first time deploying
- Go to your `AWS Management Console` and find `IAM`.
- At the left side panel, under `Access management`, navigate to `Roles`.
- Click `Create role` and select `AWS service` for the `Trusted entity type` selection.
- For `Use case`, select `EC2`, then click `Next`.
- Type `AWSElasticBeanstalk` in the search bar and check the following policies:
  - `AWSElasticBeanstalkWebTier`
  - `AWSElasticBeanstalkWorkerTier`
  - `AWSElasticBeanstalkMulticontainerDocker`
- Name it `aws-elasticbeanstalk-ec2-role`, then click `Create role`.
- Go to your `AWS Management Console` and find `Elastic Beanstalk`.
- Click `Create Application`, then set your application name.
- Scroll down to find the `Platform` section, then select Docker as the `Platform`.
- If you are using the free tier, set your `Configuration presets` to `Single instance (free tier eligible)`.
- Click Next; you will then have to `Configure service access`.
- If this is your first time deploying, then for `Service role`, select `Create and use new service role` and name your service role appropriately (e.g. `aws-elasticbeanstalk-service-role`).
- Otherwise, just select `Use an existing service role`.
- For `EC2 instance profile`, select the one you previously created.
- Click `Skip to review`, click `Submit`, then continue while you wait for your new Elastic Beanstalk application to be created and launched.
- Go to your `AWS Management Console` and find `RDS`.
- At the left side panel, navigate to `Databases`.
- Click `Create database` and set the following:
- Choose a database creation method → Standard create
- Engine options → MySQL
- Templates → Depends on the user ( For now Free tier )
- DB cluster identifier → Chosen Database Identifier
- Master username → Chosen Database Access Username
- Credentials management → Self managed
- Master password → Chosen Database Access Password
- VPC → Default VPC
- Find and unhide Additional Configuration
- Initial database name → Chosen Initial Database Name
- Go to the bottom and press Create Database
- Continue while you wait for your new RDS application to finish creating.
- Go to your `AWS Management Console` and find `VPC`.
- At the left side panel, under `Security`, navigate to `Security groups`.
- Click `Create security group`, and set an appropriate `Security group name` and `Description`.
- Make sure the VPC is set to the default VPC.
- Scroll down and click `Create security group`.
- After creating your security group, find and click `Edit inbound rules`.
- Click `Add rule` and set `Port range` to `3000-5000`.
- Set your `Source` to the security group you just created.
- Lastly, click `Save rules`.
- Navigate back to `RDS`, then `Databases`.
- Select your created database and click `Modify`.
- Under the `Connectivity` section, add the security group you recently created to `Security group`.
- Scroll down, click `Continue`, then `Modify DB instance`.
- Navigate back to `Elastic Beanstalk`, then to your application's environment.
- At the left side panel, navigate to `Configuration`.
- Go to the `Instance traffic and scaling` section, then press `Edit`.
- Go to `EC2 security groups` and include your created security group.
- Scroll down and click `Apply`.
- Navigate back to `Configuration`, go to the `Updates, monitoring, and logging` section, then click `Edit`.
- Navigate to `Environment properties` and include the following properties:
  - MYSQLUSER → Chosen Database Access Username
  - MYSQLPASSWORD → Chosen Database Access Password
  - MYSQLHOST → Locate the endpoint of your created RDS instance under `Connectivity & security`
  - MYSQLDATABASE → Chosen Initial Database Name
  - MYSQLPORT → 3306
- Navigate to `IAM` in your `AWS Management Console`.
- At the left side panel, under `Access management`, navigate to `Users`.
- Click `Create user` and provide an appropriate username.
- Under `Permissions options`, select `Attach policies directly`.
- In the search bar, type "beanstalk", and include `AdministratorAccess-AWSElasticBeanstalk`.
- Scroll down, click `Next`, then click `Create user`.
- Select your created user and find `Security credentials`.
- Find `Access keys` and then click `Create access key`.
- For the use case, select `Command Line Interface (CLI)`, tick the confirmation box, click `Next`, then `Create access key`.
- Save your access key and secret access key; they will be used in the Travis CI environment.
- Inside your root directory, create a `docker-compose.yml` file and add the following code:
version: '3'
services:
client:
image: 'pgsoncada/practice-client'
mem_limit: 128m
hostname: client
server:
image: 'pgsoncada/practice-server'
mem_limit: 128m
hostname: api
environment:
- MYSQLUSER=$MYSQLUSER
- MYSQLHOST=$MYSQLHOST
- MYSQLDATABASE=$MYSQLDATABASE
- MYSQLPASSWORD=$MYSQLPASSWORD
- MYSQLPORT=$MYSQLPORT
nginx:
image: 'pgsoncada/practice-nginx'
mem_limit: 128m
hostname: nginx
ports:
- '80:80'
- Inside your `.travis.yml` file, add the following code after the `after_success` section:
deploy:
provider: elasticbeanstalk
region: "your_aws_cloud_region"
app: "your-application-name"
env: "your-application-environment-name"
bucket_name: "your-s3-bucket-name"
bucket_path: "your-preferred-bucket-path"
access_key_id: $AWS_ACCESS_KEY
secret_access_key: $AWS_SECRET_KEY
- Include your AWS keys in the Travis environment variables:
- Go to your Travis CI active repositories.
- Find your current working repository and go to `More options` > `Settings`.
- Find the `Environment Variables` section and include the following variables:
  - AWS_ACCESS_KEY → Your recently created AWS access key
  - AWS_SECRET_KEY → Your recently created AWS secret access key
- To test if everything is working, simply push all of your changes in your project directory to your GitHub repository.