hey amigo, i just went through the journey of setting up a production deployment for a typescript stack on a single ec2 instance, and i felt like documenting it - i think it might help someone else too. this isn't your typical "just deploy it" guide: we're going to cover everything from initial setup to handling those weird ssl errors that might make you question your career choices.
NOTE: this article might be a little opinionated at some points, but you shouldn't have any trouble following along.
what we’re building
- express.js api with typescript + drizzle orm
note: this could be nest/adonis/fastify, and any other ORM (sequelize, prisma, or drizzle). the stack and config are particular to your app and configured in your package.json
- react/vite web app
- PostgreSQL on AWS RDS
- single EC2 instance hosting both applications
- github actions for continuous deployment
- proper ssl certificates
- zero-downtime deployments using PM2
by the end of this article, you’ll have:
- a production-ready deployment setup
- continuous deployment from github
- proper ssl certificate handling
- database connection with proper security
- process management with PM2 in cluster mode
prerequisites
before we delve in (paul graham, please 😀), make sure you have:
- an aws account (with proper permissions). i'm using AWS, but you can use any other cloud - you might just have to tweak the setup a little
- github repositories for both your api and web app
- domain names ready for both applications (i’m using subdomains for mine)
- basic understanding of linux commands
- patience (you’ll need it)
infrastructure setup
let’s start with the infrastructure. here’s what we need and why:
ec2 instance selection
# recommended specs:
- t3.medium (2 vcpu, 4gb ram)
- ubuntu 22.04 LTS
- 20gb storage minimum
why t3.medium? i initially tried t3.micro and learned the hard way that it's not enough (you should adjust based on your needs):
- api needs resources for typescript compilation
- web app needs resources for vite builds
- PM2 will run in cluster mode
- you need headroom for npm installations
PostgreSQL on RDS
# recommended specs:
- db.t3.small (2 vcpu, 2gb ram)
- 200gb storage (adjust based on your needs)
- enable storage autoscaling
pro tip: save these connection details somewhere safe (we’ll need them later):
- endpoint
- port (usually 5432)
- master username
- master password
- database name (default is postgres if you don’t create one)
initial server setup
first, ssh into your ec2 instance:
ssh -i your-key.pem ubuntu@your-ec2-ip
let’s set up our environment:
# first update system packages
sudo apt update && sudo apt upgrade -y
# install node.js 20.x
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
# install nginx
sudo apt install -y nginx
# install pm2 globally
sudo npm install -g pm2
# create application directories with proper permissions
sudo mkdir -p /var/www/{api,app,certs}
sudo chown -R ubuntu:ubuntu /var/www/
directory structure explained
/var/www/
├── api/ # express api application
├── app/ # react web app
│ └── dist/ # built web app files
└── certs/ # ssl certificates
ssl certificate setup
this is crucial - don’t skip it! first, install certbot:
sudo apt install -y certbot python3-certbot-nginx
get certificates for both domains:
before you run these certbot commands, make sure you've added the domain names to your DNS records - typically A records pointing at the IP address of your EC2 (or other server) instance.
sudo certbot --nginx -d api.yourdomain.com
sudo certbot --nginx -d app.yourdomain.com
common issues and solutions:
- “unable to find domain verification”:
# check your dns settings
dig api.yourdomain.com
# should point to your ec2 ip, as i explained above
- “connection refused”:
# check nginx is running
sudo systemctl status nginx
# check ports 80 and 443 are open in your security group
nginx configuration
api configuration
# /etc/nginx/sites-available/api.yourdomain.com
server {
    server_name api.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000; # or your specific app port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.yourdomain.com/privkey.pem;
}
web app configuration
# /etc/nginx/sites-available/app.yourdomain.com
server {
    server_name app.yourdomain.com;
    root /var/www/app/dist;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
        expires 30d;
        add_header Cache-Control "public, no-transform";
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/app.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.yourdomain.com/privkey.pem;
}
enable the configurations:
sudo ln -s /etc/nginx/sites-available/api.yourdomain.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/app.yourdomain.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
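one thing worth double-checking: when you run certbot with --nginx it normally adds a port-80 server block that redirects plain http to https. if yours didn't get one (or you wrote the configs by hand), a minimal redirect block looks like this - adjust the domains to your own:

```nginx
# http -> https redirect (certbot usually generates an equivalent block for you)
server {
    listen 80;
    server_name api.yourdomain.com app.yourdomain.com;
    return 301 https://$host$request_uri;
}
```

without it, anyone hitting the plain http url just gets the default nginx page instead of your app.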
pm2 configuration deep dive
PM2 is crucial for production. here’s our production-ready configuration:
// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'api',
    script: './dist/server.js',
    instances: 'max', // use all available cores of your instance
    exec_mode: 'cluster', // run in cluster mode
    max_memory_restart: '1G',
    kill_timeout: 5000, // time to force kill (ms)
    wait_ready: true, // wait for ready signal
    listen_timeout: 10000, // time to wait for ready signal
    error_file: '/var/log/pm2/api-error.log',
    out_file: '/var/log/pm2/api-out.log',
    merge_logs: true,
    log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
    env: {
      NODE_ENV: 'production',
      // other environment variables; these will be injected from github actions
    }
  }]
}
key points about this configuration:
- instances: 'max' utilizes all cpu cores
- exec_mode: 'cluster' runs in cluster mode for better performance
- wait_ready waits for your application to send a ready signal
- proper log configuration for debugging
- you can use pm2 list to get info about your app status
- use pm2 restart [app name] to restart one app, or pm2 restart all to restart everything; in cluster mode, pm2 reload [app name] restarts workers one at a time, which is what gives you zero-downtime restarts
pm2 monitoring setup:
# install log rotation
pm2 install pm2-logrotate
pm2 set pm2-logrotate:max_size 10M
pm2 set pm2-logrotate:retain 7
# enable startup script
pm2 startup
pm2 save
common pm2 issues you might run into:
- “error: script not found”:
# check if the build exists
ls -la dist/server.js
# check permissions
sudo chown -R ubuntu:ubuntu /var/www/api
- “port already in use”:
# find process using port
sudo lsof -i :3000
# kill if necessary
pm2 delete all
once that's done, here's what we're onto next:
- github actions setup
- handling environment variables securely
- database connection with proper ssl
- continuous deployment workflow
- monitoring and maintenance
now that we've covered the initial setup of our ec2 instance, nginx configuration, and pm2 setup, let's dive into the crazier stuff: continuous deployment and database configuration.
github actions setup
first, we need to set up our secrets in github. go to your repository settings → secrets and variables → actions and add these:
# required secrets:
EC2_HOST # your ec2 public ip
EC2_SSH_KEY # your private key for ssh access.
VITE_API_URL # for web app
DATABASE_URL # for api (i'll cover the format later)
# other environment-specific secrets
deploying the api
this is our production-ready workflow file (.github/workflows/deploy.yml):
name: Deploy API
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to EC2
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ubuntu
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            echo "📦 starting deployment..."
            # clean setup
            rm -rf /var/www/api
            mkdir -p /var/www/api
            cd /var/www/api
            # clone repository
            git clone --depth 1 https://oauth2:${{ github.token }}@github.com/your-username/your-repo.git .
            # install and build
            npm ci
            npm run build
            # pm2 configuration
            cat > ecosystem.config.js << 'EOL'
            module.exports = {
              apps: [{
                name: 'api',
                script: './dist/server.js',
                instances: 'max',
                exec_mode: 'cluster',
                env: {
                  NODE_ENV: 'production',
                  PORT: '3000',
                  CORS_ORIGIN: 'https://app.yourdomain.com',
                  // add your other environment variables here
                  DATABASE_URL: '${{ secrets.DATABASE_URL }}'
                }
              }]
            }
            EOL
            # restart application
            pm2 delete api || true
            pm2 start ecosystem.config.js
            pm2 save
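optionally, you can add a smoke-test step after the deploy step, so a broken release fails the workflow loudly instead of silently serving errors. this assumes your api exposes some health endpoint - the /health route here is hypothetical, swap in whatever route your app actually has:

```yaml
      # extra job step, same indentation level as the "Deploy to EC2" step
      - name: Smoke test
        run: |
          sleep 10
          curl -fsS https://api.yourdomain.com/health
```

curl's -f flag makes it exit non-zero on http errors, which is what fails the workflow.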
common deployment issues you might run into
- “permission denied”:
# check directory ownership, i'm using ubuntu as user
sudo chown -R ubuntu:ubuntu /var/www/api
- “npm ci fails”:
# check node version on server
node -v
# ensure it matches your package.json engines,
# you could be using npm install or yarn or bun,
# make sure it matches your workflow
deploying the web app
the web app workflow (.github/workflows/deploy.yml in its repo):
name: Deploy Web App
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to EC2
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ubuntu
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            echo "📦 starting deployment..."
            rm -rf /var/www/app
            mkdir -p /var/www/app
            cd /var/www/app
            git clone --depth 1 https://oauth2:${{ github.token }}@github.com/your-username/webapp-repo.git .
            # create env file with restricted permissions
            touch .env
            chmod 600 .env
            echo "VITE_API_URL=${{ secrets.VITE_API_URL }}" > .env
            npm ci # or npm install / pnpm install / bun install, matching your lockfile
            npm run build
the database SSL trouble i faced so you don’t have to face it
connecting to AWS RDS with proper ssl verification is… interesting. here’s how to do it right:
- download the ssl certificate:
cd /var/www/certs
sudo wget https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
sudo chown -R ubuntu:ubuntu /var/www/certs
sudo chmod 644 /var/www/certs/global-bundle.pem
quick one before you conclude i'm throwing code around: we need the global-bundle certificate file from AWS, and here's why. the RDS database certificate is signed by amazon's own certificate authority rather than a publicly trusted one, so to your client it looks self-signed - effectively it has its own special ssl certificate. i did a little engineering and digging myself and fixed it, so here's the thing: you need to pass sslmode=verify-ca or sslmode=verify-full for psql (and your node client) to verify the certificate chain. i used sslmode=verify-full, and i also had to install the ssl certificate bundle from AWS on my server. i don't think AWS RDS uses AWS Trust Services for ssl certificates; per the RDS documentation, AWS uses an RDS certificate authority, and you need to download the root certificates and configure node.js to use those for validation. you can get the bundle for your specific region, but i got the global-bundle one (that's the wget command above).
so after downloading global-bundle.pem, i created a certs folder under /var/www (so /var/www/certs) and put the file in there. if you can't wget it directly, just touch the file name, nano or vi into it (whichever suits you), and paste the certificate content in - and if you can't do that, i think you need to stop reading this article now.
then for my database config in my app (code), i set ssl=true, and for the database url in the github secrets i appended ?sslmode=verify-full&sslrootcert=/var/www/certs/global-bundle.pem. that should be it for the certificate installation.
- verify the certificate:
openssl x509 -in /var/www/certs/global-bundle.pem -text -noout
# you should see the decoded certificate details printed out.
# if you do, then you are good to go
- update your database connection code:
import { drizzle } from 'drizzle-orm/node-postgres';
import { logger } from 'whateverloggingtool/configyouhave';
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,
  ssl: true
});

// monitoring
pool.on('connect', () => {
  logger.debug('Database pool: New client connected');
});

pool.on('error', (err) => {
  logger.error('Unexpected error on idle database client', err);
  process.exit(-1);
});

export const db = drizzle(pool);
- set your DATABASE_URL in github secrets:
postgresql://[username]:[password]@[your-rds-endpoint]:5432/[dbname]?sslmode=verify-full&sslrootcert=/var/www/certs/global-bundle.pem
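to make that concrete, here's what a filled-in secret could look like - every value here is a hypothetical placeholder (admin, supersecret, mydb…, myapp), not a default. one gotcha: if your password contains characters like @ or :, url-encode them first or the url will parse wrong:

```
postgresql://admin:supersecret@mydb.abc123.eu-west-1.rds.amazonaws.com:5432/myapp?sslmode=verify-full&sslrootcert=/var/www/certs/global-bundle.pem
```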
common ssl issues you might run into
- "self-signed certificate in certificate chain": if you did it as i mentioned up there, you shouldn't run into this, but in case you do, go back and reference the solution.
# check certificate permissions
ls -la /var/www/certs/global-bundle.pem
# should be: -rw-r--r--
- “certificate verification failed”:
# verify certificate path in DATABASE_URL
# ensure sslmode is set to verify-full
monitoring and maintenance
log management
set up log rotation for nginx:
sudo nano /etc/logrotate.d/nginx
add this configuration:
/var/log/nginx/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    postrotate
        if [ -f /var/run/nginx.pid ]; then
            kill -USR1 `cat /var/run/nginx.pid`
        fi
    endscript
}
backup strategy
ideally, AWS RDS will run automated snapshots for you; you can configure the schedule and retention in the RDS console.
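if you also want logical dumps you control yourself, a cron entry on the instance is a cheap fallback on top of RDS snapshots. this is just a sketch - the schedule and paths are my assumption, pg_dump (postgresql-client) has to be installed on the server, and DATABASE_URL has to be available to the cron job (e.g. defined at the top of the cron file):

```
# /etc/cron.d/pg-backup -- hypothetical nightly dump at 03:00
0 3 * * * ubuntu pg_dump "$DATABASE_URL" | gzip > /var/backups/postgres/db-$(date +\%F).sql.gz
```

note the escaped \% - percent signs are special in crontab lines.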
performance tips
- enable gzip compression in nginx:
gzip on;
gzip_vary on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";
- optimize PM2 for your workload:
{
  instances: 'max',
  max_memory_restart: '1G',
  node_args: '--max-old-space-size=1536'
}
you should be good to go now
as you can see, that was a lot. but now you have:
- continuous deployment from github. avoid working directly on the main branch: should there be trouble with your code, your deployment will fail. you could also disable the workflow from running automatically, but then you'd have to run it manually, which defeats the whole point. so work with branches 🫵🏾
- proper ssl certificate handling
- secure database connections
- automated backups
- performance optimization
remember:
- always check your logs (pm2 logs). you could find a web ui to stream your logs into, but don't do that, you weakling.
- monitor your resources (htop)
- keep your certificates up to date
- backup regularly - i believe you've configured this from your aws console
questions? issues? feel free to reach out! @mosessmax on X or hey@mosess.xyz
PLEASE NOTE: keep an eye on the cost. the total for this whole setup is about $80-100 USD across all the services. pricing might be subjective, but this fits into the production budget for this app's current requirements, and it'll serve a handful of users.
written while constantly checking pm2 logs and questioning life choices