How to Deploy a Node.js App to Production in 2026
A complete guide to deploying Node.js applications to production — covering containerization, CI/CD, auto-scaling, monitoring, and common pitfalls that kill apps at scale.
RaidFrame Team
September 16, 2025 · 4 min read
Deploying a Node.js app to production is not the same as running node server.js on a VPS. Production means uptime guarantees, zero-downtime deploys, auto-scaling under load, and observability when things go wrong.
This guide covers the full path from local development to production-grade deployment.
Containerize first
Every production Node.js app should run in a container. No exceptions. Docker gives you reproducibility, isolation, and a clean deployment artifact.
```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install all dependencies here — the build step typically needs
# devDependencies (TypeScript, bundlers, etc.)
RUN npm ci
COPY . .
RUN npm run build
# Drop devDependencies before copying node_modules to the final stage
RUN npm prune --omit=dev

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

Key points:
- Multi-stage builds keep your image small (no dev dependencies in production)
- Alpine base cuts image size from ~900MB to ~150MB
- `npm ci` ensures deterministic installs from the lockfile
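Pairing the Dockerfile with a `.dockerignore` keeps local artifacts out of the build context. A typical starting point (adjust to your repo):

```
# .dockerignore — keep the build context small, and keep
# secrets and local builds out of the image
node_modules
dist
.git
.env
*.log
```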
Environment configuration
Never hardcode secrets. Use environment variables injected at runtime.
```javascript
const config = {
  port: process.env.PORT || 3000,
  dbUrl: process.env.DATABASE_URL,
  redisUrl: process.env.REDIS_URL,
  nodeEnv: process.env.NODE_ENV || "production",
};
```

On RaidFrame, environment variables are set per-service and encrypted at rest. They're injected at container start, never baked into the image.
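Missing variables are easier to debug when the process fails fast at startup instead of crashing on the first query. A minimal sketch (the variable names are just examples):

```javascript
// Throw at startup if any required variable is missing, rather than
// failing later on the first database call.
function assertEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}

// At boot, before building config:
// assertEnv(["DATABASE_URL", "REDIS_URL"]);
```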
Health checks
Your platform needs to know if your app is alive. Add a health endpoint:
```javascript
app.get("/health", (req, res) => {
  res.status(200).json({
    status: "ok",
    uptime: process.uptime(),
    timestamp: Date.now(),
  });
});
```

Configure your platform to hit this endpoint every 10-30 seconds. If it fails 3 times consecutively, the container gets replaced.
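A liveness check only proves the process is up. If you also want the platform to know whether dependencies are reachable, a readiness helper can run injected checks; the check names below are illustrative:

```javascript
// Run each dependency check and report per-dependency status;
// any failure marks the service "degraded".
async function readiness(checks) {
  const results = {};
  let healthy = true;
  for (const [name, check] of Object.entries(checks)) {
    try {
      await check();
      results[name] = "ok";
    } catch {
      results[name] = "failed";
      healthy = false;
    }
  }
  return { status: healthy ? "ok" : "degraded", checks: results };
}

// e.g. app.get("/ready", async (req, res) => {
//   const r = await readiness({ db: () => pool.query("SELECT 1") });
//   res.status(r.status === "ok" ? 200 : 503).json(r);
// });
```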
Auto-scaling
A single instance handles roughly 1,000-5,000 concurrent connections depending on your workload. Beyond that, you need horizontal scaling.
On RaidFrame, auto-scaling is configured per service:
- Min instances: 2 (for high availability)
- Max instances: 20 (cost ceiling)
- Scale trigger: CPU > 70% or response time > 500ms
The platform handles load balancing, connection draining, and rolling deploys automatically.
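RaidFrame's actual configuration syntax isn't reproduced here; as a hypothetical sketch, the policy above might be expressed like this:

```yaml
# Hypothetical service config illustrating the scaling policy —
# not RaidFrame's real syntax.
scaling:
  min_instances: 2        # high availability
  max_instances: 20       # cost ceiling
  triggers:
    cpu_percent: 70
    response_time_ms: 500
```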
Zero-downtime deploys
Rolling deployments mean new instances spin up before old ones shut down. Your app needs graceful shutdown:
```javascript
process.on("SIGTERM", async () => {
  console.log("SIGTERM received, shutting down gracefully");
  server.close(() => {
    // Close database connections
    // Flush logs
    process.exit(0);
  });
  // Force exit after 30s
  setTimeout(() => process.exit(1), 30000);
});
```

Monitoring and observability
You need three things in production:
- Logs — Structured JSON logging with request IDs for tracing
- Metrics — Response times, error rates, throughput, CPU/memory
- Alerts — PagerDuty/Slack notifications when error rate spikes or latency degrades
```javascript
const logger = {
  info: (msg, meta) =>
    console.log(JSON.stringify({ level: "info", msg, ...meta, ts: new Date().toISOString() })),
  error: (msg, meta) =>
    console.error(JSON.stringify({ level: "error", msg, ...meta, ts: new Date().toISOString() })),
};
```

Try RaidFrame free
Deploy your first app in 60 seconds. No credit card required.
Common production pitfalls
Memory leaks: Node.js apps that slowly consume more memory over hours/days. Use `--max-old-space-size` and monitor heap usage.
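To catch a slow leak before the process gets OOM-killed, you can sample `process.memoryUsage()` on an interval and ship the numbers to your metrics. A minimal sketch:

```javascript
// Snapshot heap and RSS in megabytes for logging/metrics.
function heapSnapshot() {
  const { rss, heapTotal, heapUsed } = process.memoryUsage();
  const mb = (n) => Math.round(n / 1024 / 1024);
  return { rssMB: mb(rss), heapTotalMB: mb(heapTotal), heapUsedMB: mb(heapUsed) };
}

// Sample once a minute; unref() stops this timer from holding
// the process open during graceful shutdown.
setInterval(() => console.log(JSON.stringify(heapSnapshot())), 60_000).unref();
```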
Unhandled rejections: Always catch promise rejections. Since Node 15, unhandled rejections crash the process by default.
Connection pool exhaustion: If you're connecting to PostgreSQL, set a connection pool limit that matches your instance count. 20 instances with 10 connections each = 200 connections to your database.
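One way to keep that arithmetic honest is to derive the per-instance pool size from the database's connection limit and the autoscaler's max instance count. A sketch; the headroom value is an assumption covering migrations, admin sessions, and the like:

```javascript
// Budget the database's max_connections: reserve headroom, then
// divide the rest across the maximum number of instances the
// autoscaler can launch.
function poolSizePerInstance(dbMaxConnections, maxInstances, headroom = 10) {
  return Math.max(1, Math.floor((dbMaxConnections - headroom) / maxInstances));
}

// Postgres default max_connections = 100, max 20 instances:
// poolSizePerInstance(100, 20) → 4 connections per instance
```

Pass the result as the pool's `max` option (e.g. `new Pool({ max })` with node-postgres).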
Cold starts: If your app takes 10+ seconds to start (loading ML models, warming caches), configure longer health check grace periods.
Deployment checklist
- App containerized with multi-stage Docker build
- Environment variables externalized
- Health check endpoint responding
- Graceful shutdown handling SIGTERM
- Auto-scaling configured with min 2 instances
- Structured logging enabled
- Error tracking connected (Sentry, etc.)
- Database connection pooling configured
- SSL/TLS termination at load balancer
- CI/CD pipeline running tests before deploy
Deploy in 60 seconds
On RaidFrame, the entire process is:
```bash
rf init
rf deploy
```

That's it. The CLI detects your Node.js app, builds the container, pushes it to the registry, and deploys it with auto-scaling, health checks, and SSL — all configured automatically.
No Kubernetes YAML. No Terraform. No 47-step GitHub Actions workflow. Just deploy.
Ship faster with RaidFrame
Auto-scaling compute, managed databases, global CDN, and zero-config CI/CD. Free tier included.