Vercel is the default answer for Next.js deployments — and for good reason. But as your SaaS grows, you might find yourself looking at the bill, dealing with compliance requirements that mandate data residency, or needing workloads that don't fit the serverless model. This guide covers how to containerize a Next.js application with Docker and self-host it, and ends with an honest look at when you should actually do this.
Vercel vs. self-hosting: when does it matter?
Most early-stage SaaS founders should stay on Vercel. The developer experience is unmatched, deployments are instant, and the free tier covers most hobby projects. The cases where self-hosting starts to make sense are:
- Cost at scale. Vercel Pro is $20/month per seat plus usage. At high traffic or with a larger team, a $20/month VPS running Docker can serve the same workload for a fraction of the price.
- Compliance and data residency. HIPAA, SOC 2, and EU data residency requirements sometimes mandate that your infrastructure runs in a specific region or on infrastructure you control. Vercel's enterprise plan covers some of this, but self-hosting gives you full control.
- Long-running processes. Serverless functions have execution time limits. If your app needs to run background jobs, maintain WebSocket connections, or process large files, a containerized server process is a better fit.
Enabling standalone output in Next.js
Before writing a Dockerfile, configure Next.js to produce a standalone build. This bundles only the files needed to run the server — no node_modules folder, no source files — resulting in a much smaller Docker image.
In next.config.ts (or next.config.js):
import type { NextConfig } from "next";
const nextConfig: NextConfig = {
output: "standalone",
};
export default nextConfig;
After building with npm run build, Next.js creates a .next/standalone directory containing a minimal Node.js server. This is what you'll copy into your Docker image.
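As a quick sanity check, the standalone server can be run directly, outside Docker. Note that Next.js intentionally leaves static assets out of the standalone output, so you copy them in yourself (the Dockerfile handles this for you in the containerized setup):

```bash
npm run build

# .next/static and public are not copied into the standalone folder automatically
cp -r .next/static .next/standalone/.next/static
cp -r public .next/standalone/public

# Serves on PORT, default 3000
node .next/standalone/server.js
```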
Creating a production Dockerfile
Use a multi-stage build to keep the final image lean. The builder stage installs all dependencies and compiles the app; the runner stage copies only the artifacts needed to run it.
# Dockerfile
FROM node:20-alpine AS base
# --- Dependencies ---
FROM base AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# --- Builder ---
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# --- Runner ---
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=3000
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Copy standalone output
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]
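With the Dockerfile in place, a local build-and-run check looks like this (the my-app tag is arbitrary):

```bash
docker build -t my-app .
docker run --rm -p 3000:3000 my-app
# then visit http://localhost:3000
```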
Add a .dockerignore to prevent bloating the build context:
.next
node_modules
.env*.local
.git
README.md
Managing environment variables in Docker
Never bake secrets into a Docker image. The ENV instruction in a Dockerfile ends up in the image layers and is visible to anyone who can pull the image. There are three safe approaches:
- Pass variables at container start with docker run --env-file, keeping the env file out of version control.
- Use an env_file entry in docker-compose.yml, which injects variables at runtime without touching the image.
- Pull secrets from a secrets manager (Docker secrets, Vault, or your cloud provider's store) when the container starts.
Remember: NEXT_PUBLIC_ variables are baked into the client bundle at build time, not at runtime. They must be provided during npm run build, not just when the container starts.
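One way to handle this split is to pass public values as build arguments while keeping server-only secrets at runtime. A sketch — NEXT_PUBLIC_API_URL is a hypothetical variable name, and the builder stage would need matching ARG/ENV lines to receive it:

```bash
# Build time: NEXT_PUBLIC_ values are inlined into the client bundle here.
# Requires `ARG NEXT_PUBLIC_API_URL` and
# `ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL` in the builder stage.
docker build --build-arg NEXT_PUBLIC_API_URL=https://api.example.com -t my-app .

# Runtime: server-only secrets are injected at container start
# and never appear in image layers.
docker run -d --env-file .env.production -p 3000:3000 my-app
```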
Running with docker-compose
For local development and simple self-hosted setups, Docker Compose lets you run the Next.js app alongside a reverse proxy with a single command. Here's a setup using Caddy as the reverse proxy — it handles HTTPS automatically via Let's Encrypt:
# docker-compose.yml
services:
app:
build: .
restart: unless-stopped
env_file: .env.production
expose:
- "3000"
caddy:
image: caddy:2-alpine
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
- caddy_data:/data
- caddy_config:/config
depends_on:
- app
volumes:
caddy_data:
caddy_config:
The Caddyfile for reverse proxying to the Next.js container:
yourdomain.com {
reverse_proxy app:3000
}
Run with docker compose up -d. Caddy will automatically obtain and renew a TLS certificate for your domain. Your Next.js app is now running with HTTPS on a VPS.
Deploying to a VPS
The simplest self-hosted setup is a single VPS (DigitalOcean Droplet, Hetzner CX22, or a Linode). The basic workflow:
# On your VPS
git clone https://github.com/your-org/your-app.git
cd your-app
cp .env.example .env.production # fill in your secrets
docker compose up -d --build
For continuous deployment, add a GitHub Actions workflow that SSH'es into the VPS and runs git pull && docker compose up -d --build on every push to main. Use a deploy key with limited permissions rather than a personal access token.
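A minimal workflow sketch using the community appleboy/ssh-action — the secret names and the /srv/your-app path are placeholders for your own setup:

```yaml
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Pull and rebuild on the VPS
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: deploy
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd /srv/your-app
            git pull
            docker compose up -d --build
```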
Why most SaaS founders should still use Vercel
Here's the honest take: if you're building a SaaS from scratch, Vercel is almost certainly the right choice. Here's what self-hosting actually costs you:
- You own uptime. A misconfigured reverse-proxy rule or a full disk takes down your app.
- You manage TLS certificate renewal, security patches, and OS updates. These are not hard, but they take time and they compound.
- Preview deployments, which are invaluable for reviewing PRs, require additional setup on self-hosted infrastructure.
- Vercel's CDN and edge network are legitimately excellent. Replicating that globally is expensive and complex.
Self-host when you have a concrete reason: a compliance requirement, a workload that doesn't fit serverless, or a proven product generating enough revenue that the infrastructure savings are worth the operational overhead. Start on Vercel, migrate when you need to.
Whether you deploy to Vercel or self-host with Docker, the application structure matters more than the deployment target. GetLaunchpad is a Next.js 16 SaaS boilerplate with a production-ready architecture that deploys cleanly to both. Get private repo access and spend your time on your product, not your infrastructure.