Every SaaS ships bugs. The question is whether you catch them before your users do. A solid test suite is the difference between a confident deployment and a fingers-crossed one. This guide covers the three layers of testing for Next.js applications — unit, integration, and end-to-end — and explains what to skip so you don't waste time testing the wrong things.
Why testing matters for SaaS
When you're the only developer, you can hold the whole system in your head. Once you start moving fast — adding Stripe webhooks, swapping auth providers, refactoring shared utilities — things break in ways you don't expect. Tests give you a regression net: a way to prove that the thing that worked last week still works today, without manually clicking through every flow before every deploy.
For SaaS specifically, the highest-value things to test are the paths that touch money: checkout, subscription status checks, and webhook handling. A bug in your billing logic can mean customers on free plans with active subscriptions — or worse, paying customers locked out of their account.
Unit testing with Vitest
Vitest is a popular unit test runner for Next.js projects. It's faster than Jest, exposes a near-identical API, and integrates cleanly with Vite-based toolchains. Install it alongside the React testing utilities:
npm install -D vitest @vitejs/plugin-react jsdom @testing-library/react @testing-library/jest-dom
Add a vitest.config.ts at the project root:
import { defineConfig } from "vitest/config";
import react from "@vitejs/plugin-react";
import path from "path";
export default defineConfig({
plugins: [react()],
test: {
environment: "jsdom",
setupFiles: ["./vitest.setup.ts"],
globals: true,
},
resolve: {
alias: { "@": path.resolve(__dirname, ".") },
},
});
Create vitest.setup.ts:
import "@testing-library/jest-dom";
Unit tests shine on pure utility functions — things with no side effects. A good example is a price formatting helper or a date calculation:
// lib/__tests__/format.test.ts
import { describe, it, expect } from "vitest";
import { formatPrice } from "../format";
describe("formatPrice", () => {
it("formats cents as dollars", () => {
expect(formatPrice(2900)).toBe("$29.00");
});
it("handles zero", () => {
expect(formatPrice(0)).toBe("$0.00");
});
});
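For completeness, here is one way the formatPrice helper behind those tests could look. This is a sketch, not the article's actual implementation — it assumes US-dollar prices stored in cents, which is how Stripe represents amounts:

```typescript
// lib/format.ts — hypothetical implementation backing the tests above
const usd = new Intl.NumberFormat("en-US", {
  style: "currency",
  currency: "USD",
});

// Convert an integer amount in cents (Stripe's representation) to a display string.
export function formatPrice(cents: number): string {
  return usd.format(cents / 100);
}
```

Using Intl.NumberFormat instead of string concatenation means locale and currency changes later are a one-line edit.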
Integration testing: API routes with mocked services
Integration tests cover your API route handlers — the logic between the HTTP request and the database write. The key is mocking your external dependencies (Supabase, Stripe, Clerk) so your tests run fast and offline, without hitting real APIs.
Vitest has built-in mocking. Here's how to test a route handler that requires auth and writes to Supabase:
// app/api/user/__tests__/route.test.ts
import { describe, it, expect, vi, beforeEach } from "vitest";
import { POST } from "../route";
// Mock Clerk
vi.mock("@clerk/nextjs/server", () => ({
auth: vi.fn().mockResolvedValue({ userId: "user_123" }),
currentUser: vi.fn().mockResolvedValue({
emailAddresses: [{ emailAddress: "test@example.com" }],
}),
}));
// Mock the Supabase admin client
const mockUpsert = vi.fn().mockResolvedValue({ error: null });
vi.mock("@/lib/supabase/admin", () => ({
adminClient: {
from: () => ({ upsert: mockUpsert }),
},
}));
describe("POST /api/user", () => {
beforeEach(() => vi.clearAllMocks());
it("upserts the user and returns 200", async () => {
const res = await POST();
expect(mockUpsert).toHaveBeenCalledWith(
{ clerk_id: "user_123", email: "test@example.com" },
{ onConflict: "clerk_id" }
);
expect(res.status).toBe(200);
});
});
The same pattern applies to Stripe webhook handlers — mock stripe.webhooks.constructEvent to return a synthetic event payload, then assert that your database update logic ran correctly.
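One way to keep those webhook tests simple is to factor the post-verification logic into a pure helper that the route delegates to after stripe.webhooks.constructEvent has validated the signature. The sketch below assumes a hypothetical handleStripeEvent helper with an injected database callback; the real version would be async and call Supabase, but the synchronous form shows the assertion pattern:

```typescript
// Sketch — handleStripeEvent is hypothetical; your route handler would call it
// with the event returned by stripe.webhooks.constructEvent.
type SyntheticEvent = {
  type: string;
  data: { object: { customer: string; status: string } };
};

function handleStripeEvent(
  event: SyntheticEvent,
  updateSubscription: (customerId: string, status: string) => void
): void {
  if (event.type === "customer.subscription.updated") {
    const sub = event.data.object;
    updateSubscription(sub.customer, sub.status);
  }
}

// Hand-rolled spy — in a Vitest file you'd use vi.fn() and toHaveBeenCalledWith.
const calls: Array<[string, string]> = [];
handleStripeEvent(
  {
    type: "customer.subscription.updated",
    data: { object: { customer: "cus_123", status: "active" } },
  },
  (id, status) => calls.push([id, status])
);
```

Because the helper never touches the Stripe SDK or the network, the test runs in milliseconds and needs no API keys.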
End-to-end testing with Playwright
E2E tests run a real browser against your running application. They're slower and more brittle than unit tests, but they catch a class of bugs that no amount of mocking can: UI regressions, broken navigation, and JavaScript errors in the browser.
npm install -D @playwright/test
npx playwright install
Create playwright.config.ts:
import { defineConfig } from "@playwright/test";
export default defineConfig({
testDir: "./e2e",
use: {
baseURL: "http://localhost:3000",
},
webServer: {
command: "npm run dev",
url: "http://localhost:3000",
reuseExistingServer: true,
},
});
For SaaS, the most valuable E2E test is the checkout flow. Even a basic smoke test that verifies the pricing page loads and the checkout button exists catches a surprising number of regressions:
// e2e/checkout.spec.ts
import { test, expect } from "@playwright/test";
test("pricing page shows checkout button", async ({ page }) => {
await page.goto("/pricing");
const button = page.getByRole("link", { name: /get started/i });
await expect(button).toBeVisible();
});
test("unauthenticated checkout redirects to sign-in", async ({ page }) => {
await page.goto("/dashboard");
await expect(page).toHaveURL(/sign-in/);
});
For checkout flows that go through real Stripe, use Stripe's test mode and test card numbers (4242 4242 4242 4242). Never run E2E tests against your live Stripe account.
What NOT to test
Equally important is knowing what to skip. Time spent testing implementation details is time not spent building product.
- Third-party SDKs. Don't test that Stripe's SDK correctly creates a checkout session — Stripe tests their own SDK. Test that your code calls it with the right arguments.
- UI snapshots of third-party components. Snapshot-testing a Clerk <SignIn /> component will break every time Clerk ships a UI update.
- TypeScript types. If it compiles, the types are correct. Don't write tests that assert type relationships — that's what tsc is for.
- Database schema. Test your query logic, not that Supabase returns data in a shape you expect. Your Supabase client types already enforce that.
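The first point — test your arguments, not the SDK — is easiest when the Stripe client is injected rather than imported directly. The sketch below uses a hypothetical createCheckout helper and a made-up price ID; a test passes a fake client and asserts on what it received, without ever touching Stripe:

```typescript
// Sketch — createCheckout, CheckoutParams, and price_123 are illustrative,
// not Stripe's full API surface. The technique is dependency injection.
type CheckoutParams = {
  mode: string;
  line_items: { price: string; quantity: number }[];
};
type StripeLike = {
  checkout: { sessions: { create: (params: CheckoutParams) => { url: string } } };
};

function createCheckout(stripe: StripeLike, priceId: string) {
  return stripe.checkout.sessions.create({
    mode: "subscription",
    line_items: [{ price: priceId, quantity: 1 }],
  });
}

// Fake client that records the params instead of calling Stripe.
let received: CheckoutParams | undefined;
const fakeStripe: StripeLike = {
  checkout: {
    sessions: {
      create: (params) => {
        received = params;
        return { url: "https://checkout.stripe.test/session" };
      },
    },
  },
};

createCheckout(fakeStripe, "price_123");
```

If someone later changes the mode or drops the quantity, this test fails — which is exactly the regression you care about, and the only part of the interaction that is yours to get wrong.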
Running tests in CI with GitHub Actions
Tests only catch regressions if they run on every pull request. Here's a minimal GitHub Actions workflow that runs your unit and integration tests, plus a type check, on every pull request and every push to main:
# .github/workflows/test.yml
name: Tests
on:
push:
branches: [main]
pull_request:
jobs:
unit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
cache: npm
- run: npm ci
- run: npm run test
typecheck:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
cache: npm
- run: npm ci
- run: npx tsc --noEmit
Add a test script to your package.json:
"scripts": {
"test": "vitest run",
"test:watch": "vitest"
}
E2E tests in CI require a running server and are more expensive. Run them in a separate workflow that only triggers on merges to main, or on a nightly schedule — not on every pull request.
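A separate workflow along those lines might look like the sketch below — a fragment, not a complete setup; the cron time and the playwright install step with system dependencies are reasonable defaults you'd adjust to taste:

```yaml
# .github/workflows/e2e.yml — runs only on merges to main and nightly
name: E2E
on:
  push:
    branches: [main]
  schedule:
    - cron: "0 3 * * *" # every night at 03:00 UTC
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test
```

The webServer block in playwright.config.ts starts the dev server for you, so the workflow doesn't need a separate step to boot the app.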
A pragmatic testing strategy
For a solo SaaS founder or small team, the 80/20 approach is:
- Unit tests for any utility function you'll call from more than one place.
- Integration tests for your Stripe webhook handler and any route that writes to the database.
- One or two E2E smoke tests for the most critical user flows.
Add more tests when something breaks in production — retroactively, so the same bug can't happen twice.
The testing setup described in this guide — Vitest, type-checking in CI, and a sensible project structure — is pre-configured in GetLaunchpad, a Next.js 16 SaaS boilerplate. Get private repo access and ship your product with confidence from day one.