Node.js and Express: Testing

Testing is a fundamental practice in modern software development that helps ensure code quality, reliability, and maintainability. In the context of Node.js and Express applications, testing provides several critical benefits:
Early Bug Detection: Identify issues before they reach production
Confidence in Refactoring: Safely modify code knowing tests will catch regressions
Documentation: Tests serve as living documentation of how our API should behave
Faster Development: Automated tests are faster than manual testing
Better Design: Writing testable code often leads to better architecture
This article demonstrates how to implement testing in a Node.js and Express application using the built-in Node.js test runner, Supertest for HTTP assertions, and Faker for generating test data.
Testing Stack Overview
Node.js Test Runner
Starting with version 18 (and stable since version 20), Node.js includes a built-in test runner that provides:
Zero Dependencies: No need for external test frameworks.
Native Integration: First-class support from the Node.js team.
Modern API: Supports async/await and promises natively.
Test Organization: Provides describe, it, and hooks like beforeEach and afterEach.
Built-in Assertions: Includes the assert module for making assertions.
Fast Execution: Optimized for performance.
Supertest
Supertest is an HTTP assertion library that makes testing Express applications straightforward:
Fluent API: Chainable methods for making requests and asserting responses.
Express Integration: Works seamlessly with Express applications.
No Server Startup Required: Can test our Express app without manually starting a server.
Comprehensive Assertions: Built-in methods for checking status codes, headers, and response bodies.
Faker
Faker generates realistic test data:
Diverse Data Types: Can generate names, emails, addresses, numbers, and more.
Consistency: Supports seeding for reproducible test data.
Localization: Supports multiple locales for region-specific data.
Realistic Data: Produces data that mimics real-world scenarios better than hardcoded values.
Project Structure
The testing setup in the reference repository follows this structure:
tests/
├── setup.js                # Test configuration and teardown
└── todos/
    ├── addTodo.test.js     # Tests for adding todos
    ├── checkTodo.test.js   # Tests for checking todos
    ├── findTodo.test.js    # Tests for finding todos
    ├── uncheckTodo.test.js # Tests for unchecking todos
    └── todoDsl.js          # Domain-specific language for tests
Installation
First, install the required testing dependencies (cross-env is also needed, as the test script will use it to set NODE_ENV):
npm install --save-dev supertest @faker-js/faker cross-env
Separating App from Server
A crucial aspect of making our Express application testable is separating the app configuration from the server startup. This allows Supertest to create test instances without actually starting the server. The application is split into two files: app.js and server.js. Create the app.js file with the following content:
import express from 'express';
import dotenv from 'dotenv';
import todosRoutes from './features/todos/routes.js';
import healthRoutes from './routes/health.js';
import { errorHandler, NotFoundError } from './middlewares/errorHandler.js';
import morgan from 'morgan';
import expressWinston from 'express-winston';
import logger from './config/logger.js';
import helmet from 'helmet';
import cors from 'cors';
import swaggerRoutes from './routes/swagger.js';
dotenv.config();
const app = express();
app.use(helmet());
app.use(
cors({
origin: process.env.ALLOWED_ORIGIN,
})
);
app.use(express.json());
app.use(morgan('dev'));
app.use(
expressWinston.logger({
winstonInstance: logger,
msg: 'HTTP {{req.method}} {{req.url}} {{res.statusCode}} {{res.responseTime}}ms',
})
);
app.use('/api-docs', swaggerRoutes);
app.use('/health', healthRoutes);
app.use('/api/todos', todosRoutes);
app.all('/*splat', (req, res, next) => {
const pathSegments = req.params.splat;
const fullPath = pathSegments.join('/');
next(new NotFoundError(`The requested URL /${fullPath} does not exist`));
});
app.use(
expressWinston.errorLogger({
winstonInstance: logger,
msg: '{{err.message}} {{res.statusCode}} {{req.method}}',
})
);
app.use(errorHandler);
export default app;
Configuration Only: This file sets up middleware, routes, and error handlers, but doesn't start the server.
Export the App: The Express app is exported as the default export.
No app.listen(): Crucially, there's no call to start listening on a port.
Environment Variables: Loaded via dotenv for both development and testing.
Next, update the server.js file as follows:
import app from './app.js';
const PORT = process.env.PORT || 3000;
process.on('uncaughtException', err => {
console.error(err.name, err.message);
process.exit(1);
});
const server = app.listen(PORT, () => {
console.log(`Server running on port ${PORT}`);
});
process.on('unhandledRejection', err => {
console.error(err.name, err.message);
server.close(() => {
process.exit(1);
});
});
Server Startup Only: This file is responsible solely for starting the HTTP server.
Imports the App: Gets the configured Express app from app.js.
Production Entry Point: This is what runs when we start our application normally.
Not Used in Tests: Tests import app.js directly, bypassing this file.
Handling Authentication in Tests
The auth.js file shows how to bypass authentication during testing:
export function verifyJWT(options) {
return (req, res, next) => {
if (process.env.NODE_ENV === 'test') {
req.user = { sub: 'test', email: 'test@example.com' };
return next();
}
// ... authentication logic
};
}
In the test environment, authentication is bypassed.
This allows testing business logic without dealing with token generation.
The req.user object is populated with test data.
Production code remains unchanged.
Database Configuration for Testing
When testing applications that interact with databases, it's crucial to have a proper database configuration that works across different environments. The knexfile.js handles this configuration.
import dotenv from 'dotenv';
dotenv.config();
const config = {
development: {
client: 'pg',
connection: process.env.CONNECTION_STRING,
migrations: {
directory: './migrations',
extension: 'js',
},
pool: {
min: 2,
max: 10,
},
},
test: {
client: 'pg',
connection: process.env.CONNECTION_STRING,
migrations: {
directory: './migrations',
extension: 'js',
},
pool: {
min: 2,
max: 10,
},
},
};
export default config;
Environment-Specific Configurations: Separate configurations for development and test environments allow different database settings.
Connection String: Reads from environment variables, allowing different databases for different environments.
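At runtime, knex (or application code) selects the block matching the current environment. The sketch below shows that lookup with placeholder connection strings invented for illustration:

```javascript
// environment-keyed config, shaped like knexfile.js (connection strings are placeholders)
const config = {
  development: { client: 'pg', connection: 'postgres://localhost/todos_dev' },
  test: { client: 'pg', connection: 'postgres://localhost/todos_test' },
};

// pick the block for the current environment, falling back to development
const env = process.env.NODE_ENV ?? 'development';
const active = config[env] ?? config.development;
console.log(active.client); // → "pg"
```

The same pattern extends to a production block; typically only the connection string (supplied via environment variables) changes between environments.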
Test Setup and Teardown
The setup.js file handles global test configuration:
import { after } from 'node:test';
import db from '../src/config/database.js';
after(async () => {
await db.destroy();
});
The after hook runs once after all tests complete.
The db.destroy() call closes all database connections in the pool. Without it, the open connections would keep the Node.js process alive after the tests finish.
Creating a Test DSL (Domain-Specific Language)
A test DSL provides reusable functions that abstract common testing operations, allowing for more efficient and consistent testing. The todoDsl.js file demonstrates this pattern:
import request from 'supertest';
import app from '../../src/app.js';
import { faker } from '@faker-js/faker';
import assert from 'node:assert';
export const randomTodo = () => {
return {
title: faker.lorem.sentence(15),
};
};
export const addTodo = async (todo, errors) => {
const response = await request(app).post('/api/todos').send(todo);
if (errors === undefined) {
assert.strictEqual(response.status, 201);
assert(response.body.id);
} else {
assert.strictEqual(response.status, 400);
assert.deepStrictEqual(response.body.detail, errors);
}
return response.body;
};
export const findTodo = async (todoId, error) => {
const response = await request(app).get(`/api/todos/${todoId}`).send();
if (error === undefined) {
assert.strictEqual(response.status, 200);
assert(response.body.id);
} else {
assert.strictEqual(response.status, 404);
assert.strictEqual(response.body.detail, error);
}
return response.body;
};
export const checkTodo = async (todoId, error) => {
const response = await request(app).post(`/api/todos/${todoId}/check`).send();
if (error === undefined) {
assert.strictEqual(response.status, 200);
assert(response.body.id);
} else {
assert(
response.status === 404 || response.status === 400,
`Expected status 404 or 400, but got ${response.status}`
);
assert.strictEqual(response.body.detail, error);
}
return response.body;
};
export const uncheckTodo = async (todoId, error) => {
const response = await request(app)
.post(`/api/todos/${todoId}/uncheck`)
.send();
if (error === undefined) {
assert.strictEqual(response.status, 200);
assert(response.body.id);
} else {
assert(
response.status === 404 || response.status === 400,
`Expected status 404 or 400, but got ${response.status}`
);
assert.strictEqual(response.body.detail, error);
}
return response.body;
};
In the randomTodo function, instead of hardcoded values like "Task 1", we use Faker to generate realistic data. This makes tests more robust and catches issues that only appear with varied data.
Writing Test Cases
Time to write the tests using our previously created DSL. The addTodo.test.js file contains the following content:
import '../setup.js';
import { test, describe } from 'node:test';
import assert from 'node:assert';
import { addTodo, randomTodo } from './todoDsl.js';
describe('Add todos', () => {
test('should create a new todo', async () => {
const todo = randomTodo();
const result = await addTodo(todo);
assert.strictEqual(result.title, todo.title);
assert.strictEqual(result.completed, false);
assert(result.created_at);
});
test('should throw an error for invalid todo', async () => {
const todo = {
title: '',
};
await addTodo(todo, ['title is a required field']);
});
});
The findTodo.test.js file contains just one test:
import '../setup.js';
import { test, describe } from 'node:test';
import { addTodo, randomTodo, findTodo } from './todoDsl.js';
describe('Find todo', () => {
test('should return an existing todo', async () => {
const todo = randomTodo();
const result = await addTodo(todo);
await findTodo(result.id);
});
});
This test demonstrates isolation: it first creates a todo, then verifies it can be retrieved, so the test doesn't depend on pre-existing database state. The checkTodo.test.js and uncheckTodo.test.js files validate state transitions:
import '../setup.js';
import { test, describe } from 'node:test';
import { addTodo, randomTodo, checkTodo } from './todoDsl.js';
describe('Check todo', () => {
test('should mark todo as completed', async () => {
const todo = randomTodo();
const result = await addTodo(todo);
await checkTodo(result.id);
});
test('should throw an error for non-existent todo', async () => {
const nonExistentId = '01982e32-b58f-7051-a9ed-61e1f125b07c';
await checkTodo(nonExistentId, 'Todo not found');
});
test('should throw an error for invalid todo ID format', async () => {
const invalidId = 'invalid-id';
await checkTodo(
invalidId,
'The provided todoId does not match the UUIDv7 format'
);
});
});
The uncheckTodo.test.js file follows the same pattern:
import '../setup.js';
import { test, describe } from 'node:test';
import { addTodo, randomTodo, checkTodo, uncheckTodo } from './todoDsl.js';
describe('Uncheck todo', () => {
test('should mark todo as not completed', async () => {
const todo = randomTodo();
const result = await addTodo(todo);
await checkTodo(result.id);
await uncheckTodo(result.id);
});
test('should uncheck an already unchecked todo', async () => {
const todo = randomTodo();
const result = await addTodo(todo);
await uncheckTodo(result.id);
});
test('should throw an error for non-existent todo', async () => {
const nonExistentId = '01982e32-b58f-7051-a9ed-61e1f125b07c';
await uncheckTodo(nonExistentId, 'Todo not found');
});
test('should throw an error for invalid todo ID format', async () => {
const invalidId = 'invalid-id';
await uncheckTodo(
invalidId,
'The provided todoId does not match the UUIDv7 format'
);
});
});
Running Tests
Update the package.json file to include a test script:
{
"scripts": {
"test": "cross-env NODE_ENV=test node --test tests"
}
}
Execute tests using npm:
npm test
The Node.js test runner will:
Discover all test files matching the pattern.
Execute tests in parallel (by default).
Display results with detailed output.
Exit with appropriate code (0 for success, 1 for failure).
You can find all the code here. Thanks, and happy coding.




