Node.js has consistently ranked as the most-used non-browser technology in the Stack Overflow Developer Survey for six consecutive years, with 42.7% of developers choosing it in 2023. Pair it with TypeScript — adopted by over 43% of JavaScript developers (Stack Overflow 2023) — and you get one of the most battle-tested stacks for building backend APIs that scale. This guide walks through the decisions, patterns, and code that separate a prototype from a system ready for production traffic.
Why Node.js and TypeScript Are the Default Stack for Scalable APIs
The combination works for three compounding reasons.
Non-blocking I/O at the core. Node.js's event loop handles thousands of concurrent connections without spawning new threads, making it ideal for I/O-heavy APIs — database reads, external service calls, file operations. Where a thread-per-request model exhausts memory under load, Node.js queues work and continues accepting connections.
Type safety that grows with the codebase. TypeScript catches interface mismatches and null reference errors at compile time. In a single-developer project this is convenient; in a team of ten consuming the same API it is essential. The compiler becomes a reviewer that never takes time off.
Ecosystem depth. npm hosts over 2.1 million packages as of 2024, with mature libraries covering authentication, distributed tracing, schema validation, connection pooling, and every infrastructure concern in between. You rarely need to build infrastructure primitives yourself.
Project Setup: TypeScript-First From Day One
Starting with a strict TypeScript configuration avoids painful refactors later. Create the project:
mkdir adyantrix-api && cd adyantrix-api
npm init -y
npm install express
npm install --save-dev typescript @types/node @types/express ts-node-dev
Configure TypeScript strictly (tsconfig.json):
{
"compilerOptions": {
"target": "ES2022",
"module": "CommonJS",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"noImplicitAny": true,
"strictNullChecks": true,
"esModuleInterop": true,
"skipLibCheck": true
},
"include": ["src"],
"exclude": ["node_modules", "dist"]
}
The strict: true flag turns on a bundle of stricter checks in one — noImplicitAny, strictNullChecks, strictFunctionTypes, strictPropertyInitialization, and more — so the two flags spelled out explicitly above are already covered by it; listing them is redundant but harmless documentation. Strict mode is non-negotiable for production code. It eliminates entire classes of runtime errors that only surface under load, long after deployment.
Organise your source into clear layers:
src/
routes/ # Express route definitions
controllers/ # Request/response handling
services/ # Business logic
models/ # TypeScript interfaces and types
middleware/ # Auth, validation, error handling
config/ # Environment and constants
server.ts # Entry point
Each layer has a single responsibility. Controllers stay thin — they parse the request, call the service, and return the response. Services own the logic. Models define the contracts.
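To make the layering concrete, here is a minimal sketch of a thin controller delegating to a service. The Req and Res interfaces are hand-rolled stand-ins for Express's types, and projectService with its findById method is a hypothetical service — illustrative names, not part of any library — so the example runs without framework wiring:

```typescript
// Hand-rolled stand-ins for Express's Request/Response types, so the
// layering is visible without framework wiring (names are illustrative).
export interface Req { params: Record<string, string> }
export interface Res {
  status(code: number): Res;
  json(body: unknown): Res;
}

// services/projectService.ts — owns the business logic
const projectService = {
  async findById(id: string) {
    // a real service would query the database here
    return { id, name: 'Website redesign', status: 'active' as const };
  },
};

// controllers/projectController.ts — parse, delegate, respond
export async function getProject(req: Req, res: Res): Promise<void> {
  const project = await projectService.findById(req.params.id);
  res.status(200).json(project);
}
```

The controller contains no logic worth unit-testing on its own; everything interesting lives in the service, which can be tested without HTTP at all.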
Designing for Scalability: Principles Before Code
API design decisions made on day one become technical debt or competitive advantages by year two.
Resource-based URLs with versioning
GET /api/v1/projects
POST /api/v1/projects
GET /api/v1/projects/:id
PATCH /api/v1/projects/:id
DELETE /api/v1/projects/:id
Versioning via path (/v1/) rather than headers keeps URLs debuggable and simplifies routing logic. It also allows you to run v1 and v2 simultaneously while clients migrate at their own pace.
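In Express, running both versions is simply two mounted routers. The hypothetical resolveVersion helper below shows the same dispatch logic framework-free — the handler table and route strings are illustrative:

```typescript
// Hypothetical sketch of path-based version dispatch: the same routing
// Express performs when v1 and v2 routers are mounted side by side.
type Handler = () => string;

const handlers: Record<string, Record<string, Handler> | undefined> = {
  v1: { '/projects': () => 'v1 project list' },
  v2: { '/projects': () => 'v2 project list' },
};

export function resolveVersion(path: string): Handler | undefined {
  // split "/api/v1/projects" into version ("v1") and resource path
  const match = path.match(/^\/api\/(v\d+)(\/.*)$/);
  if (!match) return undefined;
  const [, version, rest] = match;
  return handlers[version]?.[rest];
}
```

An unknown version simply resolves to nothing, which in a real app maps to a 404 rather than accidentally serving the wrong contract.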
Typed request and response contracts
// src/models/project.ts
export interface CreateProjectRequest {
name: string;
clientId: string;
deadline: string; // ISO 8601
services: string[];
}
export interface ProjectResponse {
id: string;
name: string;
clientId: string;
status: 'active' | 'completed' | 'on-hold';
createdAt: string;
}
export interface PaginatedResponse<T> {
data: T[];
pagination: {
page: number;
pageSize: number;
total: number;
totalPages: number;
};
}
Defining these interfaces creates a living contract. When the model changes, the compiler tells you every call site that breaks — before a single test runs.
Pagination from the start. Adding pagination to an existing endpoint in production is a breaking change for clients who never expected a wrapper object. Build it into the response shape before you have data volumes that demand it.
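A small helper keeps the wrapper shape consistent across every list endpoint. The paginate function below is an illustrative name, not from a library; it assembles the PaginatedResponse contract defined above (the interface is repeated here so the sketch is self-contained):

```typescript
// Repeats the PaginatedResponse contract from src/models/project.ts
interface PaginatedResponse<T> {
  data: T[];
  pagination: { page: number; pageSize: number; total: number; totalPages: number };
}

// paginate is an illustrative helper: every list endpoint returns the
// same wrapper, so clients never see a bare array.
export function paginate<T>(
  rows: T[],
  page: number,
  pageSize: number,
  total: number
): PaginatedResponse<T> {
  return {
    data: rows,
    pagination: {
      page,
      pageSize,
      total,
      totalPages: Math.ceil(total / pageSize),
    },
  };
}
```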
Middleware: The Backbone of a Maintainable API
Express middleware handles cross-cutting concerns — authentication, validation, error handling, request logging — without cluttering route handlers. A production-ready error handler:
// src/middleware/errorHandler.ts
import { Request, Response, NextFunction } from 'express';
export class AppError extends Error {
constructor(
public statusCode: number,
public message: string,
public isOperational = true
) {
super(message);
Object.setPrototypeOf(this, AppError.prototype);
}
}
export function errorHandler(
err: Error,
req: Request,
res: Response,
_next: NextFunction
): void {
if (err instanceof AppError) {
res.status(err.statusCode).json({
status: 'error',
message: err.message,
});
return;
}
console.error('Unhandled error:', err);
res.status(500).json({
status: 'error',
message: 'Internal server error',
});
}
The AppError class distinguishes operational errors (bad input, resource not found) from programmer errors (null dereferences, uncaught exceptions). Operational errors return clean, informative messages to clients. Programmer errors trigger monitoring alerts and return a generic 500 — you never leak stack traces to the outside world.
Register the handler last in your middleware chain, after all routes:
// src/server.ts
app.use('/api/v1', projectRoutes);
app.use(errorHandler); // always last
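One caveat with this chain: Express 4 does not forward rejected promises from async route handlers to the error middleware on its own (Express 5 does). A conventional wrapper — asyncHandler is a common community name, not part of Express — routes any rejection into next, and from there to errorHandler:

```typescript
// asyncHandler is a conventional wrapper, not part of Express: it
// catches a rejected promise from an async route handler and forwards
// the error to the error middleware via next(err).
type Next = (err?: unknown) => void;
type AsyncRoute = (req: unknown, res: unknown, next: Next) => Promise<unknown>;

export function asyncHandler(fn: AsyncRoute) {
  return (req: unknown, res: unknown, next: Next): void => {
    // a rejection becomes next(err), which lands in errorHandler
    fn(req, res, next).catch(next);
  };
}
```

Usage would look like router.get('/projects/:id', asyncHandler(getProject)) — without the wrapper, a thrown AppError inside an async handler crashes the process or hangs the request instead of producing a clean error response.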
Scalability Patterns: From Single Server to Production Load
Once the API is functional, these patterns bridge the gap between a working prototype and a system that handles real traffic.
Connection pooling. Database connections are expensive to establish. Without pooling, each request opens and closes a connection — catastrophic under concurrent load. With pg (PostgreSQL):
import { Pool } from 'pg';
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 20,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 2000,
});
export default pool;
A pool of 20 connections serves hundreds of concurrent requests without overwhelming the database, because most requests spend the majority of their time waiting for query results — not holding open connections.
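The borrowing semantics are easier to see in miniature. The toy TinyPool below is not the pg implementation — it is a sketch of the idea: hand out at most max resources, and queue further acquirers until one is released:

```typescript
// Toy illustration of pooling (not the pg implementation): at most
// `max` resources exist; extra acquirers wait in a FIFO queue.
export class TinyPool<T> {
  private idle: T[] = [];
  private waiters: ((resource: T) => void)[] = [];
  private created = 0;

  constructor(private factory: () => T, private max: number) {}

  async acquire(): Promise<T> {
    if (this.idle.length > 0) return this.idle.pop()!;
    if (this.created < this.max) {
      this.created++;
      return this.factory(); // lazily create up to the cap
    }
    // cap reached: park the caller until release() hands us a resource
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  release(resource: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(resource); // hand directly to the next in line
    else this.idle.push(resource);
  }
}
```

This is exactly why max: 20 is safe: the database only ever sees 20 connections, while any number of requests queue briefly in the application.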
In-memory caching with Redis. For read-heavy endpoints — configuration data, reference lists, product catalogues — Redis caching eliminates redundant database round-trips:
import { createClient } from 'redis';
const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();
async function getCachedOrFetch<T>(
key: string,
ttlSeconds: number,
fetcher: () => Promise<T>
): Promise<T> {
const cached = await redis.get(key);
if (cached) return JSON.parse(cached) as T;
const data = await fetcher();
await redis.setEx(key, ttlSeconds, JSON.stringify(data));
return data;
}
A 60-second TTL on a heavily-read endpoint reduces database queries from one per request to one per minute for that resource — measurable relief on database CPU and connection count.
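To see the cache-aside flow without a Redis instance, here is the same pattern with an in-memory Map standing in for Redis. TTL handling is elided, and getCachedProject and fetchProjectFromDb are illustrative names introduced for this sketch:

```typescript
// Cache-aside in miniature: a Map stands in for Redis, and dbCalls
// counts simulated database round-trips so the savings are visible.
const store = new Map<string, string>();
let dbCalls = 0;

async function fetchProjectFromDb(id: string): Promise<{ id: string }> {
  dbCalls++; // stands in for a pool.query round-trip
  return { id };
}

export async function getCachedProject(id: string): Promise<{ id: string }> {
  const key = `project:${id}`;
  const cached = store.get(key);
  if (cached) return JSON.parse(cached) as { id: string }; // cache hit
  const data = await fetchProjectFromDb(id);                // cache miss
  store.set(key, JSON.stringify(data));
  return data;
}
```

Two calls for the same id produce one database round-trip; with Redis the Map is simply shared across all API processes, which is what makes it work under PM2 cluster mode.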
Horizontal scaling with PM2. Node.js executes JavaScript on a single thread, so one process leaves the other cores of a multi-core server idle. PM2's cluster mode forks one process per CPU core:
npm install -g pm2
pm2 start dist/server.js -i max --name "api"
pm2 save
Combined with a load balancer (AWS ALB, NGINX), throughput scales roughly linearly with cores. A four-core instance effectively becomes four independent API servers behind a single entry point.
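The same flags can live in an ecosystem file so deployments stay reproducible instead of depending on a remembered CLI invocation. A typical cluster-mode config, with illustrative values:

```javascript
// ecosystem.config.js — started with `pm2 start ecosystem.config.js`
// (name, script, and env values here are illustrative)
module.exports = {
  apps: [
    {
      name: 'api',
      script: 'dist/server.js',
      instances: 'max',      // one worker per CPU core
      exec_mode: 'cluster',  // workers share the listening port
      env: { NODE_ENV: 'production' },
    },
  ],
};
```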
Choosing the Right Framework: A Comparison
Express is the most widely deployed Node.js framework, but it is not always the best fit. Here is how the main options compare for TypeScript REST APIs:
| Framework | TypeScript Support | Throughput (req/s) | Learning Curve | Best Fit |
|---|---|---|---|---|
| Express | Via @types | ~15,000 | Low | Flexibility, large existing ecosystem |
| Fastify | Native schemas | ~30,000 | Medium | High-throughput, schema-validated APIs |
| NestJS | Native decorators | ~18,000 | High | Enterprise teams, opinionated structure |
| Hono | Native | ~50,000+ | Low | Edge/serverless, minimal overhead |
Express remains the practical default for teams with existing Node.js experience. Fastify is the right call when throughput benchmarks matter and you want built-in JSON schema validation. NestJS pays dividends in large teams where enforced structure reduces onboarding time and code review overhead.
Testing: Validate Contracts, Not Just Units
An API that passes unit tests can still fail in integration. For scalable REST APIs, the priority order is:
- Integration tests — test the full HTTP layer against a real database (Docker Compose is the standard tool for this)
- Contract tests — validate that response shapes match the TypeScript interfaces at runtime
- Load tests — use Artillery or k6 to find the breaking point before production does
// src/__tests__/projects.test.ts
import request from 'supertest';
import app from '../app';
// testToken is assumed to come from the test setup (e.g. a seeded login)
declare const testToken: string;
describe('GET /api/v1/projects', () => {
it('returns paginated results with the correct shape', async () => {
const res = await request(app)
.get('/api/v1/projects?page=1&pageSize=10')
.set('Authorization', `Bearer ${testToken}`);
expect(res.status).toBe(200);
expect(res.body).toHaveProperty('data');
expect(res.body).toHaveProperty('pagination.total');
expect(Array.isArray(res.body.data)).toBe(true);
});
});
Never mock the database in integration tests. The queries, indices, and constraint behaviour are exactly what you need to validate — and they are the first things to diverge when a schema migration goes wrong in production.
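Contract tests need a runtime check of the response shape, because TypeScript interfaces vanish at compile time. In practice a schema library such as Zod does this; the hand-rolled isPaginatedResponse sketch below (an illustrative name) shows the idea for the PaginatedResponse contract:

```typescript
// Runtime shape check for the PaginatedResponse contract — a minimal
// stand-in for what a schema library would generate.
export function isPaginatedResponse(body: unknown): boolean {
  if (typeof body !== 'object' || body === null) return false;
  const b = body as Record<string, unknown>;
  if (!Array.isArray(b.data)) return false;
  const p = b.pagination as Record<string, unknown> | undefined;
  return (
    typeof p === 'object' &&
    p !== null &&
    ['page', 'pageSize', 'total', 'totalPages'].every(
      (k) => typeof p[k] === 'number'
    )
  );
}
```

Dropped into the supertest suite above, expect(isPaginatedResponse(res.body)).toBe(true) catches a schema drift that property-by-property assertions quietly miss.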
Frequently Asked Questions
What is the difference between REST and GraphQL for scalable APIs? REST is simpler to cache at the CDN and load-balancer level, easier to secure via standard HTTP semantics, and aligns naturally with resource-based thinking. GraphQL is better when diverse clients need flexible data shapes and you want to eliminate over-fetching. For most B2B APIs, REST with well-designed endpoints is the pragmatic default; GraphQL adds measurable value when you have multiple client types with conflicting data requirements.
How many requests per second can a Node.js API handle? A single Node.js process on a four-vCPU server typically handles 10,000–20,000 simple requests per second. With PM2 cluster mode, this scales roughly linearly with CPU cores. In practice, the bottleneck is almost always the database — which is why connection pooling and caching deliver more throughput gain than any Node.js tuning.
Should I use TypeScript decorators (NestJS-style) or plain functions? Decorators reduce boilerplate in large teams and enforce consistent project structure, which is why NestJS is popular in enterprise settings. Plain functions (Express/Fastify style) are easier to debug, test in isolation, and reason about. For teams under ten engineers, plain functions with clear folder conventions are usually faster to move with and easier to onboard into.
When should I add an API gateway? When you have two or more independently deployed services. A gateway (AWS API Gateway, Kong, Traefik) centralises authentication, rate limiting, and routing at the infrastructure level. For a single API, it adds operational overhead without proportional benefit — handle rate limiting and auth in middleware instead.
How do I handle breaking API changes without disrupting existing clients?
Path-based versioning (/v1/, /v2/) is the most client-friendly approach. Keep the previous version live until traffic drops below a meaningful threshold — typically under 5% — then deprecate with a Sunset response header before removal. Enterprise clients have release cycles that do not move quickly; communicate deprecation timelines at least 90 days in advance.
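The Sunset announcement fits naturally in middleware on every v1 route, using the Sunset header from RFC 8594 alongside the companion Deprecation header. A sketch, with a minimal Res type standing in for Express's and the sunset factory name introduced here:

```typescript
// Deprecation middleware sketch: every response from the old version
// carries the planned removal date (Sunset, RFC 8594).
interface Res {
  setHeader(name: string, value: string): void;
}

export function sunset(removalDate: string) {
  return (_req: unknown, res: Res, next: () => void): void => {
    res.setHeader('Sunset', removalDate);   // HTTP-date of planned removal
    res.setHeader('Deprecation', 'true');   // companion deprecation signal
    next();
  };
}
```

Mounted as app.use('/api/v1', sunset('Sat, 31 May 2025 00:00:00 GMT'), v1Routes) (an illustrative date), well-behaved clients can detect the deadline programmatically instead of relying on an email they may never read.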
Conclusion
Building a scalable REST API with Node.js and TypeScript is not a single decision — it is a series of deliberate choices that compound: strict TypeScript configuration, layered architecture, typed contracts, connection pooling, caching, horizontal scaling, and integration testing. Teams that make these choices early ship faster, debug less, and scale without rewrites.
At Adyantrix, our engineering teams design and deliver backend systems built for production from the first commit — whether that means a greenfield Node.js API, a TypeScript migration of an existing codebase, or a cloud-native deployment on AWS or Azure. If you are planning a new API or need to scale an existing one, speak with our custom software development team to see how we approach it.