Most teams reach for microservices before they've earned them. The result is a distributed monolith: all the complexity of a distributed system with none of the benefits. Here's a framework for deciding when monoliths, modular monoliths, and true distributed systems each make sense.

The Monolith Isn't the Problem

Somewhere around 2016, "monolith" became a dirty word. Conference talks treated single-deployment architectures like legacy baggage. Teams that had perfectly functional Rails or Express apps started splitting them into services because that's what the blog posts said to do.

The irony is that most of the companies held up as microservices success stories -- Netflix, Amazon, Uber -- started as monoliths and only split when they hit genuine scaling walls. Their microservices architectures are the result of specific, measurable pain points, not architectural fashion.

If you can't build a well-structured monolith, what makes you think microservices are the answer?

Simon Brown

The monolith has real advantages that are easy to forget. A single deployment unit means a single CI/CD pipeline, a single log stream, a single database transaction boundary. Refactoring is an IDE operation, not a cross-team negotiation. New developers can run the entire system on their laptop in minutes.

Architecture Is a Spectrum

The choice isn't binary. Between "everything in one process" and "every feature is its own service" lies a range of options. Understanding where your system falls on this spectrum -- and where it should fall -- is the real architectural question.

Fig. 1 -- The architecture spectrum, from monolith to microservices. Most systems belong somewhere in the middle.

Here's how I think about it. Each step along the spectrum trades simplicity for a specific capability. The question is whether you actually need that capability today, or whether you're paying a complexity tax on a future that may never arrive.

  • Monolith -- Single deployment, shared database, in-process communication. Best for small teams and early-stage products.
  • Modular monolith -- Single deployment, but with strict module boundaries, explicit APIs between modules, and the ability to extract services later. Best for growing teams.
  • Service-oriented -- A handful of coarse-grained services aligned with business domains. Shared infrastructure, independent deployments. Best for medium organizations.
  • Microservices -- Many small, independently deployable services with separate data stores. Best for large organizations with mature DevOps practices.

The Modular Monolith: Best of Both

The modular monolith is the most underrated architecture in software. You get the deployment simplicity of a monolith with the internal structure that makes future extraction possible. The key is enforcing module boundaries at compile time, not just by convention.

Here's what a well-structured modular monolith looks like in a Node.js context. Each module exposes a public API and hides its implementation details:

TypeScript src/modules/billing/index.ts
// Public API -- the only thing other modules can import
export { BillingService } from './billing.service';
export { CreateInvoiceDTO, InvoiceResponse } from './billing.types';

// Everything else is internal:
// ./repositories/invoice.repository.ts
// ./entities/invoice.entity.ts
// ./events/invoice-created.event.ts
// These are never exported from the module root.
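
The public API in index.ts is only a convention until tooling enforces it. As a minimal sketch of one way to do that, the ESLint rule below rejects deep imports into any module's internals. The flat-config setup, the TypeScript config file, and the glob patterns are all assumptions to adapt: the patterns presume cross-module imports go through a path containing modules/<name> (for example via a path alias), and dedicated tools such as dependency-cruiser can do the same job with more precision.

TypeScript eslint.config.ts
// Sketch only -- assumes an ESLint 9+ flat-config setup that can load a
// TypeScript config file. Glob patterns are illustrative and need tuning
// to match how your project references modules (aliases vs. relative paths).
export default [
  {
    files: ['src/**/*.ts'],
    rules: {
      'no-restricted-imports': [
        'error',
        {
          patterns: [
            {
              // Block deep imports into any module's internals; the module
              // root (its index.ts public API) stays importable.
              group: ['**/modules/*/**', '!**/modules/*/index'],
              message:
                'Import other modules only through their public API (modules/<name>/index.ts).',
            },
          ],
        },
      ],
    },
  },
];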

The BillingService class itself depends only on abstractions, making it testable in isolation and extractable into its own service later:

TypeScript src/modules/billing/billing.service.ts
import { Injectable } from '@nestjs/common';
import { EventBus } from '../../shared/events';
import { InvoiceRepository } from './repositories/invoice.repository';
import { CreateInvoiceDTO, InvoiceResponse } from './billing.types';
import { InvoiceCreated } from './events/invoice-created.event';

@Injectable()
export class BillingService {
  constructor(
    private readonly invoices: InvoiceRepository,
    private readonly events: EventBus,
  ) {}

  async createInvoice(
    dto: CreateInvoiceDTO
  ): Promise<InvoiceResponse> {
    const invoice = await this.invoices.create({
      customerId: dto.customerId,
      lineItems: dto.lineItems,
      dueDate: dto.dueDate,
    });

    // Communicate via events, not direct calls
    await this.events.publish(
      new InvoiceCreated(invoice.id, invoice.total)
    );

    // toResponse() maps the internal invoice entity to the public
    // InvoiceResponse shape; the mapping is omitted here.
    return this.toResponse(invoice);
  }
}

Notice the event-based communication. The billing module doesn't call the notification module directly. It publishes an event that any interested module can subscribe to. Today, that event bus is in-process. Tomorrow, it could be RabbitMQ or Amazon EventBridge. The module code doesn't change.
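
For the consuming side, here's a hedged sketch of what a subscriber in a hypothetical notifications module could look like. The subscribe() method on the shared EventBus, the re-export of InvoiceCreated from billing's public API, the NotificationService, and the event's field names are all assumptions for illustration; the point is that the handler registers interest in the event without ever importing BillingService.

TypeScript src/modules/notifications/invoice-created.handler.ts
// Sketch only -- assumes the shared EventBus exposes subscribe(), that the
// billing module re-exports the InvoiceCreated contract from its public API,
// and that a NotificationService exists in this module. Field names on the
// event are illustrative.
import { Injectable, OnModuleInit } from '@nestjs/common';
import { EventBus } from '../../shared/events';
import { InvoiceCreated } from '../billing';
import { NotificationService } from './notification.service';

@Injectable()
export class InvoiceCreatedHandler implements OnModuleInit {
  constructor(
    private readonly events: EventBus,
    private readonly notifications: NotificationService,
  ) {}

  onModuleInit() {
    // Register interest in the event. The handler knows nothing about how
    // invoices are created and never touches billing internals.
    this.events.subscribe(InvoiceCreated, async (event: InvoiceCreated) => {
      await this.notifications.sendInvoiceEmail(event.invoiceId, event.total);
    });
  }
}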

When to Actually Split

After years of working on systems that split too early and systems that split too late, I've developed a simple heuristic. You should consider extracting a service when at least two of these are true:

  1. Independent scaling needs. One module needs 10x the compute resources of others, and you're wasting money scaling everything together.
  2. Independent release cadence. One team needs to deploy multiple times daily while others deploy weekly, and they're blocking each other.
  3. Technology boundary. A module genuinely benefits from a different language, runtime, or data store, not because of preference but because of measurable characteristics.
  4. Organizational boundary. Two teams in different time zones own different modules and the coordination cost of shared deployments exceeds the complexity cost of separate services.

The first rule of distributed systems is don't distribute your system. The second rule is that you probably still shouldn't.

Fred Lackey, after the third rewrite

The Migration Path

When it is time to extract a service, the modular monolith structure makes the path clear. Here's the approach I've used successfully on three production systems:

Step 1: Verify the Boundary

Before extracting anything, verify that the module boundary is clean. Every dependency should flow through the public API. No module should reach into another module's database tables. A quick way to validate this:

Bash scripts/check-boundaries.sh
#!/bin/bash
# Find any imports that bypass module public APIs

MODULE="billing"

# This should return zero results. The first exclusion drops files inside
# the module itself (internal imports are fine); the second drops imports
# that go through the module's index.
grep -rn "from.*modules/${MODULE}/" src/ \
  --include="*.ts" \
  | grep -v "^src/modules/${MODULE}/" \
  | grep -v "modules/${MODULE}/index"

if [ $? -eq 0 ]; then
  echo "VIOLATION: Direct imports into ${MODULE} internals"
  exit 1
else
  echo "OK: All imports go through public API"
fi

Step 2: Replace In-Process Events with a Message Broker

Swap the in-memory event bus for a real message broker. Because modules already communicate via events, this is a configuration change, not a refactor. The module code stays identical.

Fig. 2 -- Replacing the in-process event bus with a message broker. Module code is unchanged.
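
As a sketch of what that configuration change could look like in this NestJS setup, bind the EventBus token to a different implementation based on an environment variable. InProcessEventBus, RabbitMqEventBus, the file names inside shared/events, and the EVENT_BUS_DRIVER variable are assumed names; the publishers and subscribers shown earlier keep depending on the same EventBus abstraction either way.

TypeScript src/shared/events/events.module.ts
// Sketch only -- InProcessEventBus and RabbitMqEventBus are assumed
// implementations of the same EventBus abstraction, EVENT_BUS_DRIVER is an
// illustrative environment variable, and file paths are illustrative.
import { Module } from '@nestjs/common';
import { EventBus } from './event-bus';
import { InProcessEventBus } from './in-process-event-bus';
import { RabbitMqEventBus } from './rabbitmq-event-bus';

@Module({
  providers: [
    {
      provide: EventBus,
      // Choose the transport at startup; module code that publishes or
      // subscribes through EventBus never changes.
      useClass:
        process.env.EVENT_BUS_DRIVER === 'rabbitmq'
          ? RabbitMqEventBus
          : InProcessEventBus,
    },
  ],
  exports: [EventBus],
})
export class EventsModule {}

This assumes EventBus is an abstract class rather than a plain interface, so it can serve as an injection token -- which matches how BillingService already injects it.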

Step 3: Deploy Independently

Once events flow through a broker, the module can be deployed as a standalone service. It keeps its own database, processes its own events, and exposes its public API over HTTP or gRPC instead of in-process calls. The consuming modules switch from direct imports to HTTP clients, ideally generated from an OpenAPI spec to keep type safety.

TypeScript src/clients/billing.client.ts
// Generated from billing service OpenAPI spec
import { Configuration, BillingApi } from '@internal/billing-client';

const config = new Configuration({
  basePath: process.env.BILLING_SERVICE_URL,
  accessToken: process.env.SERVICE_TOKEN,
});

export const billingClient = new BillingApi(config);

// Usage is nearly identical to the direct import:
// Before: billingService.createInvoice(dto)
// After:  billingClient.createInvoice(dto)

What I'd Do Today

If I were starting a new project today with a team of 3-8 developers, I'd reach for a modular monolith every time. Not because I don't believe in microservices, but because the modular monolith gives you the option to go either direction based on evidence rather than speculation.

The teams I've seen succeed with distributed systems all followed a similar path: they built something simple, ran it in production until they understood the real bottlenecks, and then made targeted architectural changes to address those specific constraints. The teams that struggled were the ones who designed for problems they hadn't encountered yet.

Start with structure. Split when it hurts. Measure before you migrate.