Composable Configuration Patterns
Every application starts with a simple .env file. A handful of
key-value pairs: database URL, API key, port number. It works. Then the application
grows. You need different values for development, staging, and production. You add
a second file. Then a third. Then you realize half your "configuration" is actually
business logic masquerading as environment variables.
This note explores a pattern I've refined over years of building distributed systems: layered, composable configuration that scales from a single-file prototype to a multi-environment production deployment without ever requiring a rewrite of the config layer itself.
The Problem With Flat Configuration
The standard .env approach has a fundamental limitation: it's flat.
Every value exists at the same level of specificity. There's no concept of
inheritance, defaults, or scoping. When you need to express "use this database in
production, but this one in staging, and this one locally," you end up with one of
two bad patterns:
- Multiple .env files (.env.local, .env.staging, .env.production) with massive duplication between them
- A single file where someone has helpfully commented out the "other" values, leading to merge conflicts and deployment accidents
Both of these approaches violate the DRY principle at the infrastructure level. And both get worse as the team grows. I learned this lesson the hard way on a project where we had 47 environment variables and three environments—that's 141 values to keep synchronized, most of which were identical across environments.
Configuration should be structured like CSS: specific rules override general ones, and anything not explicitly overridden inherits from the layer below.
The Composable Layer Model
The pattern I've settled on uses four distinct layers, each with a clear purpose and a strict merge order. This mirrors how I think about interface thinking—the contract between layers matters more than the implementation of any single layer.
Layer 1: Application Defaults
The base layer lives in code. These are the values that make sense for the application in its most generic context. They are always version-controlled and always present.
// config/defaults.js
export const defaults = {
server: {
port: 3000,
host: '0.0.0.0',
gracefulShutdownMs: 10000,
},
database: {
pool: { min: 2, max: 10 },
timeout: 5000,
ssl: false,
},
logging: {
level: 'info',
format: 'json',
redactKeys: ['password', 'token', 'secret'],
},
};
Notice: no URLs, no credentials, no secrets. Only structural configuration that defines the shape of the application's expectations. This layer answers "what does the app need?" without answering "where does it find it?"
Layer 2: Environment Profile
The second layer maps to deployment environments. It overrides defaults with environment-specific values. This is where boring deploys become possible—each environment is a well-defined, version-controlled profile.
// config/environments/production.js
export const production = {
server: {
gracefulShutdownMs: 30000,
},
database: {
pool: { min: 5, max: 50 },
ssl: true,
},
logging: {
level: 'warn',
},
};
Only the values that differ from defaults are specified. Everything else is inherited. This is the core principle: override only what changes.
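To make the inheritance concrete, here is an illustration of what the production process ends up with once the two layers combine (deepMerge is the small helper shown below in the merge section, and the module path for it is hypothetical); the console output is annotated by hand:
import { defaults } from './defaults.js';
import { production } from './environments/production.js';
import { deepMerge } from './merge.js'; // hypothetical module housing deepMerge

const effective = deepMerge({}, defaults, production);

console.log(effective.database);
// → { pool: { min: 5, max: 50 }, timeout: 5000, ssl: true }
//   pool and ssl come from the profile; timeout flows through from defaults.
console.log(effective.logging.level); // → 'warn' (overridden); format stays 'json'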
Layer 3: External Secrets
Credentials and connection strings come from the runtime environment—injected by your CI/CD pipeline, a secrets manager, or the orchestrator. They never appear in source control.
// config/secrets.js
// env() reads a required variable from process.env and fails loudly if it's absent.
const env = (name) => {
  const value = process.env[name];
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
};

export const secrets = {
  database: {
    url: env('DATABASE_URL'),
  },
  auth: {
    jwtSecret: env('JWT_SECRET'),
    oauthClientId: env('OAUTH_CLIENT_ID'),
  },
};
Layer 4: Local Overrides
The final layer is developer-specific. It's gitignored. It exists so that each
person on the team can tweak their local setup without affecting anyone else.
This is where you set logging.level: 'debug' or point at a local
database.
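A sketch of what such a file might look like; the filename and every value in it are hypothetical, since this layer is by definition never checked in:
// config/local.js (gitignored) — one developer's personal overrides.
export const localOverrides = {
  database: {
    url: 'postgres://localhost:5432/myapp_dev', // hypothetical local database
  },
  logging: {
    level: 'debug',
    format: 'pretty', // assumption: a human-readable formatter for local work
  },
};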
The Merge Strategy
Layers are deep-merged in order. Later layers override earlier ones. The merge function itself is surprisingly small:
// Treat only plain objects as mergeable; arrays and other values are replaced wholesale.
const isPlainObject = (value) =>
  value !== null && typeof value === 'object' && !Array.isArray(value);

function deepMerge(target, ...sources) {
  for (const source of sources) {
    for (const key of Object.keys(source)) {
      if (isPlainObject(source[key]) && isPlainObject(target[key])) {
        target[key] = deepMerge({ ...target[key] }, source[key]);
      } else {
        target[key] = source[key];
      }
    }
  }
  return target;
}
// Usage: later layers win. A missing layer falls back to an empty object,
// so an unknown NODE_ENV or an absent local file can't crash the merge.
const config = deepMerge(
  {},
  defaults,
  environments[env('NODE_ENV')] ?? {},
  secrets,
  localOverrides ?? {}
);
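One deliberate consequence of this merge: because only plain objects recurse, arrays like logging.redactKeys are replaced wholesale by later layers rather than concatenated. That keeps override semantics predictable, at the cost of restating the full list when you only want to append to it.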
Validate Early, Fail Fast
The merged configuration object should be validated at startup. Not at the point of use. Not lazily. At boot time. If a required value is missing, the application should refuse to start with a clear error message explaining exactly what's wrong.
"A configuration error discovered at startup is an inconvenience. A configuration error discovered at 3 AM in production is a crisis."
I use a schema validation library (like Zod in the Node.js ecosystem) to define the expected shape and types. The validator runs once, at startup, and the resulting typed config object is frozen and passed through dependency injection.
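As an illustration, here is a minimal sketch of that boot-time validation using Zod; the schema mirrors the defaults and secrets layers shown above, and the module name is hypothetical:
// config/validate.js — a minimal sketch of startup validation with Zod.
import { z } from 'zod';

const configSchema = z.object({
  server: z.object({
    port: z.number().int().positive(),
    host: z.string(),
    gracefulShutdownMs: z.number().int().nonnegative(),
  }),
  database: z.object({
    url: z.string().url(),
    pool: z.object({ min: z.number().int(), max: z.number().int() }),
    timeout: z.number().positive(),
    ssl: z.boolean(),
  }),
  auth: z.object({
    jwtSecret: z.string().min(1),
    oauthClientId: z.string().min(1),
  }),
  logging: z.object({
    level: z.enum(['debug', 'info', 'warn', 'error']),
    format: z.string(),
    redactKeys: z.array(z.string()),
  }),
});

// Runs once at boot; parse() throws a detailed error listing every bad field.
export const validateConfig = (raw) => Object.freeze(configSchema.parse(raw));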
In Practice
I've used this pattern in production systems ranging from a single Express server to a 12-service microservices deployment managed by Docker Compose. The pattern scales without modification because the complexity is in the data, not the mechanism.
The key lesson: treat your configuration like you treat your code. Give it structure. Give it types. Give it tests. And above all, make it composable—because the one thing you can guarantee about your deployment topology is that it will change.
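In that spirit, a sketch of the kind of test the config layer deserves, using Node's built-in test runner (the import path for deepMerge is hypothetical):
// test/merge.test.js — run with: node --test
import test from 'node:test';
import assert from 'node:assert/strict';
import { deepMerge } from '../config/merge.js'; // hypothetical path

test('later layers override, untouched keys inherit', () => {
  const merged = deepMerge(
    {},
    { db: { pool: { min: 2, max: 10 }, ssl: false } },
    { db: { pool: { max: 50 }, ssl: true } }
  );
  assert.deepEqual(merged.db, { pool: { min: 2, max: 50 }, ssl: true });
});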