Foundational Principles
Architecture for systems that need to outlast their original authors, frameworks, and assumptions — the timeless principles that survive every technology shift.
A trend-driven system follows the architectural fashion of the moment: today's microservices, yesterday's three-tier ESB, tomorrow's mesh-something-or-other. Each cycle requires a partial rewrite when the fashion shifts. The architecture has no opinion about anything except its current technology stack — and it shows.
A foundational system is built on principles that survive technology shifts. The codebase looks recognisable to architects from 1985 and 2025: clear module boundaries, explicit dependencies, conceptual integrity, decisions documented and revisited. Frameworks come and go; the design philosophy persists. When the team rewrites the persistence layer in five years, they preserve the principles and replace only the implementation.
The architectural shift is not "we use the latest tools." It is: we make decisions our successors can defend, change, and extend without rewriting the system.
Splitting code into files is not modularity. Modularity is hiding design decisions that are likely to change behind stable interfaces. One of the most-cited papers in software engineering — Parnas (1972) — said this fifty years ago. Most codebases still get it wrong: they split by technology layer (controllers, services, repositories) and call themselves modular, while every change still ripples through every layer.
Pick a recent change request that turned out to be larger than expected. How many modules did it touch? If the answer is "most of them," your modules are organised by file type, not by what changes together.
David Parnas's On the Criteria To Be Used in Decomposing Systems into Modules (1972) is the foundational text. Robert C. Martin's Single Responsibility Principle (the "S" in SOLID) operationalises it for object-oriented systems.
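Parnas's criterion can be sketched in a few lines. In this hypothetical example, the decision "how lines are stored" is hidden behind a stable interface, so swapping the representation later touches exactly one module:

```python
# Sketch of Parnas-style information hiding (names are illustrative).
# Callers depend only on the interface; the representation is a secret.

class LineStore:
    """Stable interface: callers see methods, never the data structure."""

    def __init__(self):
        # Hidden decision: lines live in a Python list. Replacing this
        # with a rope, a file, or a database changes nothing outside
        # this class.
        self._lines = []

    def append(self, line: str) -> None:
        self._lines.append(line)

    def line(self, index: int) -> str:
        return self._lines[index]

    def count(self) -> int:
        return len(self._lines)


store = LineStore()
store.append("first")
store.append("second")
print(store.count())   # 2
print(store.line(0))   # first
```

The module boundary sits around the decision likely to change (storage), not around a technology layer.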
The Dependency Inversion Principle says high-level policy should not depend on low-level mechanism — the mechanism should depend on the policy. In practice this means the volatile parts of the system (frameworks, databases, vendor APIs) depend on the stable parts (domain logic, business rules), never the reverse. This is what makes a system survive the inevitable replacement of its frameworks and databases.
Open a domain class at random. Does it import anything from your web framework, your ORM, or a vendor SDK? Each such import statement is a dependency flowing the wrong way.
Robert C. Martin's Clean Architecture (2012) is the canonical statement. Alistair Cockburn's Hexagonal Architecture (Ports and Adapters) (2005) is the same idea expressed geometrically. Both rest on Parnas's earlier work.
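A minimal sketch of the inversion in Python (all names are illustrative): the domain owns the port, the policy depends only on the port, and the adapter, the replaceable detail, depends on the domain rather than the reverse:

```python
from typing import Protocol


class OrderRepository(Protocol):
    """Port: owned by the domain layer. No ORM or SDK imports here."""
    def save(self, order_id: str, total: float) -> None: ...
    def total_of(self, order_id: str) -> float: ...


def apply_discount(repo: OrderRepository, order_id: str, rate: float) -> float:
    """High-level policy: depends only on the port, never a database."""
    discounted = repo.total_of(order_id) * (1 - rate)
    repo.save(order_id, discounted)
    return discounted


class InMemoryOrderRepository:
    """Adapter: the volatile detail. A Postgres or ORM adapter would
    replace this class without touching the policy above."""
    def __init__(self):
        self._totals = {}

    def save(self, order_id: str, total: float) -> None:
        self._totals[order_id] = total

    def total_of(self, order_id: str) -> float:
        return self._totals[order_id]


repo = InMemoryOrderRepository()
repo.save("o-1", 100.0)
print(apply_discount(repo, "o-1", 0.1))  # 90.0
```

Because `InMemoryOrderRepository` satisfies the protocol structurally, the domain module never needs to import the adapter's package at all.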
Fred Brooks called conceptual integrity the most important consideration in system design. A system that does fewer things consistently is more useful, more learnable, and more maintainable than one that does more things inconsistently. Every feature added in a way that conflicts with the system's design philosophy is a tax paid forever — by users learning special cases, by engineers maintaining exceptions, by future architects untangling the contradiction.
Could a senior engineer who has never seen the system predict, with reasonable accuracy, how a new feature should behave by reading the existing code? If not, the conceptual integrity has eroded — even if every individual feature works.
Fred Brooks, The Mythical Man-Month (1975), Chapter 4: "Aristocracy, Democracy, and System Design". Brooks argued that conceptual integrity requires architectural authority, not just process — a position still under debate.
Every undocumented assumption is a future bug looking for an excuse. The classes you don't name, the invariants you don't enforce, the contracts you don't write down — these become the source of incidents the on-call engineer can't reproduce. Foundational architectures surface assumptions in code, not in tribal knowledge.
Pick three production incidents from the last quarter. How many of them involved an assumption that was true in someone's head but not enforced anywhere in the system? That count is your implicit-assumption budget being spent.
Eric Evans, Domain-Driven Design (2003), Chapter 9 — "Making Implicit Concepts Explicit". The Pythonic principle "explicit is better than implicit" (PEP 20 — The Zen of Python) is the same principle in different vocabulary.
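One way to surface an assumption, sketched in Python with hypothetical domain names: the invariant is enforced at construction time, so no code path can hold an invalid value:

```python
from dataclasses import dataclass

# Illustrative whitelist; in a real system this would come from config.
SUPPORTED_CURRENCIES = {"EUR", "USD", "GBP"}


@dataclass(frozen=True)
class TransferAmount:
    value: int       # minor units (cents), avoiding float rounding
    currency: str

    def __post_init__(self):
        # The assumption that used to live in someone's head,
        # now enforced at the only place a value can be created.
        if self.value <= 0:
            raise ValueError("transfer amount must be positive")
        if self.currency not in SUPPORTED_CURRENCIES:
            raise ValueError(f"unsupported currency: {self.currency}")


ok = TransferAmount(2500, "EUR")   # valid by construction
try:
    TransferAmount(-1, "EUR")      # the implicit assumption, now loud
except ValueError as e:
    print(e)                       # transfer amount must be positive
```

Every function that accepts a `TransferAmount` now inherits the guarantee for free; the on-call engineer never has to reproduce "negative transfer" incidents.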
The purpose of architecture is not to be correct on day one; it is to be cheap to change on every day after. Decisions that are easy to reverse can afford to be made quickly with limited information. Decisions that are hard to reverse deserve the time and care of a major investment. Treating all decisions identically — either all rushed or all over-deliberated — wastes the team's most expensive resource: judgement.
Look at the last five major architectural decisions. Were they classified by reversibility before commitment? Or were they all treated as either trivial or terrifying — with nothing in between?
The "two-way doors" framing comes from Jeff Bezos's 2015 shareholder letter. Operationalised in Neal Ford, Rebecca Parsons, and Patrick Kua's Building Evolutionary Architectures (O'Reilly, 2017), which introduces architectural fitness functions as the verification mechanism. Michael Nygard's Documenting Architecture Decisions (2011) is the canonical pattern for capturing the decisions themselves.
Every novel technology in the stack is an "innovation token" spent. Tokens are scarce. They should be spent on what actually differentiates the business — not on the database, message bus, or deployment system. Boring technology is well-understood, has long-tail debugging support, has stable hiring pipelines, and has predictable failure modes. These are competitive advantages, not concessions.
Count the distinct database engines, message brokers, and language runtimes in your production system. For each one beyond the first, can a current engineer name the failure modes, replication semantics, and the on-call playbook? If not, you have technology you don't actually own.
Dan McKinley's Choose Boring Technology (2015) introduced the innovation-tokens framing. The principle has older roots in Frederick Brooks's "no silver bullet" essay (1986) and in Linus Torvalds's repeated insistence that boring infrastructure is what makes interesting software possible.
In the canonical foundational structure, dependencies flow from the volatile periphery (frameworks, vendors) inward toward the stable core (domain logic), with adapters mediating at every boundary and architectural governance gating decisions.
Building abstractions before understanding the second use case. The first use case is not data; the second is. Abstractions designed from a single example are usually wrong, expensive to undo, and lock in assumptions that don't generalise. The cost of a missing abstraction is small; the cost of the wrong abstraction is large.
Wait for the second concrete use case before extracting an abstraction. Use copy-paste once, even if it feels uncomfortable; the duplication makes the right abstraction visible. Sandi Metz's rule of thumb: "duplication is far cheaper than the wrong abstraction."
Letting "we'll fix it later" become the architecture. Every shortcut taken under deadline pressure compounds; eventually the architecture is whatever the deadlines allowed. The team didn't choose the design — they accepted it through inattention.
Architecture is decided continuously, not in big-bang reviews. Allocate explicit refactoring capacity: a fixed fraction of every cycle reserved for engineering health. Make the cost of architectural shortcuts visible — in dashboards, in retros, in the on-call burden — so the trade-off is conscious.
Treating frameworks as architecture. When the framework changes (and it will), the team rebuilds the system from scratch because the architecture lived inside the framework's choices, not above them. The codebase becomes a record of what was popular when each module was written.
Frameworks are tools, not foundations. The domain layer should not import the framework. New frameworks should be adopted as adapters, not as architecture. When a framework reaches end-of-life, the rewrite should touch the adapter layer only, not the core.
Applying GoF, microservices, or CQRS patterns mechanically because they appeared in a talk, without the context that made them appropriate. Patterns are answers to specific problems; without the problem, the pattern is just complexity.
Before adopting a pattern, articulate the problem it solves in your context. If you can't, you don't need the pattern yet. When in doubt, prefer the simpler structure; you can refactor toward a pattern when the need is concrete.
Important architectural choices made and forgotten — with no record of why. Six months later, a new engineer "fixes" a careful trade-off that was made deliberately, and the system regresses. The cost is not the original decision; it's the repeated relearning.
Capture decisions in ADRs at the moment they're made, with context, options considered, and rationale. Revisit ADRs when the context changes. The cost of an ADR is fifteen minutes; the cost of relitigating decisions for years is thousands of engineering hours.
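Nygard's ADR format is deliberately small: five sections, one page. A minimal template (the number and title are illustrative):

```markdown
# ADR 0012: Short noun-phrase title of the decision

## Status
Accepted (date). Links to superseding or superseded ADRs go here as context changes.

## Context
The forces at play: technical constraints, team realities, what is true of the system today.

## Decision
The choice, stated in full sentences: "We will ..."

## Consequences
What becomes easier, what becomes harder, and which costs are accepted knowingly.
```

Keeping ADRs in the repository, next to the code they govern, means the rationale travels with every clone and survives every wiki migration.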
Open your codebase tree. Are the top-level folders named by what changes (Booking, Pricing, Identity) or by what it's made of (Controllers, Services, Repositories)? The first decomposition makes change cheap; the second makes change expensive everywhere at once.
Search the domain layer for imports of your web framework, ORM, or vendor SDKs. Each one is a coupling between business logic and replaceable infrastructure. The domain should not know what HTTP is.
Pick a public class. Read its public methods. Could a caller use it without knowing the internal data structures, the database schema, or the algorithm? If not, the abstraction has leaked and changes propagate further than they should.
A design philosophy is what's true regardless of the feature being built — "we prefer composition," "we never call cross-context databases directly," "we surface invariants in types." Without a written philosophy, every team member invents their own and the system fragments.
ADRs capture context, options considered, and the rationale for the choice. They cost fifteen minutes when fresh. They save weeks when a future engineer would otherwise relitigate the decision blind, six months later, on the wrong evidence.
Two-way doors (the API name, the cache key format) can be made fast and revisited. One-way doors (the data model, the security boundary, the public contract) deserve broader review. Treating both alike either burns time on trivia or rushes the irreversible.
Postgres works. Your current cloud provider works. Your existing language works. A new technology should answer "what does this enable that the boring default cannot?" If the honest answer is "nothing critical," it's an innovation token spent on the wrong thing.
Walk a business workflow from request to response. Does the path through the code match how a domain expert would describe it? If the code path zigzags through unrelated layers, the conceptual model and the code have diverged — and the next change will be harder than the last.
Fitness functions are automated checks that prevent architectural erosion: "domain layer must not import infrastructure," "no cross-context database access," "p99 latency stays under 200ms." Without them, every architectural rule is enforced by goodwill — which doesn't survive deadlines.
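A fitness function can be as small as an ordinary test in CI. A sketch in Python, assuming the forbidden package prefixes and the domain directory layout are adapted to your codebase: it parses every file in the domain package and fails the build when one imports infrastructure.

```python
import ast
import pathlib

# Packages the domain layer must never import (illustrative list).
FORBIDDEN_PREFIXES = ("django", "sqlalchemy", "requests", "myapp.adapters")


def forbidden_imports(domain_dir: str) -> list[str]:
    """Return every domain-layer import that points at infrastructure."""
    violations = []
    for path in pathlib.Path(domain_dir).rglob("*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                if name.startswith(FORBIDDEN_PREFIXES):
                    violations.append(f"{path.name} imports {name}")
    return violations


# In CI this would run as a single assertion, e.g.:
#   assert forbidden_imports("src/myapp/domain") == []
```

Run as a test on every merge, the rule stops depending on a reviewer noticing the stray import; a violation fails the build the same way a broken unit test does.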
Architectures rot when only their authors review them. Schedule external review — a senior engineer from another team, an architect from another practice, an alumnus who knows the domain. Fresh eyes catch the assumptions you can no longer see.