👋 Hey {{first_name|there}},
Your team followed clean architecture to the letter, and now a single-field change touches seven files across four layers. Here's how to measure whether your abstractions are earning their keep.
Why this matters / where it hurts
A few years ago, I joined a team that had done everything "right." Hexagonal architecture. Ports and adapters. Domain layer isolated from infrastructure. The dependency arrows all pointed inward. The diagram on the wiki looked beautiful.
Then someone asked us to add a field to an API response.
One new field. We touched the domain model, the domain service, the application service, the response DTO, the mapper between domain and DTO, the integration test fixtures, and the API contract test. Seven files. The pull request took two days to review because reviewers had to verify that the same value threaded correctly through four layers of transformation. And the thing is, nobody questioned it. We'd internalized the cost as "the price of clean code."
That price has a name. I call it the abstraction tax: the operational overhead your team pays every time they navigate, debug, or change code that passes through layers which exist for architectural correctness rather than actual need. In Lesson #34 on the distributed monolith (https://www.techarchitectinsights.com/p/the-distributed-monolith-is-worse-than-the-monolith), we talked about how structural decisions made for "good reasons" can quietly compound into something worse than the problem they solved. The abstraction tax is the same pattern, one level down. It lives inside your services, not between them.
🧭 The shift
From: More abstraction layers mean better architecture.
To: Every layer is a liability until it proves it's an asset.
Clean architecture, hexagonal architecture, onion architecture. They're not wrong. But they're patterns, not laws. The original authors described them as tools for specific contexts: large teams, long-lived systems with genuinely distinct deployment boundaries, domains where business logic must be tested in complete isolation from infrastructure. Somewhere along the way, the industry started applying them as defaults. Every new service gets four layers on day one, whether it needs them or not.
The problem isn't abstraction itself. It's unexamined abstraction: layers that exist because a blog post said they should, not because your team's actual pain demanded them. And those layers have real costs that show up in production, not in architecture diagrams.
Treat every indirection layer as carrying a recurring cost: onboarding time, debugging hops, and change amplification. If you can't name the cost a layer prevents, it probably isn't preventing one.
Defer layers until pain demands them. A service that starts with two layers and grows a third when complexity justifies it will almost always outperform one that starts with five and never removes the two it didn't need.
Evaluate abstractions by operational impact, not by how satisfying the folder structure looks in an IDE.
📘 New Career Guide
I just finished a major update to the From Developer to Architect career guide. It now includes a self-assessment rubric, a week-by-week 90-day growth plan, architecture artifact templates, and interview prep frameworks. If you're actively working toward a Staff, Tech Lead, or Architect role, this is the structured roadmap.
Free download here: https://www.techarchitectinsights.com/from-developer-to-architect-free-career-guide
🧰 Tool of the week: Abstraction Cost Scorecard
Audit whether your layers earn their keep.
Pick one service or module. Walk through each item. Score each dimension 1 (low cost) to 5 (high cost). A total above 21 means your abstractions are likely costing more than they're saving.
Change amplification - Count the files touched for your last five single-concept changes (one field, one validation rule, one endpoint). If the median is above 4 files, score 4 or 5.
Debugging hop count - Trace a recent production error from the log line to the root cause. Count how many layers, mappers, or transformations you had to step through. More than 3 hops for a straightforward error scores high.
Onboarding friction - Ask your most recent team joiner: "How long before you could confidently make a change end-to-end?" If the answer involves memorizing layer conventions before writing useful code, that's a cost.
Mapper proliferation - Count the object-to-object mappers in the module. Each mapper that exists solely to cross a layer boundary (not to reshape data for a genuinely different consumer) is tax.
Test indirection - Review your test suite. Are you writing tests that primarily verify that data passes unchanged through a layer? If more than 20% of your tests are "passthrough verification," score high.
Deployment coupling - Check if your abstraction boundaries actually enable independent deployment or independent testing. If all layers always deploy together and are tested together, the boundary is ceremonial.
Cognitive load per change - Ask the team: "For a routine change, how many architectural concepts do I need to hold in my head simultaneously?" If the answer requires understanding ports, adapters, use cases, domain services, and application services just to add an endpoint, that's a load without leverage.
Scoring:
7-14: Your abstractions are probably pulling their weight. Revisit annually.
15-21: Some layers may be costing more than they prevent. Investigate the highest-scoring items.
22-35: You're paying a significant abstraction tax. Prioritize simplification.
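If you want to keep the arithmetic honest, the scorecard fits in a few lines of code. Here's a minimal sketch in Python; the dimension keys and the `change_amplification_score` thresholds are my own assumptions (loosely following the "median above 4 files scores 4 or 5" rule of thumb above), and the file counts themselves could come from something like `git log --name-only` over recent single-concept commits:

```python
def change_amplification_score(median_files: float) -> int:
    # Map a median files-per-change figure onto the 1-5 scale.
    # Thresholds are an assumption, not part of the scorecard itself.
    if median_files <= 2:
        return 1
    if median_files <= 3:
        return 2
    if median_files <= 4:
        return 3
    if median_files <= 5:
        return 4
    return 5

def classify(scores: dict[str, int]) -> tuple[int, str]:
    """Sum the seven dimension scores and bucket them per the scoring bands."""
    if len(scores) != 7 or not all(1 <= s <= 5 for s in scores.values()):
        raise ValueError("expected seven dimensions, each scored 1-5")
    total = sum(scores.values())
    if total <= 14:
        band = "pulling their weight - revisit annually"
    elif total <= 21:
        band = "investigate the highest-scoring items"
    else:
        band = "significant abstraction tax - prioritize simplification"
    return total, band

# Illustrative numbers only, not a prescription.
scores = {
    "change_amplification": change_amplification_score(6),  # median of 6 files
    "debugging_hops": 3,
    "onboarding_friction": 4,
    "mapper_proliferation": 4,
    "test_indirection": 3,
    "deployment_coupling": 5,
    "cognitive_load": 3,
}
total, band = classify(scores)
print(total, band)  # 27 significant abstraction tax - prioritize simplification
```

The point of writing it down isn't precision, it's forcing a number onto each dimension so the team argues about the scores instead of about vibes.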
🔍 In practice: The order service that needed a diet
Scenario: An order service built with a full hexagonal architecture. Three developers. One bounded context. No plans to swap the database or change the messaging infrastructure. The team had been living with the structure for 18 months.
Scope: The order service only, not the broader platform.
Context: Team of 3, single Postgres database, RabbitMQ for events, deployed as one unit.
We ran the scorecard during a retro after a sprint where 60% of the story points went to "simple" changes that touched too many files.
Change amplification: Score 5. Median files per single-concept change was 6. Adding a discount field to the order response required touching the domain entity, domain service, application service, response DTO, two mappers, and contract tests.
Mapper proliferation: Score 4. We had 11 mappers. Seven of them mapped objects with identical fields between layers.
Deployment coupling: Score 5. Every layer shipped together, every single time. The "independent" boundaries had never once been independent of anything.
Debugging hop count: Score 3. Not terrible, but stack traces were noisy. Finding the actual failure meant mentally skipping adapter layers that just delegated.
Total: 27. Well into the red.
The tradeoff we accepted: We didn't flatten everything. We kept the domain layer separate from the API layer because the domain validation logic was genuinely complex and worth testing in isolation. But we merged the application service into the domain service, eliminated five of the seven passthrough mappers, and let the API layer reference domain objects directly for reads. Purists would flinch. Our sprint velocity went up roughly 30%.
Result: Median files per change dropped from 6 to 3. New team member onboarding (measured by "time to first confident PR") went from 3 weeks to about 10 days.
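For concreteness, here's the shape of the passthrough mappers we deleted. This is an illustrative sketch, not the team's actual code; the `Order` and `OrderResponse` names are hypothetical. When the "mapping" is a field-for-field copy, the boundary it crosses is ceremonial:

```python
from dataclasses import dataclass, asdict

# Before: two classes with identical fields, plus a mapper whose only
# job is to copy one into the other at the layer boundary.
@dataclass
class Order:                 # domain object
    order_id: str
    total_cents: int
    discount_cents: int

@dataclass
class OrderResponse:         # API-layer twin of Order, field for field
    order_id: str
    total_cents: int
    discount_cents: int

def to_response(order: Order) -> OrderResponse:
    # Pure passthrough: no reshaping, no renaming, no hiding of fields.
    return OrderResponse(**asdict(order))

# After: for reads, the API layer serializes the domain object directly.
# The mapper and the twin class disappear; a dict (or your framework's
# serializer) does the work.
def order_to_json_dict(order: Order) -> dict:
    return asdict(order)
```

The litmus test: if deleting the mapper and serializing the domain object directly changes nothing observable to the consumer, it was tax. Mappers that rename, aggregate, or hide fields for a genuinely different consumer stay.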
✅ Do this / ❌ Avoid this
Do this:
Run the scorecard when a service is older than 6 months and the team reports that "simple changes feel slow."
Merge layers that always deploy together and have no independent consumers. A boundary without independence is just ceremony.
Keep abstractions that protect genuinely complex business logic or enable genuinely separate testing and deployment. Not every layer is taxed.
Avoid this:
Adding layers at project kickoff "because we might need them later." You're pre-paying tax on complexity that may never arrive.
Treating architecture patterns as identity. "We're a hexagonal architecture team" is a warning sign. You're a team that solves problems. Pick the structure that fits the problem.
Removing abstractions without measuring first. The scorecard exists so that simplification is a decision, not a vibe.
🎯 This week's move
Pick one service your team owns. Run the Abstraction Cost Scorecard against it. Write down the total.
Identify the single highest-scoring dimension. Discuss it in your next team sync.
If you score above 21, draft a one-paragraph proposal for the simplest layer you could merge or remove.
By the end of this week, aim to: Have a scorecard result and one specific simplification candidate written down, even if you don't act on it yet.
👋 Wrapping up
Every abstraction is a bet that the flexibility it provides will outweigh the friction it creates.
Most codebases have at least one layer that lost that bet a long time ago. Nobody noticed because the cost is diffuse: a few extra minutes per change, a few extra files per review, a slightly longer ramp for new joiners.
Measure it. Then decide.
Help a friend think like an architect
Know someone making the jump from developer to architect? Forward this email or share your personal link. When they subscribe, you unlock rewards.
🔗 Your referral link: {{rp_refer_url}}
📊 You've referred {{rp_num_referrals}} so far.
Next unlock: {{rp_next_milestone_name}} → {{rp_num_referrals_until_next_milestone}} referrals to go
View your referral dashboard
P.S. I’m still working on two new rewards. If there’s something you’d like to see, let me know 😉
⭐ Good place to start
I just organized all 40 lessons into four learning paths. If you've missed any or want to send a colleague a structured starting point, here's the page.
Thanks for reading.
See you next week,
Bogdan Colța
Tech Architect Insights