Introduction: The Problem Everyone Pretends Is “Just Hard”
Microservices architecture has been marketed for over a decade as the cure for monolith pain: independent deployments, autonomous teams, and infinite scalability. The uncomfortable truth is that most microservices systems fail not because the idea is wrong, but because teams fundamentally misunderstand the relationship between modularity and service granularity. They design services as deployment units first and architectural units second, which is backwards. The result is a distributed monolith with network latency, operational overhead, and debugging nightmares layered on top.
At its core, modularity is about managing complexity through boundaries. This is not a microservices concept; it predates cloud computing by decades. Parnas introduced information hiding in 1972, and principles like high cohesion and low coupling were already well established long before Docker existed. Microservices did not replace these ideas; they merely raised the cost of getting them wrong. When your module boundary is also a network boundary, every mistake becomes slower, more expensive, and harder to reverse.
The gap appears when teams conflate “small services” with “good design”. Service granularity is treated as a sizing problem rather than a semantic one. You end up with services split by CRUD operations, database tables, or UI screens, while the underlying domain logic is scattered across the system. The brutal reality is this: microservices amplify architectural discipline, they do not compensate for its absence. If your modular thinking is weak, microservices will expose it quickly and publicly.
Modularity First: Architecture Is Not a Deployment Diagram
Modularity is about reasoning, not runtime. A well-designed module encapsulates a cohesive set of responsibilities, exposes a stable interface, and hides volatile implementation details. This applies whether the module lives inside a monolith, a package, or a remote service. Eric Evans' Domain-Driven Design makes this explicit with bounded contexts: a boundary where a specific domain model is consistent and meaningful. Bounded contexts are modularity tools, not deployment instructions, and that distinction is routinely ignored.
When teams start by drawing boxes labeled “User Service”, “Order Service”, and “Payment Service”, they are already skipping the hard part. Those labels sound domain-driven, but without deep modeling they are usually just nouns stolen from the database schema. True modularity requires discovering where invariants live, where consistency is required, and where change is frequent. Martin Fowler has repeatedly warned that premature microservices adoption leads to excessive coupling through APIs instead of method calls. The cost is not theoretical; it shows up as cascading failures and versioned endpoints no one dares to delete.
A modular architecture can exist entirely inside a monolith. In fact, many successful systems (Shopify being a well-documented example) deliberately stayed monolithic while enforcing strict internal modularity. The lesson is not “monoliths are better”, but that modularity must be validated independently of service boundaries. If you cannot clearly explain why two components must evolve independently, you have no architectural justification to split them into separate services.
Service Granularity: The Hidden Cost of Being “Too Small”
Service granularity answers a different question: how much functionality should live behind a single network boundary? This is where good intentions often turn into architectural self-harm. Overly fine-grained services increase coordination costs, latency, and operational complexity. Each service adds CI pipelines, observability requirements, deployment risks, and on-call surface area. Amazon's famous “two-pizza teams” are often quoted, but rarely paired with the equally important requirement: strong internal ownership and clear service contracts.
The fallacy is assuming that smaller services automatically improve agility. In reality, small services that are tightly coupled force synchronized deployments through informal channels: Slack messages, undocumented assumptions, and brittle integration tests. The coupling did not disappear; it just moved from code to people and processes. Sam Newman explicitly calls this out in Building Microservices, noting that distributed systems punish chatty communication and unclear ownership far more than monoliths ever did.
A practical rule emerges from real-world systems: service boundaries should align with business capabilities, not technical layers. A service that owns a complete capability can change internally without coordinating with half the organization. A service that owns only part of a workflow becomes a bottleneck disguised as modularity. If your services cannot be reasoned about independently by a senior engineer reading only their API and documentation, they are too granular.
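The difference between an entity-shaped boundary and a capability-shaped one can be made concrete. The sketch below is hypothetical (the names `OrderEntityApi`, `FulfillmentCapability`, and `InMemoryFulfillment` are illustrative, not from any real system): the entity-style interface leaks the workflow onto every caller, while the capability-style interface keeps the invariant behind the boundary.

```typescript
// Entity-shaped boundary: exposes raw state, so every caller must know
// the legal status transitions. The workflow leaks out of the service.
interface OrderEntityApi {
  getStatus(orderId: string): string;
  setStatus(orderId: string, status: string): void;
}

// Capability-shaped boundary: callers express intent; the invariant
// ("shipped orders cannot be cancelled") lives behind the boundary.
interface FulfillmentCapability {
  ship(orderId: string): void;
  cancel(orderId: string): { cancelled: boolean; reason?: string };
}

class InMemoryFulfillment implements FulfillmentCapability {
  private status = new Map<string, string>();

  ship(orderId: string): void {
    this.status.set(orderId, "shipped");
  }

  cancel(orderId: string): { cancelled: boolean; reason?: string } {
    // The business rule is enforced here, once, instead of in every caller.
    if (this.status.get(orderId) === "shipped") {
      return { cancelled: false, reason: "already shipped" };
    }
    this.status.set(orderId, "cancelled");
    return { cancelled: true };
  }
}
```

With the capability interface, a caller never needs to know which statuses exist or which transitions are legal; that knowledge can change without coordinating with consumers.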
Where Modularity and Granularity Collide (and Usually Break)
The collision happens when a single conceptual module is split across multiple services. This often occurs with shared domain logic such as pricing rules, eligibility checks, or state machines. Teams extract “common” functionality into separate services to avoid duplication, unintentionally creating high-frequency, high-coupling dependencies. The result is worse than duplication: a central service that everyone depends on, evolves slowly, and fails catastrophically.
This anti-pattern is well documented: practitioners describe shared services degenerating into a “distributed big ball of mud”, extending Foote and Yoder's famous phrase across the network. The core issue is misunderstanding modular reuse. Not all reuse should be runtime reuse. Sometimes copy-paste with clear ownership is cheaper and safer than a shared abstraction behind an API. This is deeply uncomfortable for engineers trained to avoid duplication at all costs, but distributed systems change the economics of reuse.
Another breaking point is data ownership. Modularity demands that a module owns its data. Granularity determines whether that ownership crosses process boundaries. When multiple services write to the same database or rely on synchronous calls to maintain consistency, the architecture is already compromised. The CAP theorem is not an excuse here; it is a constraint. If your business requires strong consistency across a boundary, that boundary should probably not be a service boundary.
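Data ownership can be enforced structurally, not just by convention. A minimal sketch, assuming a hypothetical `OrdersModule` with an in-memory store standing in for its private database: internal row shape stays invisible, and the only way out is through the module's public contract.

```typescript
// The only type other modules are allowed to see.
interface OrderSummary {
  id: string;
  total: number;
}

class OrdersModule {
  // Private storage: no other module can reach this "table" directly.
  // internalFlags stands in for volatile columns the owner may change freely.
  private rows = new Map<
    string,
    { id: string; total: number; internalFlags: string[] }
  >();

  place(id: string, total: number): void {
    this.rows.set(id, { id, total, internalFlags: [] });
  }

  // Cross-boundary reads go through the public contract, never the storage.
  summarize(id: string): OrderSummary | undefined {
    const row = this.rows.get(id);
    return row ? { id: row.id, total: row.total } : undefined;
  }
}
```

If two modules cannot live with this arrangement, because one needs transactional writes into the other's rows, that is evidence the boundary should not become a service boundary.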
Practical Strategies to Align Modularity with Service Boundaries
The first strategy is unglamorous: design modular monoliths first. This is not a step backward; it is a controlled environment for validating boundaries. Tools like package-by-feature, hexagonal architecture, or vertical slices force you to confront coupling early. Once a module proves stable, cohesive, and independently evolvable, promoting it to a service becomes a mechanical decision rather than a leap of faith.
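A package-by-feature module inside a monolith can be shaped so that promotion to a service is mechanical. The sketch below is hypothetical (the `billing` feature, `BillingApi`, and `createBilling` are illustrative names), with the file layout collapsed into one listing via comments; in a real repository only the public surface would be re-exported from the feature's entry point.

```typescript
// billing/api.ts — the module's public surface (already service-ready).
export interface BillingApi {
  charge(customerId: string, amountCents: number): ChargeResult;
}
export type ChargeResult =
  | { ok: true; receiptId: string }
  | { ok: false; error: string };

// billing/internal.ts — volatile detail, never imported outside billing/.
class LedgerEntry {
  constructor(
    public customerId: string,
    public amountCents: number
  ) {}
}

// billing/index.ts — wires internals behind the stable interface.
export function createBilling(): BillingApi {
  const ledger: LedgerEntry[] = [];
  return {
    charge(customerId, amountCents) {
      if (amountCents <= 0) {
        return { ok: false, error: "invalid amount" };
      }
      ledger.push(new LedgerEntry(customerId, amountCents));
      return { ok: true, receiptId: `r-${ledger.length}` };
    },
  };
}
```

Because callers depend only on `BillingApi`, swapping the in-process implementation for an HTTP client later changes wiring, not consumers.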
Second, use contracts to test modularity, not just services. Consumer-driven contract testing (for example, Pact) is often applied after services exist. A more disciplined approach is to define contracts at the module level before distribution. If a contract is hard to specify without leaking internals, the module is not well-formed. This mirrors what DDD calls “explicit boundaries” and prevents accidental coupling disguised as convenience.
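The module-level contract idea can be sketched as a plain in-process test, a minimal analogue of what a tool like Pact does once the boundary is remote. All names here (`QuoteInput`, `QuoteProvider`, `verifyContract`) are hypothetical: the point is that the consumer pins down exactly the behavior it relies on, and nothing more, before any network exists.

```typescript
interface QuoteInput {
  basePrice: number;
  isVip: boolean;
}

interface QuoteProvider {
  calculatePrice(input: QuoteInput): number;
}

// The consumer's contract: the precise cases it depends on.
const consumerExpectations: Array<{ input: QuoteInput; expected: number }> = [
  { input: { basePrice: 100, isVip: false }, expected: 100 },
  { input: { basePrice: 100, isVip: true }, expected: 90 },
];

// Any provider implementation can be verified against the contract,
// whether it runs in-process today or behind HTTP tomorrow.
function verifyContract(provider: QuoteProvider): boolean {
  return consumerExpectations.every(
    ({ input, expected }) => provider.calculatePrice(input) === expected
  );
}

const provider: QuoteProvider = {
  calculatePrice: ({ basePrice, isVip }) => (isVip ? basePrice * 0.9 : basePrice),
};
```

If writing `consumerExpectations` forces you to mention the provider's internals, the module boundary is leaking and is not ready to become a service.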
Third, treat service size as an outcome, not a goal. Measure change frequency, deployment independence, and incident blast radius. If two services always change together, deploy together, and fail together, they are lying about being separate. Merge them. Brutal honesty here saves years of maintenance cost.
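Co-change is measurable. A minimal sketch, assuming commit data has already been extracted (for example from `git log --name-only`) into a list of commits with touched file paths; `coChangeRatio` is a hypothetical helper, not a standard tool.

```typescript
type Commit = { files: string[] };

// Fraction of commits touching serviceA that also touch serviceB.
// A ratio near 1 over a long history suggests the two are one module.
function coChangeRatio(commits: Commit[], serviceA: string, serviceB: string): number {
  const touches = (c: Commit, svc: string) =>
    c.files.some((f) => f.startsWith(svc + "/"));
  const touchedA = commits.filter((c) => touches(c, serviceA));
  if (touchedA.length === 0) return 0;
  const both = touchedA.filter((c) => touches(c, serviceB)).length;
  return both / touchedA.length;
}
```

Running this over a year of history for every service pair is a cheap, honest audit: pairs with high co-change and synchronized deploys are merge candidates, whatever the architecture diagram claims.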
Code Example: Modular Boundary Before a Service Boundary
Below is a simplified TypeScript example illustrating a domain module that is internally cohesive and externally explicit. This is service-ready without being a service yet.
```typescript
// pricing/types.ts
// Minimal supporting types (Money, PricingInput) defined here so the
// module compiles standalone; the original snippet assumed they existed.
export class Money {
  constructor(public readonly amount: number) {}

  multiply(factor: number): Money {
    return new Money(this.amount * factor);
  }
}

export interface PricingInput {
  basePrice: Money;
  isVip: boolean;
}

// pricing/PricingPolicy.ts
export interface PricingPolicy {
  calculatePrice(input: PricingInput): Money;
}

// pricing/StandardPricingPolicy.ts
export class StandardPricingPolicy implements PricingPolicy {
  calculatePrice(input: PricingInput): Money {
    // VIP customers get a flat 10% discount; everyone else pays base price.
    if (input.isVip) {
      return input.basePrice.multiply(0.9);
    }
    return input.basePrice;
  }
}

// pricing/PricingService.ts
export class PricingService {
  constructor(private policy: PricingPolicy) {}

  price(input: PricingInput): Money {
    return this.policy.calculatePrice(input);
  }
}
```
If this module is hard to extract into a service later, the problem is not microservices—it is the module design.
The 80/20 Reality: What Actually Delivers Most of the Value
Roughly 80% of microservices pain comes from 20% of bad decisions. The biggest offender is premature distribution. Teams optimize for hypothetical scale instead of current complexity. The second is ignoring domain boundaries in favor of organizational charts or UI components. The third is treating APIs as implementation details instead of long-lived contracts.
Conversely, the highest leverage actions are surprisingly few. Invest deeply in domain modeling. Enforce module ownership and data ownership. Delay service extraction until independence is proven, not assumed. These practices are boring, slow, and deeply effective. They do not trend on social media, but they consistently show up in postmortems as the things teams wish they had done earlier.
Key Takeaways: Five Actions That Actually Work
- Start with modularity inside a monolith and prove boundaries before distributing them.
- Align services with business capabilities, not entities, tables, or UI screens.
- Avoid shared domain services; prefer clear ownership even if it means duplication.
- Let deployment independence, not ideology, justify service boundaries.
- If two services change together, merge them without guilt.
Conclusion: Microservices Don't Fix Architecture, They Expose It
Aligning modularity and service granularity is not a tooling problem, a cloud problem, or a scaling problem. It is an architectural discipline problem. Microservices are brutally honest: they turn every unclear boundary into latency, every leaky abstraction into outages, and every organizational shortcut into technical debt with interest. The architecture you ship is the architecture you believe in, whether you admit it or not.
The uncomfortable but empowering truth is that most teams already know how to fix this. The principles are old, well-documented, and repeatedly validated in production systems. What is required is restraint, patience, and the willingness to delay “cool” solutions until the fundamentals are solid. Do that, and microservices become a force multiplier. Skip it, and no amount of Kubernetes YAML will save you.