Introduction: Why BFF + Monorepos Is Both Powerful and Dangerous
The Backend for Frontend (BFF) pattern sounds deceptively simple: instead of forcing all clients to consume the same generic backend APIs, you create a backend tailored to each frontend's needs. Web, mobile, TV apps—each gets exactly the data shape, latency profile, and orchestration logic it requires. The idea grew out of device-specific API work at companies like Netflix, was named at SoundCloud, and was later documented by Sam Newman and others as part of the microservices conversation. In isolation, BFF is a pragmatic response to frontend diversity. In a monorepo, however, it becomes a loaded architectural decision.
Monorepos promise shared tooling, unified dependency management, and easier refactoring across boundaries. Combine that with BFF, and suddenly frontend and backend teams can iterate together at high speed. That's the upside. The downside is that you are one careless abstraction away from recreating a distributed monolith—just neatly organized in folders. Brutal honesty: most teams don't fail at BFF because the pattern is flawed; they fail because they underestimate how quickly boundaries erode when everything lives in the same repository.
This article assumes you already understand what a monorepo is and why teams adopt it (Google, Meta, and Microsoft have documented this extensively). The focus here is practical: how to implement BFFs in a monorepo without turning your architecture into a ball of mud, while staying aligned with real-world constraints like CI/CD speed, team ownership, and long-term maintainability.
What Backend for Frontend Really Means (And What It Doesn't)
At its core, a BFF is an application-layer backend whose primary responsibility is to serve a specific frontend. It aggregates data from downstream services, applies presentation-oriented transformations, and encapsulates frontend-specific logic. According to Sam Newman's Building Microservices (O'Reilly), the BFF should not be a dumping ground for business logic. That distinction matters more in a monorepo than anywhere else, because code proximity creates temptation.
A common misconception is that BFFs are just “thin proxies.” In practice, they are orchestration layers. They decide which services to call, in which order, how to handle partial failures, and how to shape responses for performance. Netflix has publicly shared that their BFFs exist to reduce chattiness and over-fetching for clients, not to replace core domain services. That principle still holds, regardless of repo structure.
What a BFF should not be is a shortcut to bypass domain boundaries. If your BFF starts writing directly to multiple databases or re-implementing business rules “just for convenience,” you are breaking the contract. In a monorepo, this usually happens because the code is right there, imports are easy, and reviews are rushed. The pattern only works if you enforce architectural discipline that is stronger than the convenience of shared code.
Monorepos Change the Rules: The Hidden Trade-offs
Monorepos remove friction. That's the point. One repo, one dependency graph, one set of build tools. But friction is sometimes what prevents architectural decay. When your BFF and domain services live side by side, the cost of crossing layers drops to nearly zero. This is where many teams fool themselves into thinking they are “moving faster,” while actually accumulating long-term coupling.
Research from Google's monorepo practices shows that scale works only because of strict ownership, tooling, and automated enforcement. Without that, monorepos amplify bad decisions. In the context of BFFs, the biggest risk is accidental reuse. Developers start importing domain service internals directly into the BFF because “it's faster than calling an API.” At that moment, your BFF stops being a boundary and becomes a leaky abstraction.
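This kind of boundary can be automated rather than policed in code review. As one illustration, workspaces built on Nx can use the `@nx/enforce-module-boundaries` ESLint rule to fail the build when a BFF imports service internals. The tags and project names below are illustrative assumptions, not a prescribed setup.

```typescript
// eslint.config.ts — illustrative sketch, assuming an Nx workspace where
// each project declares tags in its project.json (e.g. "tags": ["type:bff"]).
export default [
  {
    files: ["**/*.ts"],
    rules: {
      "@nx/enforce-module-boundaries": [
        "error",
        {
          depConstraints: [
            // BFFs may import shared packages, never service internals.
            {
              sourceTag: "type:bff",
              onlyDependOnLibsWithTags: ["type:shared"],
            },
            // Services may depend on shared packages and published
            // contracts, not on BFF code.
            {
              sourceTag: "type:service",
              onlyDependOnLibsWithTags: ["type:shared", "type:contract"],
            },
          ],
        },
      ],
    },
  },
];
```

With this in place, "it's faster than calling an API" becomes a red CI check instead of a silent architectural regression.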
Another under-discussed trade-off is deployment independence. BFFs are often deployed alongside their frontend clients. In a monorepo, teams sometimes align release cycles too tightly, eliminating the ability to evolve independently. That might feel efficient early on, but it reduces resilience. Martin Fowler has repeatedly emphasized that independent deployability is a core property of effective service architectures. A monorepo does not remove that requirement—it just makes it easier to violate.
Structuring BFFs in a Monorepo: Proven Layouts That Scale
A monorepo with BFFs should make boundaries obvious, not implicit. The repository structure is your first line of defense. A proven approach is to treat each BFF as a first-class application, not a library or a folder under “backend.”
```
/apps
  /web-bff
  /mobile-bff
  /admin-bff
/services
  /user-service
  /payment-service
  /catalog-service
/packages
  /shared-types
  /shared-logging
  /shared-auth
```
This layout mirrors how teams think about ownership and deployment. Each BFF is deployable on its own, has its own API surface, and communicates with services through explicit contracts—often HTTP or gRPC, even if everything lives in the same repo. Yes, that feels redundant. It's also intentional. Architectural integrity is more important than shaving milliseconds off local calls.
Shared packages deserve special caution. Shared types and utilities are fine, but shared “business helpers” are a red flag. Netflix and Uber have both documented how overgrown shared libraries become de facto coupling mechanisms. In BFFs, shared code should be boring: logging, tracing, auth primitives. Anything that encodes domain behavior belongs in a service, not in a shared package that every BFF imports.
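To make "boring" concrete, here is a sketch of what belongs in a shared package: wire-level types and purely structural helpers, nothing that encodes domain behavior. All names are illustrative.

```typescript
// packages/shared-types/src/user.ts — illustrative sketch
// Wire-level shape of a user as returned by user-service.
export interface UserDto {
  id: string;
  name: string;
  email: string;
}

// A purely structural helper is fine here. A "calculateDiscount" or
// "canCheckout" function would not be — that is domain behavior and
// belongs in a service, not in a package every BFF imports.
export function displayName(user: UserDto): string {
  return user.name.trim() || user.email;
}
```

The litmus test: if changing the function would require a product or domain conversation, it does not belong in `/packages`.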
API Contracts and Versioning: Don't Let Proximity Kill Discipline
One of the strongest arguments for BFFs is that they decouple frontend evolution from backend constraints. Ironically, monorepos make it easier to skip versioning entirely. Developers change a service response and update the BFF in the same commit. Everything builds, tests pass, and the change ships. Short-term win, long-term loss.
API contracts should still be explicit. OpenAPI or GraphQL schemas are not optional documentation; they are enforcement tools. Even if your BFF and service are in the same repo, treat them as if they were owned by different teams. Consumer-driven contract testing, such as Pact, exists precisely to prevent silent breakage. This is not theoretical—teams at Spotify and ThoughtWorks have published case studies showing that contract testing reduces integration failures dramatically.
Versioning doesn't have to be heavyweight. Semantic versioning at the API level, combined with deprecation windows, is usually enough. The key is psychological: developers must feel that changing a service contract has a cost. Monorepos remove natural friction; contracts put some of it back where it belongs.
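The idea behind consumer-driven contracts can be illustrated without any tooling: the BFF records exactly which fields it consumes, and a check fails when a provider response stops satisfying them. This is a hand-rolled sketch of what tools like Pact automate properly; the field names are illustrative.

```typescript
// apps/web-bff/test/userContract.ts — illustrative sketch of a
// consumer-driven contract check (Pact does this for real).
// The fields this BFF actually reads from user-service responses.
const consumedFields = ["id", "name"] as const;

// Returns the consumed fields missing from a provider response.
// A non-empty result means a silent breaking change was caught.
export function checkContract(response: Record<string, unknown>): string[] {
  return consumedFields.filter((field) => !(field in response));
}
```

In CI, this would run against the provider's current sample response, so removing `name` from user-service fails the consumer's build instead of the consumer's users.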
Implementation Example: A Lean BFF in TypeScript
Below is a simplified example of a web BFF aggregating user and order data. Notice that it calls services over HTTP, even though they live in the same repo.
```typescript
// apps/web-bff/src/routes/userDashboard.ts
import express from "express";
import { fetchUser } from "../clients/userService";
import { fetchOrders } from "../clients/orderService";

const router = express.Router();

router.get("/dashboard/:userId", async (req, res) => {
  const { userId } = req.params;

  // Call downstream services in parallel — over HTTP, not via imports.
  const [user, orders] = await Promise.all([
    fetchUser(userId),
    fetchOrders(userId),
  ]);

  // Shape the response for the dashboard view. No business rules here.
  res.json({
    id: user.id,
    name: user.name,
    recentOrders: orders.slice(0, 5),
  });
});

export default router;
```
This looks almost boring—and that's the point. The BFF orchestrates, shapes, and optimizes data for the frontend. It does not calculate discounts, enforce business rules, or persist state. Those responsibilities stay in the services. If you feel tempted to “just reuse” service internals here, that's a smell worth paying attention to.
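One orchestration concern the happy path glosses over is partial failure. The BFF is the right place to decide the degradation policy. Below is a sketch of that decision, with the HTTP clients stubbed as local functions purely for illustration: the dashboard still renders when order-service is down, but fails if the user lookup fails.

```typescript
// Illustrative local stubs standing in for real HTTP clients.
type User = { id: string; name: string };
type Order = { id: string };

async function fetchUser(userId: string): Promise<User> {
  return { id: userId, name: "Ada" };
}
async function fetchOrders(_userId: string): Promise<Order[]> {
  throw new Error("order-service timeout"); // simulate a downstream failure
}

// The BFF owns the degradation policy: orders are optional, the user is not.
export async function buildDashboard(userId: string) {
  const [userResult, ordersResult] = await Promise.allSettled([
    fetchUser(userId),
    fetchOrders(userId),
  ]);
  if (userResult.status === "rejected") {
    throw new Error("user is required for the dashboard");
  }
  return {
    id: userResult.value.id,
    name: userResult.value.name,
    recentOrders:
      ordersResult.status === "fulfilled" ? ordersResult.value.slice(0, 5) : [],
    ordersUnavailable: ordersResult.status === "rejected",
  };
}
```

Note that this is still presentation logic: choosing what the frontend sees when a dependency fails, not re-implementing what the dependency does.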
CI/CD and Ownership: Where Most Teams Quietly Fail
Architecture lives or dies in the pipeline. In monorepos, CI/CD must reinforce boundaries, not flatten them. A common anti-pattern is running all tests and deploying everything on every change. That approach doesn't scale and pushes teams to blur responsibilities. Instead, use affected-based builds (as documented by Nx and Bazel) so only changed BFFs and services are tested and deployed.
Ownership is equally critical. Each BFF should have a clearly defined owning team. CODEOWNERS files are not bureaucracy; they are guardrails. When everyone owns everything, nobody owns the architecture. Google's monorepo success stories repeatedly highlight ownership and automated enforcement as non-negotiable pillars.
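Mapping the repository layout to owning teams takes only a few lines. The team handles below are illustrative assumptions, following GitHub's CODEOWNERS syntax:

```
# .github/CODEOWNERS — illustrative team handles
/apps/web-bff/      @org/web-team
/apps/mobile-bff/   @org/mobile-team
/apps/admin-bff/    @org/admin-team
/services/          @org/platform-team
/packages/          @org/platform-team
```

A cross-boundary change now requires a review from both sides by construction, which is exactly the friction the monorepo removed.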
Brutally honest take: if your organization is not willing to invest in tooling and ownership discipline, a monorepo with BFFs will hurt more than it helps. The pattern assumes maturity. Without it, you'll move fast straight into a maintenance wall.
The 80/20 Rule: The Few Decisions That Matter Most
Roughly 80% of BFF success in monorepos comes from a small set of decisions. First, enforce communication through APIs, even internally. Second, keep BFFs thin and presentation-focused. Third, limit shared code aggressively. These are not nice-to-haves; they are the difference between a scalable system and a tangled one.
The remaining 20%—tooling choices, framework debates, folder naming—gets disproportionate attention but delivers diminishing returns. Teams often bikeshed over whether to use REST or GraphQL while ignoring the fact that their BFFs are rewriting domain logic. Focus on the leverage points. Architecture is less about cleverness and more about restraint.
Conclusion: BFFs in Monorepos Are a Test of Discipline
Backend for Frontend in a monorepo is not a silver bullet. It's a force multiplier—for good or for bad. When done well, it enables frontend teams to move fast without dragging backend services into constant compromise. When done poorly, it creates tightly coupled systems that only look clean on the surface.
The uncomfortable truth is that the pattern exposes organizational weaknesses more than technical ones. If your teams respect boundaries, invest in contracts, and treat the monorepo as an optimization—not an excuse—BFFs can thrive. If not, the architecture will rot quietly until change becomes expensive.
There's no magic here. Just clear boundaries, boring discipline, and the willingness to say “no” to convenience. That's what scales.