Lean Six Sigma for Software Architecture: Preventing Complexity as a Defect

Use CTQs, root-cause analysis, and design controls to keep architectures evolvable and low-waste.

Introduction

Software architecture is, at its core, an exercise in managing complexity. Systems begin with clear intentions: clean module boundaries, well-defined interfaces, and a shared understanding of design decisions. Then time passes. Teams grow and shrink, requirements shift, and shortcuts accumulate. What was once a legible structure becomes a labyrinth of tangled dependencies, implicit coupling, and unexplained conventions. Complexity, in most long-lived systems, is not introduced deliberately—it accretes.

The engineering discipline has no shortage of vocabulary for this: technical debt, design drift, architectural erosion. What it often lacks is a systematic process for preventing these conditions before they compound. This is where an unexpected source of insight becomes relevant: Lean Six Sigma (LSS), a management and engineering methodology born in manufacturing. While LSS is rarely discussed alongside software architecture, its core ideas—defining quality in measurable terms, tracing defects to their root causes, and implementing process controls—translate with surprising precision to the challenges of architectural governance.

This article explores how software architects and technical leaders can borrow selectively from Lean Six Sigma to treat complexity not as an inevitable entropy but as a category of defect. We will examine how to define architectural quality using Critical-to-Quality characteristics (CTQs), apply root-cause analysis to identify why complexity accumulates, and implement lightweight design controls that give teams durable guard rails without suffocating velocity.

The Problem: Complexity as an Unmanaged Defect

In manufacturing, a defect is any output that deviates from a defined quality specification. The definition is unambiguous because the specification exists. Software architecture struggles with a more insidious version of the same problem: the specification is either absent, ambiguous, or expressed only in the minds of a few senior engineers who may no longer be on the team.

When there is no explicit quality standard for an architecture, complexity cannot be recognized as a defect. It is experienced as friction—slow feature delivery, brittle deployments, high onboarding costs—but its root causes are rarely traced back to specific structural decisions. Teams respond to symptoms rather than causes: they add more documentation, more meetings, more reviewers. None of these address the underlying structural issue, which is that the architecture has drifted away from whatever implicit standard existed at the start.

Lean Six Sigma defines this class of problem with precision. In LSS terms, the system has no Voice of the Customer (VoC) translation into measurable quality attributes, no process capability baseline, and no control plan to detect drift. The manufacturing analogy is a production line with no tolerance specifications and no statistical process control charts. Without those instruments, you cannot distinguish normal variation from a genuine quality failure until the product is already in the hands of a customer who rejects it. In software, the customer equivalent is a development team trying to implement a feature that should take two days and taking two weeks because the architecture has become opaque.

This framing is not merely rhetorical. It leads to a concrete program: if complexity is a defect, then preventing it requires the same steps as preventing any manufacturing defect—define what "good" looks like, measure deviations, find their causes, implement controls, and sustain them.

Lean Six Sigma Fundamentals for Software Engineers

Before applying LSS concepts, it is worth establishing what they actually mean, stripped of the manufacturing context where they originated.

Six Sigma is a quality framework that focuses on reducing variation and defects in a process. Its central methodology is DMAIC: Define, Measure, Analyze, Improve, Control. Each phase has specific deliverables. In the Define phase, teams identify the problem and the customer requirements that constitute quality. In Measure, they quantify the current state. In Analyze, they find root causes. In Improve, they implement changes. In Control, they put mechanisms in place to prevent regression. The framework was developed at Motorola in the mid-1980s, popularized by General Electric in the 1990s, and has since been applied across industries from healthcare to logistics.

Lean is a complementary methodology focused on eliminating waste—activities that consume resources without producing value. Lean originated in the Toyota Production System and was formalized by Womack and Jones in Lean Thinking (1996). In manufacturing, waste takes forms like excess inventory, unnecessary motion, or waiting time. In software development, waste manifests as unnecessary handoffs, rework caused by unclear requirements, overengineering, and—crucially—the cognitive overhead imposed by a complex architecture that forces developers to understand more of the system than necessary to make a change.

Critical-to-Quality (CTQ) is a Lean Six Sigma concept that bridges customer requirements and measurable process parameters. A CTQ is a specific, quantifiable characteristic whose variation directly affects the customer's perception of quality. In manufacturing, a CTQ might be the thickness of a metal sheet with a tolerance of ±0.02mm. The key property of a CTQ is that it is measurable, which means it can serve as the basis for a control plan.
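The idea translates naturally into code. A minimal sketch of a CTQ as a data structure with a tolerance check — the types and names here are illustrative, not taken from any particular library:

```typescript
// A CTQ is a measurable characteristic plus an explicit tolerance.
// Illustrative types only; not from any specific quality framework library.
interface CTQ {
  name: string;
  target: number;
  tolerance: number; // acceptable deviation from target, e.g. ±0.02
}

// A measurement is a defect when it falls outside target ± tolerance.
function isDefect(ctq: CTQ, measured: number): boolean {
  return Math.abs(measured - ctq.target) > ctq.tolerance;
}

// The manufacturing example from the text: sheet thickness with ±0.02mm tolerance
const sheetThickness: CTQ = {
  name: "sheet-thickness-mm",
  target: 1.5,
  tolerance: 0.02,
};

isDefect(sheetThickness, 1.51); // within tolerance: not a defect
isDefect(sheetThickness, 1.55); // outside tolerance: a defect
```

The point of the sketch is the shape, not the numbers: a CTQ is only useful once both the characteristic and the acceptable range are explicit enough to compute against.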

Root-cause analysis in LSS most commonly uses the Ishikawa (fishbone) diagram and the Five Whys technique. Both are tools for moving from symptoms to causes systematically, rather than jumping to solutions. Design controls are the LSS Control phase deliverables: documented standards, automated checks, monitoring dashboards, and process gates that prevent a process from drifting back to its previous state after an improvement.

Each of these concepts has a direct analog in software architecture work.

Defining Architectural Quality as CTQs

The most important—and most neglected—step in architectural governance is defining what "quality" means in operational terms. Architects write Architecture Decision Records (ADRs), draw component diagrams, and document design principles, but these artifacts rarely specify measurable tolerances. A principle like "prefer loose coupling" is directional but not actionable as a quality gate. A CTQ, by contrast, specifies a measurable characteristic and an acceptable range.

To define architectural CTQs, start with the Voice of the Customer translated to the architectural context. The "customers" of a software architecture are the development teams who work within it, the operations teams who deploy and monitor it, and the business stakeholders who depend on its ability to absorb change. Their concerns map to well-known software quality attributes: modifiability, deployability, testability, performance, and security. These are the categories; CTQs make them specific.

Consider modifiability. A vague quality attribute might be stated as "the system should be easy to change." A CTQ operationalizes this: "No single feature change should require modifications to more than three packages." Or: "The average cyclomatic complexity per function in core domain modules should not exceed 10." These formulations are measurable. They can be computed automatically, tracked over time, and used as gates in a CI pipeline. They give a team the equivalent of a process control chart: a baseline, a current value, and a trigger for investigation when the value exceeds the threshold.

For a microservices architecture, relevant CTQs might include fan-in and fan-out metrics for individual services, the number of services that must be deployed to release a single business capability, or the percentage of services that can be tested in isolation without requiring a running dependency. For a layered monolith, CTQs might track the number of layer violations detected by dependency analysis tools, the ratio of domain code to infrastructure code, or the average depth of inheritance hierarchies in core domain classes.
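Fan-in and fan-out fall out directly from a flat list of service-to-service call edges. A sketch of the computation — the `Edge` shape and the service names are invented for illustration:

```typescript
// Compute fan-in (how many services call me) and fan-out (how many I call)
// from a list of caller -> callee edges. Service names are invented.
type Edge = { from: string; to: string };

function fanMetrics(
  edges: Edge[]
): Map<string, { fanIn: number; fanOut: number }> {
  const metrics = new Map<string, { fanIn: number; fanOut: number }>();
  const get = (service: string) => {
    if (!metrics.has(service)) metrics.set(service, { fanIn: 0, fanOut: 0 });
    return metrics.get(service)!;
  };
  for (const { from, to } of edges) {
    get(from).fanOut += 1;
    get(to).fanIn += 1;
  }
  return metrics;
}

const edges: Edge[] = [
  { from: "checkout", to: "payments" },
  { from: "checkout", to: "inventory" },
  { from: "orders", to: "payments" },
];

const m = fanMetrics(edges);
// payments has fan-in 2; checkout has fan-out 2 — each a candidate for a CTQ threshold
```

In practice the edge list would come from a service mesh, tracing data, or static analysis of client code; the aggregation itself stays this simple.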

The process of defining CTQs is itself valuable, independent of whether the metrics are ever automated. It forces a team to have an explicit conversation about what they are optimizing for, and it produces a shared vocabulary for evaluating trade-offs. When a developer proposes introducing a new cross-cutting dependency, the question is no longer "does this feel right?" but "does this violate our CTQ for fan-out?"

// Example: Automated CTQ check in a CI pipeline
// CTQ: No module in /src/domain should import from /src/infrastructure

import * as path from "path";
import * as fs from "fs";
import { parse } from "@typescript-eslint/typescript-estree";

interface CTQViolation {
  file: string;
  importPath: string;
  line: number;
}

function checkDomainIsolationCTQ(domainDir: string): CTQViolation[] {
  const violations: CTQViolation[] = [];
  const files = walkDirectory(domainDir).filter((f) => f.endsWith(".ts"));

  for (const file of files) {
    const source = fs.readFileSync(file, "utf-8");
    const ast = parse(source, { loc: true });

    for (const node of ast.body) {
      if (
        node.type === "ImportDeclaration" &&
        node.source.value.toString().includes("/infrastructure")
      ) {
        violations.push({
          file,
          importPath: node.source.value.toString(),
          line: node.loc?.start.line ?? 0,
        });
      }
    }
  }

  return violations;
}

function walkDirectory(dir: string): string[] {
  const results: string[] = [];
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) results.push(...walkDirectory(fullPath));
    else results.push(fullPath);
  }
  return results;
}

// In CI: fail the build if CTQ is violated
const violations = checkDomainIsolationCTQ("./src/domain");
if (violations.length > 0) {
  console.error("CTQ VIOLATION: Domain isolation boundary breached");
  violations.forEach((v) =>
    console.error(`  ${v.file}:${v.line} imports ${v.importPath}`)
  );
  process.exit(1);
}

Applying Root-Cause Analysis to Architectural Complexity

Identifying that complexity exists is not the same as understanding why it exists. Most discussions of technical debt stop at naming the problem ("this service is a monolith," "these modules are too coupled") without investigating the causal chain that produced the condition. Root-cause analysis provides a structured approach to that investigation.

The Five Whys technique, despite its simplicity, is effective for architectural problems because it forces teams to move past proximate causes to systemic ones. Consider an observation: "Adding a new payment method requires changes to twelve files across four modules." Why? Because the payment processing logic is duplicated across modules. Why is it duplicated? Because when each module was built, the shared abstraction did not exist yet. Why did the shared abstraction not exist? Because new modules were built by copying from existing ones rather than refactoring the common logic into a shared library. Why was copying preferred over refactoring? Because the team had no time allocated for cross-cutting refactoring work, and copying was faster in the short term. Why was no time allocated? Because the sprint planning process did not treat architectural maintenance as a first-class work type.

This chain leads to a root cause that is not technical but organizational: a planning process that systematically underinvests in architectural maintenance. The technical symptoms (duplication, coupling) are downstream effects. Addressing the root cause requires a process change, not just a refactoring ticket. Without this insight, a team might spend weeks consolidating the payment logic—and then reproduce the same pattern in the next feature because the underlying process that generates duplication has not changed.

The Ishikawa diagram is useful when root causes are multidimensional. Architectural complexity typically has causes in at least four categories: process (how work is planned and reviewed), people (skills, turnover, communication patterns), tools (absence of static analysis, inadequate test coverage), and architecture itself (structural choices that make drift easier). A fishbone diagram drawn with these axes makes visible the relative contribution of each category and helps prioritize interventions.

Root-cause analysis is most valuable when conducted as a team activity, not as a solo architectural assessment. Developers who work within the architecture daily have observations that an architect reviewing diagrams will miss. A structured session using Five Whys or a fishbone diagram, with a facilitator who ensures the group moves from symptoms to causes rather than jumping to solutions, typically surfaces insights that pure code analysis cannot.

Implementing Lightweight Design Controls

The Control phase of DMAIC is where most architectural governance programs fail. Teams perform an architectural review, identify problems, execute a refactoring effort, and then watch the same problems re-emerge within six months. The failure is not in the improvement work; it is in the absence of controls that would prevent regression.

In manufacturing, a control plan specifies what to measure, how often, what the acceptable range is, and what action to take when the range is exceeded. The software equivalent is a combination of automated checks, process gates, and periodic review rituals. The key word is lightweight: controls that impose significant overhead will be bypassed under pressure. The goal is to make the right thing easy and the wrong thing visible.

Automated checks are the most reliable controls because they run without human discipline. Dependency analysis tools like dependency-cruiser (JavaScript/TypeScript), ArchUnit (Java), or import-linter (Python) can enforce architectural rules as part of a CI pipeline. These tools allow teams to declare structural constraints—"module A must not import from module B," "all classes in the domain layer must not reference infrastructure packages"—and fail the build when a violation is introduced. This converts the CTQs defined earlier into enforceable gates.

# Example: Python import-linter configuration for architectural boundary control
# .importlinter configuration file

[importlinter]
root_package = myapp

[importlinter:contract:domain-isolation]
name = Domain layer must not import from infrastructure
type = forbidden
source_modules =
    myapp.domain
forbidden_modules =
    myapp.infrastructure
    myapp.adapters

[importlinter:contract:application-layering]
name = Layers may only import downward (adapters -> application -> domain)
type = layers
# Layers are listed from highest to lowest; each may import only those below it.
layers =
    myapp.adapters
    myapp.application
    myapp.domain

Process gates are controls embedded in workflows rather than automated tools. Architecture Decision Records (ADRs) are a well-established form of process gate: before a significant design decision is implemented, it must be documented and reviewed. The value of an ADR is not only the documentation it produces but the forcing function it creates. A developer who must write an ADR before introducing a new dependency is more likely to consider alternatives than one who can simply add an import statement.

Pull request review checklists are another form of process gate. A checklist that explicitly asks "Does this change introduce a new dependency between layers that did not previously exist?" or "Does this change require modifying more than N files?" makes structural considerations visible at the moment when they can still be reversed. The checklist does not require an architect to review every pull request; it distributes architectural awareness to the team.

Periodic control reviews—analogous to LSS control charts reviewed on a cadence—provide the systemic view that individual PRs cannot. A monthly or quarterly session where the team reviews trend data on architectural metrics (dependency count, test coverage by layer, build time, deployment frequency) allows early detection of drift before it becomes entrenched. These sessions are most effective when the metrics are automated and visualized, so the conversation is grounded in data rather than impressions.
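A crude stand-in for a control-chart limit is to flag any metric whose latest reading exceeds its baseline by more than an agreed percentage. A sketch, with illustrative names and numbers:

```typescript
// Flag drift when the latest reading exceeds the baseline by more than an
// allowed percentage — a deliberately simple stand-in for a control limit.
interface MetricSeries {
  name: string;
  baseline: number;
  values: number[]; // one reading per review period
}

function hasDrifted(series: MetricSeries, allowedIncreasePct: number): boolean {
  const latest = series.values[series.values.length - 1];
  return latest > series.baseline * (1 + allowedIncreasePct / 100);
}

// Example series: layer violations crept from 4 to 9 over four periods
const layerViolations: MetricSeries = {
  name: "layer-violations",
  baseline: 4,
  values: [4, 5, 4, 9],
};

hasDrifted(layerViolations, 50); // 9 > 4 * 1.5 → drift, worth investigating
```

A real statistical process control chart would use the series' own variation to set limits; the fixed-percentage version above is usually enough to make a quarterly review conversation concrete.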

A Worked Example: Applying DMAIC to a Layered Monolith

To make these concepts concrete, consider a team maintaining a layered monolith (presentation, application, domain, infrastructure) that has developed a well-recognized problem: feature delivery time has been increasing quarter over quarter, and developers frequently report that they cannot make changes in one module without inadvertently breaking behavior in another. This is the LSS "Define" phase input: a documented, measurable customer complaint.

Define: The team frames the problem as "architectural coupling is increasing delivery friction." They identify the CTQs: (1) average number of files modified per feature, measured from commit history; (2) number of layer violations detected by dependency-cruiser; (3) percentage of unit tests that require a live database. Current baselines are established: 18 files per feature, 47 layer violations, and 34% of tests requiring database state.
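The first CTQ can be computed by grouping changed files under feature tickets in commit history. A sketch that parses sample `git log --name-only` output — the ticket-in-commit-subject convention (`FEAT-12: ...`) is an assumption about the team's workflow, not a git feature:

```typescript
// Estimate the "files modified per feature" CTQ from git history.
// Assumes commits reference a feature ticket like "FEAT-12: ..." and the log
// was produced with something like: git log --pretty=format:"@%s" --name-only
function averageFilesPerFeature(log: string): number {
  const filesByTicket = new Map<string, Set<string>>();
  let current: Set<string> | null = null;
  for (const line of log.split("\n")) {
    if (line.startsWith("@")) {
      const m = line.match(/^@([A-Z]+-\d+)/);
      if (m) {
        current = filesByTicket.get(m[1]) ?? new Set<string>();
        filesByTicket.set(m[1], current);
      } else {
        current = null; // commit without a ticket reference: skip its files
      }
    } else if (line.trim() !== "" && current) {
      current.add(line.trim());
    }
  }
  const sizes = [...filesByTicket.values()].map((s) => s.size);
  return sizes.length > 0 ? sizes.reduce((a, b) => a + b, 0) / sizes.length : 0;
}

// Sample log text mimicking the assumed format
const sampleLog = [
  "@FEAT-12: add new payment method",
  "src/payments/stripe.ts",
  "src/payments/router.ts",
  "",
  "@FEAT-12: follow-up fix",
  "src/payments/router.ts",
  "",
  "@FEAT-13: recalculate totals",
  "src/cart/total.ts",
].join("\n");

averageFilesPerFeature(sampleLog); // (2 + 1) / 2 = 1.5 files per feature
```

In a real pipeline the log text would be piped in from git; the grouping logic is the part that turns raw history into a trackable number.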

Measure: The team runs dependency-cruiser across the codebase and produces a dependency graph. They also analyze six months of commit history to understand which modules co-change most frequently. They find that the application layer directly imports from the infrastructure layer in 31 locations—a structural violation of the layering contract—and that the domain layer contains persistence annotations (JPA/Hibernate in this case), mixing persistence concerns with business logic.

Analyze: A Five Whys session reveals that the infrastructure imports in the application layer began when a developer needed access to a configuration object that was only instantiated in the infrastructure layer. The root cause was that the configuration abstraction did not exist in the domain layer, so the developer reached across layer boundaries for expediency. This pattern was replicated by subsequent developers who saw the existing violation as precedent. The Ishikawa diagram adds: no CI check existed to catch layer violations, and no ADR process required documenting cross-layer dependencies.

Improve: The team implements three targeted changes: (1) they introduce a Configuration interface in the domain layer and an infrastructure implementation that satisfies it, eliminating the direct imports; (2) they move persistence annotations to separate repository classes in the infrastructure layer, restoring the domain layer to pure business logic; (3) they add dependency-cruiser to the CI pipeline with rules that prohibit application-to-infrastructure imports.
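The first change is classic dependency inversion. A sketch of its shape, with invented names standing in for the team's actual code:

```typescript
// Before: application code imported a concrete config object from the
// infrastructure layer. After: the domain owns the abstraction and
// infrastructure supplies the implementation.

// src/domain/configuration.ts — the abstraction lives with the domain
interface Configuration {
  get(key: string): string | undefined;
}

// src/infrastructure/env-configuration.ts — the implementation lives outside
class EnvConfiguration implements Configuration {
  constructor(private readonly env: Record<string, string | undefined>) {}
  get(key: string): string | undefined {
    return this.env[key];
  }
}

// src/application/checkout-service.ts — depends only on the domain interface
class CheckoutService {
  constructor(private readonly config: Configuration) {}
  paymentProvider(): string {
    return this.config.get("PAYMENT_PROVIDER") ?? "default";
  }
}

// Composition root (e.g. main.ts) wires the two together
const service = new CheckoutService(
  new EnvConfiguration({ PAYMENT_PROVIDER: "stripe" })
);
service.paymentProvider(); // "stripe"
```

After this inversion, the dependency-cruiser rule from step (3) can hold: nothing in the application layer needs to name an infrastructure module.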

Control: The team adds layer violation count to their weekly engineering metrics dashboard. They adopt an ADR template with a required section: "Architectural impact: does this decision introduce or remove a cross-layer dependency?" They schedule a quarterly architecture health review where the CTQ metrics are reviewed as a team. Six months after the improvement, layer violations have dropped to 4 (all with documented ADRs explaining the exception), and average files per feature has fallen from 18 to 11.

This sequence is not dramatic. It does not require a complete rewrite or a dedicated architecture team. It requires a disciplined process applied to a bounded problem. That is exactly what DMAIC is designed to produce.

Trade-offs and Pitfalls

Applying LSS to software architecture produces genuine benefits, but it also carries risks that practitioners should understand before committing to the approach.

The most significant risk is over-formalization. Lean Six Sigma was designed for high-volume, repeatable manufacturing processes. Software development is neither fully repeatable nor high-volume in the same sense. If a team applies LSS with the same rigor used in a semiconductor fabrication plant—complete with formal measurement system analysis, process capability indices, and statistical hypothesis testing—they will spend more time on the quality process than on the product. The appropriate level of formalism for most software teams is much lighter: define a small number of CTQs that matter, automate the measurements you can automate, and use the qualitative LSS tools (Five Whys, fishbone diagrams) as facilitation techniques rather than formal deliverables.

A related pitfall is metric fixation. CTQs are proxies for quality, not quality itself. A team that hits its dependency violation target by marking all violations as documented exceptions has technically satisfied the metric while defeating its purpose. This is Goodhart's Law in architectural governance: when a measure becomes a target, it ceases to be a good measure. Controls need periodic recalibration, and teams need to maintain the judgment to question whether the metrics still reflect what they care about.

Organizational resistance is a practical challenge that LSS practitioners in manufacturing are familiar with and software teams often underestimate. Introducing structured root-cause analysis and explicit quality gates changes how engineering work is governed, and some developers will experience this as bureaucracy or lack of trust. The framing matters: LSS tools work best when they are introduced as instruments of collective problem-solving, not as compliance mechanisms imposed by an architecture team. Developers who participate in defining CTQs and designing controls are far more likely to respect them than those who receive them as mandates.

Finally, tool availability is a genuine constraint. The automated dependency analysis tools mentioned in this article vary significantly in maturity and coverage across ecosystems. Java and the JVM ecosystem have excellent tooling (ArchUnit, JDepend, SonarQube). TypeScript has dependency-cruiser and eslint-plugin-import. Python has import-linter and pylint. Languages with weaker static analysis ecosystems—some dynamic languages, some newer ecosystems—may require custom tooling or may make certain CTQs impractical to automate. Teams should choose CTQs that they can actually measure, even if that means starting with a smaller, more automatable set.

Best Practices for Implementation

Start with a bounded scope. Applying DMAIC to an entire large codebase in one effort is likely to produce an exhausting analysis that goes nowhere. Instead, identify one architectural concern that is actively causing team pain—a module with high change frequency and high defect rate, a service with excessive fan-in, a layer boundary that is consistently violated—and apply the full DMAIC cycle to that scope. A successful, bounded improvement builds the team's confidence in the methodology and produces a reusable playbook.

Define CTQs collaboratively, not in isolation. The architectural quality attributes that matter most are often not the ones that architects instinctively prioritize. A team that frequently works with a codebase may find that testability is far more impactful on their daily experience than the coupling metrics an architect would choose. Conducting a Voice of the Customer exercise with the development team—asking them where they lose the most time, what changes feel risky, what parts of the codebase they avoid—produces CTQs that are grounded in lived experience rather than theoretical ideals.

Automate what you can automate. Human review is expensive, inconsistent under time pressure, and does not scale. Every CTQ that can be expressed as an automated check in a CI pipeline is a control that will be enforced reliably regardless of team size, time pressure, or turnover. The investment in writing and maintaining these checks pays compounding returns as long as the architecture is maintained.

Document exceptions explicitly, and treat them as data. No architectural rule is universally correct, and there will be legitimate reasons to violate a CTQ in specific cases. An ADR that documents the exception, explains the trade-off, and specifies a path to resolution (or an acknowledgment that the exception is permanent) converts a silent violation into a conscious architectural decision. Over time, exception patterns reveal where the CTQs need refinement.
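One way to make exceptions both explicit and machine-checkable is to keep them in a small registry that the CI check consults. The shape below is an illustration, not a standard format:

```typescript
// Documented exceptions to a CTQ, kept next to the check that enforces it.
// Each entry ties a violation to an ADR and a review date, so exceptions
// are visible data rather than silent rule bypasses. All names are invented.
interface CTQException {
  file: string; // file allowed to violate the rule
  adr: string; // ADR documenting the trade-off
  reviewBy: string; // ISO date when the exception must be revisited
}

const exceptions: CTQException[] = [
  { file: "src/domain/legacy-pricing.ts", adr: "ADR-041", reviewBy: "2025-06-30" },
];

// Filter raw violations down to the undocumented ones the build should fail on.
function undocumented(violations: string[], allowed: CTQException[]): string[] {
  const allowedFiles = new Set(allowed.map((e) => e.file));
  return violations.filter((v) => !allowedFiles.has(v));
}

undocumented(
  ["src/domain/legacy-pricing.ts", "src/domain/orders.ts"],
  exceptions
); // only "src/domain/orders.ts" remains, so the build fails on one violation
```

Because the registry is data, the "exception patterns reveal where CTQs need refinement" step becomes a query: count entries per rule, or list entries past their review date.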

Treat architectural health as a first-class work type in planning. Root-cause analysis frequently reveals that architectural complexity accumulates because maintenance work is not protected in the planning process. If every sprint is entirely consumed by feature work, architectural controls will erode. Most engineering planning frameworks—Scrum, Shape Up, dual-track development—can accommodate a protected allocation for architectural maintenance work. The key is making the case for this allocation in the same data-driven language that business stakeholders understand: architectural debt increases delivery time, which reduces feature throughput, which has a measurable cost.

Analogies and Mental Models

The most useful mental model for this entire approach is the analogy to structural engineering. A civil engineer designing a bridge does not simply build and hope. They specify load tolerances, material properties, and safety margins. They perform inspections on a defined schedule. When measurements indicate that a component is approaching its tolerance limit, they intervene before failure. The bridge does not degrade silently; it is monitored against explicit specifications.

Software architecture has traditionally lacked this discipline. The LSS approach is an attempt to introduce it: specify the tolerances (CTQs), measure against them continuously (automated checks), investigate deviations (root-cause analysis), and control for drift (design controls and process gates). The analogy is imperfect—software is more malleable than steel—but the underlying discipline of explicit specifications and continuous measurement translates directly.

A second useful analogy is debt management. Technical debt is a well-understood metaphor in software. What LSS adds is the equivalent of a debt management strategy: not just acknowledging that debt exists, but identifying its causes (root-cause analysis), setting a limit on how much new debt is acceptable (CTQ thresholds), and building a repayment schedule into the operating process (protected maintenance allocation). Without this strategy, debt acknowledgment leads to paralysis rather than action.

80/20 Insight

If this entire framework feels overwhelming, three concepts produce most of the value.

First, define one or two measurable CTQs for the architectural concerns that are currently causing the most pain. Most of the benefit comes from having any explicit quality standard, not from having a comprehensive one. A single CTQ for layer violations, automated in CI, will prevent an enormous amount of drift.

Second, run a single Five Whys session on the most painful architectural problem the team has. The insights from this session will almost always reveal a process or organizational root cause that a technical fix alone cannot address. Addressing that root cause—even partially—produces more lasting improvement than a pure refactoring effort.

Third, add one automated architectural check to the CI pipeline. It does not matter which one. The act of encoding an architectural rule as an automated check changes the team's relationship to that rule. It becomes enforced rather than advisory, which is the difference between a guideline and a control.

Key Takeaways

Five practical steps readers can apply immediately:

  1. Map your team's pain to quality attributes. Ask developers where they lose the most time and translate those answers into quality attributes (modifiability, testability, deployability). Use these as the starting point for CTQ definition.
  2. Select two measurable CTQs and establish baselines. Choose metrics that can be computed automatically from your codebase today. Run them, record the baseline, and decide what threshold would trigger action.
  3. Run a Five Whys session on your most painful architectural problem. Facilitate a 60-minute session with three to five developers who work in the affected area. Document the causal chain and identify the root cause category (process, people, tools, or structure).
  4. Add one architectural constraint to your CI pipeline. Use the dependency analysis tool appropriate for your ecosystem. Start with one rule that encodes a boundary you care about. Iterate from there.
  5. Allocate protected time for architectural maintenance in the next planning cycle. Frame the allocation in delivery terms: "reducing layer violations by 50% is expected to reduce average change time for domain features by 30%." Use data from your CTQ baselines to support the estimate.

Conclusion

Lean Six Sigma did not emerge from software engineering, but it addresses a class of problem that software engineering has long struggled with: the gradual, often invisible degradation of quality in the absence of explicit standards and continuous measurement. Architectural complexity is not destiny. It is the accumulated result of decisions made without quality specifications, in processes that do not protect maintenance work, and in the absence of controls that would have made the drift visible while it was still reversible.

The translation of LSS concepts to software architecture is not a matter of borrowing jargon. It is a matter of adopting the underlying discipline: define quality in measurable terms, measure it continuously, trace deviations to their causes, improve the process that generates defects, and control for regression. Applied with appropriate lightness—not the full formalism of a Six Sigma Black Belt program, but the core discipline of CTQs, root-cause analysis, and design controls—this approach can make architectural governance both more rigorous and more accessible to the engineering teams who must practice it daily.

The most important shift is conceptual: treating complexity as a defect rather than as an inevitable condition. Once a team accepts that framing, the question changes from "how do we live with this complexity?" to "what process change will prevent this class of complexity from recurring?" That is precisely the question that Lean Six Sigma is designed to answer.

References

  1. Womack, J. P., & Jones, D. T. (1996). Lean Thinking: Banish Waste and Create Wealth in Your Corporation. Simon & Schuster.
  2. Harry, M., & Schroeder, R. (2000). Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations. Doubleday.
  3. Pande, P. S., Neuman, R. P., & Cavanagh, R. R. (2000). The Six Sigma Way: How GE, Motorola, and Other Top Companies Are Honing Their Performance. McGraw-Hill.
  4. Ford, N., Parsons, R., & Kua, P. (2017). Building Evolutionary Architectures. O'Reilly Media.
  5. Richards, M., & Ford, N. (2020). Fundamentals of Software Architecture. O'Reilly Media.
  6. Nygard, M. (2018). Release It! Design and Deploy Production-Ready Software (2nd ed.). Pragmatic Bookshelf.
  7. Hohpe, G., & Woolf, B. (2003). Enterprise Integration Patterns. Addison-Wesley.
  8. ISO 25010:2011 – Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — System and software quality models. International Organization for Standardization.
  9. dependency-cruiser – Validate and visualize dependencies in JavaScript and TypeScript projects. https://github.com/sverweij/dependency-cruiser
  10. ArchUnit – A Java architecture testing library for specifying and asserting architecture constraints. https://www.archunit.org/
  11. import-linter – Enforce architectural rules on Python imports. https://import-linter.readthedocs.io/
  12. Rozanski, N., & Woods, E. (2011). Software Systems Architecture: Working with Stakeholders Using Viewpoints and Perspectives (2nd ed.). Addison-Wesley.
  13. Fowler, M. (2019). Refactoring: Improving the Design of Existing Code (2nd ed.). Addison-Wesley.
  14. Khononov, V. (2021). Learning Domain-Driven Design. O'Reilly Media.
  15. Forsgren, N., Humble, J., & Kim, G. (2018). Accelerate: The Science of Lean Software and DevOps. IT Revolution Press.