Cross-Functional Collaboration Tools for Developers: What Actually Works in 2026

A curated breakdown of tools that enhance engineering collaboration

Introduction

Software development has never been a solo act, but the way teams collaborate has changed faster than most organisations have adapted. In 2026, engineering teams routinely span multiple time zones, report into product and design functions simultaneously, and operate across codebases that integrate external APIs, internal platforms, and AI-assisted workflows. The friction that emerges from this complexity is not primarily technical — it is organisational and communicative.

The tools that address this friction have proliferated wildly. Every category — documentation, incident response, design handoff, API management, observability — now has a dozen contenders, many with overlapping feature sets. The result is tool sprawl: teams end up with seven communication channels, three documentation platforms, and no shared understanding of which one to use for what. The goal of this article is to cut through that sprawl and examine what actually functions in practice, for what kinds of workflows, and at what cost.

This is not a comprehensive product roundup. It is a structured engineering perspective on collaboration tools organised by the problem they solve, the integration cost they carry, and the organisational preconditions required for them to work. If you are a senior engineer, a technical lead, or an engineering manager trying to make a principled choice about your team's toolchain, this is written for you.

The Real Problem: Coordination Overhead, Not Tool Count

Before evaluating any tool, it is worth naming the underlying problem clearly. The bottleneck in cross-functional software development is almost never raw technical capability. Most modern teams have sufficient compute, capable developers, and mature frameworks. What slows them down is coordination overhead — the time and cognitive load required to synchronise state across people, roles, and systems.

Coordination overhead appears in recognisable forms: a product manager who does not know the current state of a feature branch; a designer whose Figma specs diverge from the React component library after three sprint reviews; an on-call engineer who lacks the context to diagnose an incident because runbooks live in five different places. These are not failures of individual discipline — they are structural failures caused by misaligned information systems.

The tools that genuinely help share a common property: they reduce the number of places a person must look to understand the current state of something that matters to their work. Tools that add features without reducing this lookup cost — regardless of how sophisticated their UX is — typically increase overhead over time rather than decreasing it. This framing should guide every purchasing or adoption decision an engineering organisation makes.

Documentation and Knowledge Management

The Problem with Wikis

Wikis were a reasonable answer to knowledge management in 2005. In 2026, they are a liability masquerading as an asset. The core issue is that wikis are pull-based: information lives there until someone updates it, and that someone is rarely the same person who last touched the underlying system. The result is stale documentation that developers learn to distrust, which creates a vicious cycle — people stop writing to wikis because they know others will not read them, and others stop reading because they know the content is unreliable.

The shift that has worked for mature engineering organisations is moving toward documentation that is generated from, or tightly coupled to, the systems it describes. API documentation generated from OpenAPI specifications, architecture decision records (ADRs) versioned alongside code, runbooks maintained in the same repository as the services they cover — these approaches keep documentation alive because changing the system forces a decision about whether the documentation needs to change too.
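One structural way to enforce this coupling is a CI check that fails a PR when service code changes without a matching documentation change. A minimal sketch, assuming a hypothetical `services/<name>/` monorepo layout with `runbooks/` and `docs/` subdirectories per service — the layout and naming are illustrative assumptions, not a prescribed standard:

```python
# Sketch of a docs-currency check for CI: flag services whose code changed
# in a PR without an accompanying docs/ or runbooks/ change.
# Assumed layout: services/<name>/src/..., services/<name>/runbooks/...

def services_missing_doc_updates(changed_paths: list[str]) -> set[str]:
    """Return service names with code changes but no docs/runbook changes."""
    code_changed: set[str] = set()
    docs_changed: set[str] = set()
    for path in changed_paths:
        parts = path.split("/")
        if len(parts) < 3 or parts[0] != "services":
            continue  # ignore files outside the services tree
        service, subdir = parts[1], parts[2]
        if subdir in ("runbooks", "docs"):
            docs_changed.add(service)
        else:
            code_changed.add(service)
    return code_changed - docs_changed
```

Wired into CI against the PR's changed-file list, a non-empty result fails the build with the list of services whose documentation was not touched.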

Notion and Confluence: Structured Writing for Non-Code Artifacts

For documentation that does not map directly to code — product requirements, onboarding guides, team rituals, meeting notes — structured writing tools like Notion and Confluence remain useful, but only with deliberate governance. The critical practice is a single authoritative space per document type. If onboarding docs live in Notion, they should only live in Notion. The moment a document type has two canonical homes, neither is authoritative.

Confluence has the advantage of deep integration with Atlassian's Jira ecosystem, making it the natural choice for organisations already invested there. Notion's block-based structure is more flexible and generally preferred by smaller teams or those building internal tooling. Both are only as good as the discipline applied to maintaining them — and that discipline must be structural (enforced by tooling and process) rather than aspirational (relying on individual motivation).

Backstage: The Developer Portal Pattern

A growing number of large engineering organisations have converged on the developer portal model, exemplified by Spotify's open-source Backstage platform. The core insight is that the relationship between a developer and the systems they own should be mediated by a single, structured interface that surfaces service ownership, API contracts, deployment status, incident history, and documentation together.

Backstage is not a simple installation — it requires meaningful investment to integrate with your CI/CD pipeline, service registry, cloud provider, and documentation sources. But organisations that have made that investment consistently report significant reductions in the time required for a new engineer to become productive on an unfamiliar service. The developer portal pattern is the closest thing to a consensus solution for the "I need context fast" problem that engineering at scale consistently produces.

# Example: Backstage catalog-info.yaml for a service
# This file lives in the service repository and registers the service with Backstage

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payment-processor
  description: Handles all payment transaction flows for checkout and subscriptions
  annotations:
    github.com/project-slug: acme/payment-processor
    pagerduty.com/integration-key: abc123xyz
    backstage.io/techdocs-ref: dir:.
  tags:
    - payments
    - critical-path
spec:
  type: service
  lifecycle: production
  owner: payments-team
  providesApis:
    - payment-processor-api
  dependsOn:
    - component:fraud-detection-service
    - resource:postgres-payments-db

This catalog entry, committed alongside the service code, means that Backstage can automatically surface the service owner, its API contracts, its dependencies, and its documentation from a single file. When the service changes, this file changes — and the developer portal reflects it.

Design Handoff and Frontend Collaboration

Why Handoff Remains Hard

The distance between a designer's intent and an engineer's implementation is one of the most persistent sources of rework in product development. It is not primarily a tools problem — it is a model problem. Designers work in the domain of visual composition and user experience; engineers work in the domain of component trees, state machines, and layout algorithms. These models are not naturally isomorphic, and no tool fully bridges that gap.

What tools can do is reduce the number of ambiguous decisions an engineer must make when translating a design into code. The closer a design tool's output is to the developer's working model, the less translation work is required, and the fewer opportunities exist for divergence.

Figma's Dev Mode

Figma's developer mode, available in professional and enterprise tiers, represents a meaningful improvement over inspecting designs in the standard editor. It surfaces computed CSS values, spacing tokens, and asset exports in a format oriented toward implementation rather than composition. The integration with design token pipelines — exporting named variables that map to your design system's CSS custom properties — is particularly useful for maintaining consistency between Figma and a production component library.

The practical limitation is that Figma Dev Mode shows you what something should look like, not how to build it. For teams with a mature component library, it works well because engineers can map a design to existing components and identify gaps. For teams without that foundation, it risks producing highly specific implementations that do not generalise — a different form of the same problem.

Storybook as the Source of Truth

Many engineering organisations have found Storybook to be the most effective bridge between design and engineering, precisely because it forces components to be developed in isolation and documented with their variants, states, and props made explicit. When a designer can navigate a live Storybook to see the full range of states a component supports, the design-to-implementation conversation changes from "build this" to "which existing component handles this case, and does it need a new variant?"

The key practice is treating Storybook as a living specification, not a historical record. Components should have stories for every significant state — loading, error, empty, populated — and those stories should be reviewed as part of the PR process for any component change. Teams that adopt this practice typically reduce design review cycles significantly because ambiguity is surfaced during development rather than during handoff.

// Example: Storybook story for a Button component with all relevant states documented
// Button.stories.tsx

import type { Meta, StoryObj } from "@storybook/react";
import { Button } from "./Button";

const meta: Meta<typeof Button> = {
  title: "Design System/Button",
  component: Button,
  parameters: {
    layout: "centered",
    docs: {
      description: {
        component:
          "Primary action component. Use `variant='primary'` for the main CTA per page. Destructive actions must use `variant='destructive'` with an additional confirmation step.",
      },
    },
  },
  argTypes: {
    variant: {
      control: "select",
      options: ["primary", "secondary", "ghost", "destructive"],
    },
    isLoading: { control: "boolean" },
    isDisabled: { control: "boolean" },
  },
};

export default meta;
type Story = StoryObj<typeof Button>;

export const Primary: Story = {
  args: { variant: "primary", children: "Confirm Payment" },
};

export const Loading: Story = {
  args: { variant: "primary", children: "Processing...", isLoading: true },
};

export const Destructive: Story = {
  args: { variant: "destructive", children: "Delete Account" },
};

Incident Response and Operational Collaboration

The Anatomy of a Well-Handled Incident

Incident response is the highest-stakes cross-functional collaboration scenario in software engineering. A production incident simultaneously demands fast diagnosis (an engineering problem), clear stakeholder communication (a business problem), and structured coordination between people who are often working under stress for the first time together on this specific failure mode. Tools that address only one of these dimensions tend to make the others worse.

The maturity model for incident response collaboration has three stages that most organisations pass through sequentially. The first is reactive chaos — incidents are handled via direct message with no structure, no designated roles, and no postmortem process. The second is process-heavy formalism — every incident triggers a rigid runbook, regardless of severity, creating overhead that makes engineers avoid escalating until situations are serious. The third is adaptive structure — clear roles and communication patterns that scale with incident severity, supported by tooling that surfaces context automatically rather than requiring engineers to seek it.

PagerDuty and Incident.io

PagerDuty remains the dominant alerting and on-call scheduling platform for larger engineering organisations. Its strength is integrations: it connects cleanly with every major monitoring tool (Datadog, Prometheus, Grafana, CloudWatch) and provides reliable escalation policies that enforce clear on-call ownership. The core discipline required is keeping alert routing rules current — stale PagerDuty routing is worse than no routing, because engineers assume the right person was paged when they were not.

Incident.io has emerged as a strong complement (and for some teams, replacement) for the incident coordination layer. It creates a structured Slack channel for each incident, automatically applies severity-based workflows, tracks the timeline, assigns roles, and generates a draft postmortem from the conversation history. The value is in reducing the cognitive overhead of running an incident: the tool handles the ceremony so engineers can focus on diagnosis. For teams that have struggled to maintain postmortem culture because the process feels burdensome, Incident.io's automatic draft generation meaningfully lowers the activation energy.

Runbooks as Code

Runbooks — step-by-step guides for diagnosing and resolving known failure modes — are one of the most consistently undervalued assets in an engineering organisation. Most teams have them in some form, but they tend to live in a wiki, diverge from the systems they describe within weeks of being written, and prove difficult to navigate under incident-time cognitive load.

The practice of maintaining runbooks in the same repository as the service they cover, and linking them from the service's monitoring dashboards and alert notifications, addresses all three of these problems. When an alert fires in PagerDuty or Datadog, the notification can include a direct link to the relevant runbook. When the service changes, the runbook PR can be included in the same review cycle. This is the runbook-as-code pattern, and it is one of the highest-leverage improvements available to teams that have not yet adopted it.

# Example: Datadog monitor with runbook link embedded in alert message
# monitors/payment-processor-latency.yaml

name: "Payment Processor P99 Latency > 2s"
type: metric alert
query: "avg(last_5m):p99:trace.express.request{service:payment-processor} > 2"
message: |
  ## Payment Processor Latency Elevated

  P99 latency has exceeded 2s for the past 5 minutes.

  **Runbook:** https://github.com/acme/payment-processor/blob/main/runbooks/high-latency.md
  **Dashboard:** https://app.datadoghq.com/dashboard/abc-123
  **Owner:** @payments-oncall

  Common causes:
  - Downstream fraud-detection-service degradation
  - Database connection pool exhaustion (check /metrics/db-pool)
  - Unusual transaction volume (check /metrics/transaction-rate)

options:
  thresholds:
    critical: 2
    warning: 1.2
  notify_no_data: false
  renotify_interval: 30

API Collaboration and Contract Management

The Contract Problem in Distributed Systems

As service-oriented and microservice architectures have matured, API contracts — the formal definition of what a service exposes and what consumers can depend on — have become a critical coordination surface. A change to an API that is not communicated to consumers before deployment is one of the most common sources of unplanned incidents in distributed systems. The tools that address this are primarily in the API contract management space.

The OpenAPI specification (formerly Swagger) has become the de facto standard for documenting RESTful HTTP APIs. Its value is not just documentation — it is the foundation for generating client SDKs, mock servers, automated contract tests, and interactive documentation. Any team building internal or external HTTP APIs without an OpenAPI specification is accepting unnecessary coordination debt.
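The contract-testing payoff is easy to illustrate: even without a full toolchain, a consumer can assert that responses honour the required fields and types the spec declares. A minimal, stdlib-only sketch against a simplified, inlined schema fragment — the endpoint and field names are illustrative, and a real setup would load the spec file and use a proper validator such as jsonschema:

```python
# Sketch of a lightweight contract check derived from an OpenAPI schema
# fragment (inlined here; in practice you would parse the spec file).
# Field names are illustrative assumptions.

OPENAPI_FRAGMENT = {  # components.schemas.Payment, simplified
    "required": ["payment_id", "amount_cents", "currency"],
    "properties": {
        "payment_id": {"type": "string"},
        "amount_cents": {"type": "integer"},
        "currency": {"type": "string"},
    },
}

_TYPES = {"string": str, "integer": int, "boolean": bool, "number": (int, float)}

def contract_violations(response: dict, schema: dict = OPENAPI_FRAGMENT) -> list[str]:
    """Return human-readable schema violations; an empty list means the response conforms."""
    problems = []
    for field in schema["required"]:
        if field not in response:
            problems.append(f"missing required field: {field}")
    for field, spec in schema["properties"].items():
        if field in response and not isinstance(response[field], _TYPES[spec["type"]]):
            problems.append(f"wrong type for {field}: expected {spec['type']}")
    return problems
```

Run in CI against a deployed staging endpoint, a check like this catches the most common breaking change — a removed or retyped field — before any consumer does.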

Postman and Bruno for API Development Collaboration

Postman's collaborative workspace model — where API collections, environments, and test suites are shared across a team — has made it the standard tool for API development collaboration in most organisations. The key practice is treating the Postman collection as a first-class artifact alongside the API implementation: collections should be version-controlled, reviewed in PRs, and kept synchronised with the OpenAPI spec.

Bruno, an open-source alternative that stores collections as plain text files directly in the repository, has gained adoption among teams that prefer to avoid SaaS dependency for their API tooling. Its git-native approach means API collection changes are reviewed in the same PR as the API implementation changes, which eliminates the drift that frequently develops between a Postman collection and the code it documents.

AsyncAPI for Event-Driven Systems

Teams building event-driven architectures face a contract management problem that OpenAPI does not address: defining the schema and semantics of asynchronous messages published to message brokers like Apache Kafka, RabbitMQ, or AWS SNS/SQS. The AsyncAPI specification fills this gap. It provides a formal, machine-readable contract for event-driven interfaces, enabling the same tooling ecosystem — documentation generation, client SDK generation, mock consumer generation — that OpenAPI provides for synchronous APIs.

The adoption pattern that works is treating AsyncAPI files as owned by the producing team and consumed by downstream teams via a shared schema registry. Confluent Schema Registry (for Kafka-based systems) and AWS Glue Schema Registry are the most commonly used implementations. The discipline required is enforcing schema validation at the producer before messages reach the broker — a practice that prevents the gradual schema drift that makes event-driven systems increasingly difficult to reason about over time.

# Example: Pydantic-based event schema with AsyncAPI-compatible structure
# events/payment_completed.py

from pydantic import BaseModel, Field
from datetime import datetime
from enum import Enum
from typing import Optional
import uuid


class PaymentMethod(str, Enum):
    CARD = "card"
    BANK_TRANSFER = "bank_transfer"
    WALLET = "wallet"


class PaymentCompletedEvent(BaseModel):
    """
    Published to: payments.completed
    Version: 2.1.0
    AsyncAPI spec: ./asyncapi/payment-events.yaml

    Breaking change policy: producers must support v2.x consumers for 90 days
    before deprecating fields. New required fields must be added as optional
    with a grace period.
    """

    event_id: str = Field(default_factory=lambda: str(uuid.uuid4()))
    event_version: str = Field(default="2.1.0", description="Schema version")
    occurred_at: datetime = Field(description="UTC timestamp of payment completion")
    payment_id: str = Field(description="Internal payment reference")
    amount_cents: int = Field(ge=1, description="Payment amount in smallest currency unit")
    currency: str = Field(min_length=3, max_length=3, description="ISO 4217 currency code")
    payment_method: PaymentMethod
    customer_id: str
    order_id: str
    merchant_id: str
    metadata: Optional[dict] = Field(default=None, description="Arbitrary consumer-defined metadata, max 4KB")

    # Pydantic v2 prefers `model_config = ConfigDict(json_schema_extra=...)`;
    # the class-based Config shown here still works on v2 but is deprecated.
    class Config:
        json_schema_extra = {
            "example": {
                "event_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
                "event_version": "2.1.0",
                "occurred_at": "2026-03-15T14:32:00Z",
                "payment_id": "pay_01HX2K9MNPQR3S4T",
                "amount_cents": 4999,
                "currency": "USD",
                "payment_method": "card",
                "customer_id": "cust_01HX2K9MNPQR3S4U",
                "order_id": "ord_01HX2K9MNPQR3S4V",
                "merchant_id": "merch_01HX2K9MNPQR3S4W",
            }
        }

Code Review and Async Collaboration

The Hidden Cost of Synchronous Code Review

Code review is one of the most important quality assurance activities in a software team, and also one of the most frequently misoptimised. Many teams treat code review as primarily a synchronous activity — a PR sits until a reviewer is available, a reviewer leaves comments, the author responds, the reviewer checks again. For PRs that involve cross-team changes or require input from multiple specialists, this model can introduce days of latency into changes that took hours to implement.

The practices that reduce this latency without sacrificing quality are largely process rather than tooling. Small, focused PRs are reviewed faster because they are cognitively cheaper to evaluate. Draft PRs that request early feedback on architecture before implementation is complete surface disagreements when reversing them is cheap. Explicit reviewer expectations in PR descriptions — "I need a review of the database schema in db/migrations/ specifically, implementation details can be async" — reduce the overhead of a reviewer trying to figure out what kind of review is wanted.

GitHub and Linear Integration Patterns

GitHub remains the dominant code review platform for the majority of engineering teams. Its value is not primarily the code review UI — which is competent but not exceptional — but the integration surface it provides. The GitHub Actions ecosystem means that CI/CD, security scanning, dependency updates, and deployment previews can all be triggered from the same platform where code review happens, reducing context switching for engineers.

The integration between GitHub and issue tracking systems — particularly Linear, which has established itself as the preferred tool for product and engineering issue management at many software companies — is worth investing in deliberately. Automating the connection between a Linear issue and a GitHub branch (Linear's git branch naming convention means issues are automatically linked to their PRs) eliminates the manual status-updating that consumes disproportionate time in teams that manage it by hand. When a PR is merged, the Linear issue should transition to an appropriate state automatically. When a PR is opened, reviewers should be able to navigate from the issue to the PR with a single click.

Conventional Commits and Automated Changelogs

One frequently overlooked dimension of code review collaboration is what happens after code is merged. Changelogs, release notes, and deployment summaries are valuable communication artifacts for cross-functional teams — product managers and customer success teams benefit from knowing what changed in a release — but they are time-consuming to maintain manually.

The Conventional Commits specification provides a structured commit message format (feat:, fix:, chore:, BREAKING CHANGE:) that enables automated changelog generation. Tools like semantic-release or changesets can parse commit history and generate changelogs, bump version numbers, and publish release notes automatically as part of a CI pipeline. For teams that invest a small upfront cost in adopting the convention, the return is a continuous, low-overhead communication channel between engineering and the rest of the organisation.
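Enforcement is cheap to automate. A sketch of the first-line check a commit-msg hook or CI step might run — the allowed type list below is the common convention and should match whatever your team actually agrees on:

```python
import re

# Sketch of a Conventional Commits check for the commit subject line,
# suitable for a git commit-msg hook or CI step. The type list is the
# common convention; adjust it to your team's agreed set.
CONVENTIONAL_RE = re.compile(
    r"^(feat|fix|chore|docs|refactor|test|perf|build|ci)"  # type
    r"(\([a-z0-9\-]+\))?"                                   # optional scope
    r"(!)?"                                                 # breaking-change marker
    r": .+"                                                 # description
)

def is_conventional(message: str) -> bool:
    """True if the first line of the commit message follows the convention."""
    return bool(CONVENTIONAL_RE.match(message.splitlines()[0]))
```

A commit-msg hook simply reads the message file, calls this check, and exits non-zero with a short explanation when it fails — which is all the enforcement the convention needs.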

Trade-offs and Pitfalls

The Integration Tax

Every tool in a team's stack imposes an integration tax — the ongoing cost of keeping the tool configured correctly, handling version updates, managing authentication, training new team members, and paying for licences. This tax is not proportional to the value a tool provides; a tool that solves a narrow problem well can impose the same integration tax as one that solves a broad one.

The most common failure mode in tool adoption is evaluating tools in isolation rather than as part of a system. A team that adds PagerDuty, Incident.io, Datadog, Backstage, and a new documentation platform in a six-month period may find that the integration costs alone consume the productivity gains each tool was supposed to provide. The discipline is to add one tool at a time, measure its impact before adopting the next, and be willing to remove tools that are not delivering value — even if they were expensive to adopt.

The Tool-Process Confusion

Tools do not create process; they support it. This distinction matters because teams frequently adopt a tool hoping it will solve a process problem, and are then surprised when the problem persists. Incident.io does not create a postmortem culture — it reduces the friction of running a postmortem for a team that already has the discipline. Backstage does not create service ownership — it makes ownership visible for a team that has already assigned it. Conventional Commits does not improve release communication — it automates the capture of communications a team was already making manually.

The implication is that tool adoption must be preceded by process definition. Before adopting an incident management tool, agree on severity levels, role assignments, and communication cadences. Before adopting a developer portal, agree on service ownership and documentation standards. Before adopting a structured commit format, agree on what information belongs in a commit message and who is responsible for enforcing it. The tool then makes the agreed process cheaper to execute — not as a substitute for having the process.

Notification Fatigue and the Ambient Awareness Trap

A persistent failure mode in collaborative tool stacks is notification overload. The same integrations that make information flow between systems also make interruptions flow between systems. An organisation that has instrumented everything — Slack notifications from GitHub, Jira, PagerDuty, Datadog, Backstage, and deployment pipelines simultaneously — has not improved information availability; it has created an environment where every notification is indistinguishable from noise, and critical alerts are missed because they arrive in the same channel as minor status updates.

The technical solution is tiered notification routing: critical production alerts go to PagerDuty with hard interruption (phone calls if necessary); important-but-non-urgent notifications go to a dedicated Slack channel that engineers review at defined intervals; informational updates (deployment completions, dependency updates, CI results on non-main branches) are available on demand but do not push notifications. This architecture requires deliberate configuration and ongoing governance — every new integration should be evaluated for which notification tier it belongs to before it is connected.
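The routing policy itself is small enough to state as data. A sketch, with event names and tier assignments as illustrative assumptions — the point is that every integration gets an explicit tier before it is connected, not that these particular mappings are canonical:

```python
from enum import Enum

class Tier(Enum):
    PAGE = "page"            # hard interruption via PagerDuty
    CHANNEL = "channel"      # dedicated Slack channel, reviewed at intervals
    ON_DEMAND = "on_demand"  # pull-based, no push notification

# Illustrative routing policy; event names and tiers are assumptions.
ROUTING_POLICY = {
    "production_alert_critical": Tier.PAGE,
    "production_alert_warning": Tier.CHANNEL,
    "ci_failed_main": Tier.CHANNEL,
    "ci_failed_branch": Tier.ON_DEMAND,
    "deploy_completed": Tier.ON_DEMAND,
    "dependency_update": Tier.ON_DEMAND,
}

def route(event_type: str) -> Tier:
    """Unknown event types default to CHANNEL so they get triaged, not dropped."""
    return ROUTING_POLICY.get(event_type, Tier.CHANNEL)
```

Keeping the policy as reviewable data means adding a new integration forces the tier conversation in a PR, rather than defaulting every new webhook to the loudest channel.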

Best Practices

Start With the Highest-Friction Workflow

Rather than auditing your entire toolstack simultaneously, identify the single workflow that creates the most friction for your team and address it directly. Ask engineers to name the one collaboration problem that costs them the most time in an average week. The most common answers — "I don't know who owns this service," "I can't find the runbook for this alert," "I don't know what the designer intended here," "I can't tell if my PR is blocked or just waiting" — each map to a specific tool category and a specific set of practices.

Solving one high-friction workflow completely is more valuable than partially addressing five. It also builds the organisational muscle for tool adoption — teams that have successfully navigated one adoption cycle are better at evaluating, rolling out, and maintaining tools than those attempting several simultaneously.

Treat Documentation as Engineering Work

Documentation that is treated as a separate, lower-status activity from engineering work tends to be poor and neglected. Teams that have the best documentation cultures treat writing and maintaining it as a first-class engineering responsibility. This means including documentation tasks in sprint planning, including documentation quality in code review, and evaluating engineers partly on the clarity and currency of their documentation contributions.

The operational implication is concrete: a PR that changes a service's behaviour without updating the relevant runbook, README, or API spec should not pass review. This norm must be established explicitly and enforced consistently — it cannot be aspirational. Once it is established, it tends to be self-reinforcing because engineers begin to experience the benefit of good documentation during incidents and onboarding, which creates motivation to maintain it.

Measure Collaboration Tool Effectiveness

Tool adoption without measurement is faith-based engineering. The metrics that matter for collaboration tooling are not vanity metrics (number of Confluence pages created, number of Slack integrations active) but outcome metrics: mean time to resolve incidents, time from PR open to merge, time for a new engineer to make their first production deployment, number of cross-team coordination issues raised in retrospectives.

Baseline these metrics before adopting a tool and track them after. A tool that does not improve at least one measurable outcome within a reasonable time horizon (typically 60–90 days) should be reviewed for whether it is being used correctly, whether it requires additional process changes to be effective, or whether it should be removed. Engineering organisations accumulate tool debt exactly as they accumulate technical debt — through a series of individually justified decisions that collectively impose a cost that becomes visible only much later.
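Baselining does not require a metrics platform to start. A sketch of computing one such baseline — median PR open-to-merge time — from timestamp pairs exported from your source-control API; the export format here is an assumption:

```python
from datetime import datetime
from statistics import median

# Sketch: baseline one outcome metric (PR open-to-merge time) from
# (opened_at, merged_at) ISO-8601 timestamp pairs, e.g. exported from
# a source-control API. The input shape is an illustrative assumption.
def median_open_to_merge_hours(prs: list[tuple[str, str]]) -> float:
    """Median hours from PR open to merge across the supplied PRs."""
    durations = [
        (datetime.fromisoformat(merged) - datetime.fromisoformat(opened)).total_seconds() / 3600
        for opened, merged in prs
    ]
    return median(durations)
```

Median is preferred over mean here because a handful of long-lived PRs would otherwise dominate the baseline and mask whether typical review latency actually improved.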

80/20 Insight

If you implement nothing else from this article, three changes will deliver the majority of the available benefit of improved cross-functional collaboration tooling:

The first is registering all production services in a central developer portal (Backstage or equivalent) with accurate ownership, dependency, and documentation metadata. This single change eliminates more coordination overhead than any other investment in the knowledge management category, because it answers the question "who owns this and where is its documentation?" with a single authoritative lookup.

The second is embedding runbook links in every production alert. Engineers responding to incidents should never have to navigate to find relevant documentation — the documentation should be one click away from the alert that triggered the response. This requires a small investment in alert template configuration and a policy that runbooks must exist before a service can be marked production-ready.

The third is adopting Conventional Commits with automated changelog generation. This investment is low (one afternoon to configure, one sprint to establish the habit), its impact is disproportionate (automated release notes, semver versioning, improved git history legibility), and it creates a communication channel between engineering and the rest of the organisation that operates at zero marginal cost per release.

Key Takeaways

Five steps you can apply immediately to improve cross-functional collaboration in your engineering team:

  1. Audit your current documentation locations — list every place a developer must look to understand a service, and identify which can be eliminated, merged, or automated. The goal is one authoritative location per information type.

  2. Link runbooks from monitoring alerts — pick your most critical service, identify its top three alert types, and add runbook links to each alert notification. Measure incident resolution time for those alert types before and after.

  3. Introduce a PR description template — add a .github/PULL_REQUEST_TEMPLATE.md that asks authors to specify what kind of review they need, which components are most important to review, and whether there are dependencies on other PRs or external teams.

  4. Assign explicit service ownership in your monitoring platform — every service should have a named team and a documented escalation path. Any alert that reaches PagerDuty without clear ownership information is a configuration failure.

  5. Adopt Conventional Commits on your highest-change-volume repository — install a commit-msg git hook or CI check to enforce the format, and configure a changelog generator to produce release notes automatically. Share the first auto-generated changelog with your product and customer success teams and ask if it meets their needs.

Conclusion

The collaboration tools that work in 2026 are not the ones with the most features or the most impressive demos — they are the ones that reduce the number of places a developer must look to do their job, enforce good practices structurally rather than aspirationally, and integrate cleanly into the workflows that already exist.

The consistent pattern across successful tool adoptions is that the tool supports a process that was already defined, rather than being expected to create the process itself. Developer portals work when service ownership is already assigned. Incident management tools work when severity levels and role assignments are already agreed. Design collaboration tools work when the component library already exists and is maintained. The tooling investment is highest-leverage when it follows organisational clarity, not when it is expected to produce it.

The opposite pattern — adopting tools in the hope that they will solve organisational problems — produces the tool sprawl that characterises engineering organisations at mid-scale. The antidote is deliberate, sequential adoption with explicit success metrics and the willingness to remove tools that do not earn their integration tax. Engineering organisations that apply this discipline to their collaboration toolstack will find, consistently, that they need fewer tools than they thought, and that those tools work better for having been chosen carefully.
