Introduction
The promise of autonomous AI agents—systems capable of breaking down complex tasks, making decisions, and executing multi-step workflows without constant human intervention—has captivated the software engineering community. As organizations deploy these agents into production environments, however, a less discussed challenge has emerged: the economics of autonomy. Unlike traditional software that consumes fixed computational resources, AI agents operating through large language models (LLMs) incur costs with every token processed. When agents iterate, backtrack, or explore solution spaces without constraints, token consumption can spiral unpredictably, turning what should be an efficiency gain into a financial liability.
This article explores an emerging practice at the intersection of FinOps and software architecture: using formal specifications to create deterministic roadmaps that guide agent behavior while controlling costs. We'll examine how spec-driven development—long established in traditional software engineering—can be adapted to constrain agent autonomy in productive ways, reducing wasteful exploration while preserving the problem-solving capabilities that make agents valuable. The techniques discussed are based on established software engineering principles applied to the novel challenge of managing autonomous AI systems in production.
The Cost Problem with Autonomous AI Agents
The economic model of AI agents differs fundamentally from traditional software systems. A conventional microservice might handle thousands of requests for a predictable infrastructure cost, scaling primarily on compute and memory. An autonomous agent, by contrast, generates costs proportional to the tokens it processes—both input context and generated output—across potentially unbounded reasoning cycles. This creates a unique challenge: the very autonomy that makes agents valuable also introduces cost unpredictability.
Consider a typical autonomous agent tasked with debugging a production issue. Without constraints, the agent might examine logs, hypothesize causes, generate test cases, evaluate results, refine hypotheses, and repeat this cycle multiple times. Each iteration consumes tokens: reading context, reasoning about the problem, generating code or queries, and evaluating outcomes. In a worst-case scenario, the agent might enter recursive loops—repeatedly exploring similar solution paths or cycling through increasingly marginal refinements. A task that a human engineer might complete with focused analysis becomes an expensive token-burning exercise. Real-world deployments have reported agents consuming millions of tokens on tasks that should require thousands, with costs scaling linearly with this inefficiency.
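To make that linear scaling concrete, here is a minimal sketch of the arithmetic. The per-iteration token counts and the blended price are illustrative assumptions, not measured rates:

```python
# Illustrative cost model: each reasoning iteration re-reads context and
# generates output, so total cost scales linearly with iteration count.
PRICE_PER_1K_TOKENS = 0.01  # assumed blended input/output price in USD

def loop_cost(context_tokens: int, output_tokens: int, iterations: int) -> float:
    """Total cost when every iteration pays for context plus generation."""
    total_tokens = iterations * (context_tokens + output_tokens)
    return total_tokens * PRICE_PER_1K_TOKENS / 1000

# Five focused iterations versus fifty exploratory ones: a 10x cost gap.
focused = loop_cost(context_tokens=4000, output_tokens=1000, iterations=5)
exploratory = loop_cost(context_tokens=4000, output_tokens=1000, iterations=50)
```

The model is deliberately simple (real agents also grow their context as they iterate, which makes the scaling worse than linear), but it captures why bounding iteration count bounds cost.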
The root cause isn't the agent's capability but the absence of structure in its problem-solving process. LLM-based agents excel at pattern matching and generation but lack inherent cost awareness. They optimize for task completion, not resource efficiency. When given open-ended instructions like "fix the performance issue" or "implement the feature," agents default to exploratory behavior patterns that mirror how humans might approach unfamiliar problems—but without the implicit cost consciousness that guides human efficiency. The result is that agent autonomy without economic guardrails becomes a liability rather than an asset.
This challenge intensifies at scale. An organization running dozens or hundreds of agent workflows daily can quickly accumulate six-figure monthly API costs. Unlike traditional cloud infrastructure where autoscaling and resource limits provide predictable cost ceilings, agent costs are bounded only by the complexity of tasks and the efficiency of their execution paths. This creates a new category of technical debt: architectural patterns that enable functionally correct agent behavior while incurring unsustainable operational costs. The question becomes not whether agents can solve problems autonomously, but whether they can do so within economic constraints that justify their deployment.
Understanding Spec-Driven Agent Architecture
Specification-driven development provides a solution to the cost problem by introducing structured constraints that guide agent behavior along deterministic paths. At its core, a specification is a formal or semi-formal description of what a system should do, the interfaces it should implement, and the constraints it must respect. In traditional software engineering, specs range from API contracts and type definitions to formal methods like Z notation or TLA+. When applied to autonomous agents, specifications serve a dual purpose: they define the problem space precisely, and they create a roadmap that reduces exploratory overhead.
The fundamental insight is that most tasks assigned to AI agents in production environments are not truly open-ended. They operate within well-defined domains—code generation follows language syntax and project conventions, data analysis works with known schemas, and system troubleshooting has established diagnostic procedures. By codifying these constraints into specifications, we provide agents with structured context that narrows the solution space from the outset. This is conceptually similar to how type systems prevent entire classes of programming errors by constraining what operations are valid, except specifications constrain the agent's reasoning process rather than code execution.
A spec-driven agent architecture typically consists of three layers: the specification layer, which defines task requirements, constraints, and success criteria; the planning layer, where the agent generates a structured execution plan based on the spec; and the execution layer, where the agent carries out the plan with checkpoints that validate conformance to the specification. This architectural separation ensures that agents don't simply start generating tokens in response to prompts but instead engage in a structured problem-solving process with clear phases and validation gates.
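A minimal sketch of that three-layer flow might look like the following, where `Spec`, the planner, and the checkpoint functions are hypothetical placeholders rather than a real framework:

```python
# Sketch of the specification -> planning -> execution layering described
# above. The concrete types and callbacks are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Spec:
    requirements: list
    constraints: dict
    success_criteria: list

def run_agent(spec, plan_fn, execute_step, validate):
    """Specification layer feeds a bounded planning phase; execution then
    follows the plan with a validation gate after every step."""
    plan = plan_fn(spec)            # planning layer: bounded, spec-driven
    results = []
    for step in plan:               # execution layer
        result = execute_step(step)
        if not validate(result, spec):   # checkpoint: conformance to spec
            raise RuntimeError(f"Checkpoint failed at step: {step}")
        results.append(result)
    return results
```

The point of the structure is that a non-conformant step fails at its checkpoint instead of triggering further token-consuming revision.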
The cost benefits emerge from eliminating redundant exploration. Without specifications, an agent might generate multiple solution approaches, implement each partially, evaluate them, and iteratively refine—consuming tokens at every step. With specifications, the agent begins with a clear understanding of requirements, constraints, and acceptable solutions. This frontloads the thinking process into a structured planning phase that, while still consuming tokens, does so in a bounded and predictable way. The subsequent execution phase follows a deterministic path with validation checkpoints, preventing the agent from drifting into expensive revisions or exploratory dead ends.
Consider the difference in token economics. An unconstrained agent debugging a performance issue might examine hundreds of log lines, generate several hypotheses, implement diagnostic code for each, and iterate through this cycle multiple times—potentially consuming 50,000 to 100,000 tokens. A spec-driven agent receives a specification that defines: the performance metric to optimize, the acceptable investigation scope (specific services or time windows), the diagnostic procedure to follow, and the format for results. This specification might add 2,000 tokens to the initial context but enables the agent to generate a focused execution plan that completes in 15,000 to 20,000 tokens total. The specification cost is amortized across the entire task, yielding a 60-75% reduction in token consumption.
Implementation Patterns and Examples
Implementing spec-driven agent architectures requires translating abstract principles into concrete patterns that integrate with existing development workflows. The following patterns represent proven approaches for different categories of agent tasks, each balancing specification overhead against cost reduction benefits.
Pattern 1: Schema-Based Task Specifications
For agents performing data operations—ETL workflows, analysis pipelines, or report generation—schema-based specifications provide a lightweight yet effective constraint mechanism. The specification defines input data schemas, transformation rules, output formats, and validation criteria using standard schema languages like JSON Schema or Protocol Buffers.
```typescript
interface DataAnalysisSpec {
  task: string;
  inputSchema: {
    source: string;
    schema: JSONSchema;
    sampleData?: any[];
  };
  transformations: {
    operations: string[];
    constraints: string[];
  };
  outputFormat: {
    schema: JSONSchema;
    validationRules: string[];
  };
  costConstraints: {
    maxTokens: number;
    timeoutSeconds: number;
  };
}

class SpecDrivenDataAgent {
  async executeTask(spec: DataAnalysisSpec): Promise<AnalysisResult> {
    // Phase 1: Validate specification completeness (low token cost)
    const validationResult = await this.validateSpec(spec);
    if (!validationResult.valid) {
      throw new SpecificationError(validationResult.errors);
    }

    // Phase 2: Generate execution plan from spec (bounded token cost)
    const plan = await this.generatePlan(spec);
    const estimatedCost = this.estimateTokenCost(plan);
    if (estimatedCost > spec.costConstraints.maxTokens) {
      throw new CostExceededError(
        `Estimated ${estimatedCost} exceeds limit ${spec.costConstraints.maxTokens}`
      );
    }

    // Phase 3: Execute plan with checkpoints
    return await this.executePlanWithValidation(plan, spec);
  }

  private async generatePlan(spec: DataAnalysisSpec): Promise<ExecutionPlan> {
    const prompt = `Given this data analysis specification:

Input: ${JSON.stringify(spec.inputSchema)}
Required transformations: ${spec.transformations.operations.join(', ')}
Output format: ${JSON.stringify(spec.outputFormat)}

Generate a step-by-step execution plan that:
1. Validates input data against schema
2. Applies transformations in sequence
3. Validates output against schema and rules
4. Minimizes redundant operations

Return plan as JSON with steps, dependencies, and validation checkpoints.`;

    // This prompt is focused and bounded by the spec, resulting in
    // deterministic planning output rather than exploratory generation
    return await this.llm.generatePlan(prompt);
  }
}
```
This pattern reduces token waste by constraining the agent's context to relevant schemas and explicit transformation requirements. Instead of the agent inferring what "analyze customer data" means through multi-turn conversation, the specification provides unambiguous input-output mappings. The cost constraint in the spec itself creates a hard limit, forcing the agent to generate efficient plans or fail fast rather than consuming excessive tokens.
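For illustration, the fail-fast validation phase can be as simple as a hand-rolled completeness check over the spec's required fields before any LLM call is made. The field names mirror the hypothetical `DataAnalysisSpec` above:

```python
# Phase 1 sketch: reject incomplete specifications before spending any
# tokens on planning. Field names follow the hypothetical spec above.
def validate_spec(spec: dict) -> list:
    """Return a list of human-readable errors; empty list means valid."""
    errors = []
    if not isinstance(spec.get("task"), str):
        errors.append("task must be a string")
    cc = spec.get("costConstraints", {})
    if not isinstance(cc.get("maxTokens"), int) or cc.get("maxTokens", 0) <= 0:
        errors.append("costConstraints.maxTokens must be a positive integer")
    return errors
```

In practice this check would be generated from a JSON Schema rather than written by hand, but the principle is the same: specification defects surface at zero token cost.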
Pattern 2: State Machine Specifications for Multi-Step Workflows
For agents orchestrating complex workflows—deployment pipelines, incident response procedures, or multi-stage code generation—state machine specifications provide deterministic execution paths with clear transitions and rollback points.
```python
from enum import Enum
from typing import Any, Dict, List
from dataclasses import dataclass


class WorkflowState(Enum):
    PLANNING = "planning"
    VALIDATION = "validation"
    EXECUTION = "execution"
    VERIFICATION = "verification"
    COMPLETION = "completion"
    ROLLBACK = "rollback"


@dataclass
class StateTransitionSpec:
    from_state: WorkflowState
    to_state: WorkflowState
    condition: str
    max_tokens: int
    validation_checkpoint: str


@dataclass
class WorkflowSpec:
    name: str
    initial_state: WorkflowState
    final_states: List[WorkflowState]
    transitions: List[StateTransitionSpec]
    global_constraints: Dict[str, Any]
    rollback_conditions: List[str]


class StateMachineAgent:
    def __init__(self, spec: WorkflowSpec):
        self.spec = spec
        self.current_state = spec.initial_state
        self.token_budget = spec.global_constraints.get('max_total_tokens', 100000)
        self.tokens_consumed = 0

    async def execute_workflow(self, context: Dict) -> "WorkflowResult":
        """Execute workflow following state machine specification"""
        execution_log = []

        while self.current_state not in self.spec.final_states:
            # Get valid transitions from current state
            valid_transitions = self._get_valid_transitions(self.current_state)
            if not valid_transitions:
                return await self._handle_rollback("No valid transitions available")

            # Select transition based on conditions (deterministic, not exploratory)
            transition = await self._select_transition(valid_transitions, context)

            # Execute transition with token budget enforcement
            try:
                result = await self._execute_transition(transition, context)
                execution_log.append(result)
                self.tokens_consumed += result.tokens_used

                # Check token budget
                if self.tokens_consumed > self.token_budget:
                    return await self._handle_rollback(
                        f"Token budget exceeded: {self.tokens_consumed}/{self.token_budget}"
                    )

                # Validate transition completed successfully
                if not await self._validate_checkpoint(transition.validation_checkpoint, result):
                    return await self._handle_rollback(
                        f"Validation failed at checkpoint: {transition.validation_checkpoint}"
                    )

                # Transition to next state
                self.current_state = transition.to_state
            except Exception as e:
                return await self._handle_rollback(f"Transition failed: {str(e)}")

        return WorkflowResult(
            success=True,
            final_state=self.current_state,
            tokens_consumed=self.tokens_consumed,
            execution_log=execution_log
        )

    async def _select_transition(
        self,
        transitions: List[StateTransitionSpec],
        context: Dict
    ) -> StateTransitionSpec:
        """Select next transition based on spec conditions, not open-ended reasoning"""
        # Build focused prompt from specification
        conditions_context = "\n".join([
            f"- Transition to {t.to_state.value} if: {t.condition}"
            for t in transitions
        ])

        prompt = f"""Given current workflow state and context, select the appropriate transition.

Available transitions:
{conditions_context}

Current context:
{self._serialize_relevant_context(context)}

Select the transition that matches current conditions. Return only the target state name.
Do not explain or explore alternatives."""

        # This prompt is tightly scoped by the spec, producing deterministic selection
        # rather than exploratory reasoning about which path might be best
        selected_state = await self.llm.complete(prompt, max_tokens=50)
        return next(t for t in transitions if t.to_state.value == selected_state.strip())
```
The state machine pattern eliminates a major source of token waste: agent uncertainty about what to do next. Each state has explicitly defined transitions with clear conditions, removing the need for the agent to reason about the overall workflow structure. The agent operates within the deterministic framework provided by the specification, consuming tokens only for state-specific operations rather than meta-reasoning about task structure.
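One further cost lever, not shown in the pattern above but a natural extension of it: if transition conditions are expressed as machine-evaluable predicates rather than free text, unambiguous selections need no model call at all. A sketch, under that assumption:

```python
# Zero-token fast path (an assumption layered on the pattern above): when
# exactly one transition's predicate matches the context, select it
# deterministically; only ambiguous cases fall back to the LLM.
def select_transition_cheaply(transitions, context, llm_fallback):
    """transitions: list of (target_state, predicate) pairs."""
    matches = [target for target, pred in transitions if pred(context)]
    if len(matches) == 1:
        return matches[0]                       # no tokens consumed
    return llm_fallback(transitions, context)   # ambiguous: defer to model
```

For workflows with mostly mechanical conditions (status codes, thresholds, boolean flags), this can remove the selection prompt from the majority of transitions.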
Pattern 3: Contract-Driven Code Generation
For agents generating code—implementing features, fixing bugs, or refactoring—contract-based specifications that define interfaces, types, test cases, and architectural constraints provide a structured generation framework.
```typescript
interface CodeGenerationContract {
  task: string;
  targetLanguage: string;
  interfaces: {
    name: string;
    methods: MethodSignature[];
    documentation: string;
  }[];
  typeConstraints: {
    inputTypes: TypeDefinition[];
    outputTypes: TypeDefinition[];
    genericConstraints?: string[];
  };
  testCases: {
    description: string;
    input: any;
    expectedOutput: any;
    constraints?: string[];
  }[];
  architecturalConstraints: {
    patterns: string[];
    prohibitions: string[];
    dependencies: string[];
  };
  qualityGates: {
    typeCheck: boolean;
    testCoverage: number;
    linting: boolean;
  };
}

class ContractDrivenCodeAgent {
  async generateCode(contract: CodeGenerationContract): Promise<GeneratedCode> {
    // Phase 1: Validate contract completeness
    this.validateContract(contract);

    // Phase 2: Generate implementation plan (not code yet)
    const implementationPlan = await this.planImplementation(contract);

    // Phase 3: Generate code following plan and contract
    const code = await this.generateFromContract(contract, implementationPlan);

    // Phase 4: Validate against contract (fail fast if non-conformant)
    const validation = await this.validateAgainstContract(code, contract);
    if (!validation.passes) {
      // Single refinement pass allowed, not open-ended iteration
      return await this.refineCode(code, contract, validation.failures);
    }

    return code;
  }

  private async planImplementation(
    contract: CodeGenerationContract
  ): Promise<ImplementationPlan> {
    const prompt = `Generate implementation plan for the following contract:

Interfaces to implement:
${contract.interfaces.map(i => this.formatInterface(i)).join('\n\n')}

Type constraints:
${this.formatTypeConstraints(contract.typeConstraints)}

Must pass these test cases:
${contract.testCases.map(tc => `- ${tc.description}`).join('\n')}

Architectural requirements:
- Use patterns: ${contract.architecturalConstraints.patterns.join(', ')}
- Avoid: ${contract.architecturalConstraints.prohibitions.join(', ')}

Generate a structured implementation plan with:
1. Main classes/functions to implement
2. Dependencies between components
3. Order of implementation
4. Validation checkpoints

Return as JSON. Do not generate code yet.`;

    return await this.llm.generatePlan(prompt);
  }
}
```
This pattern leverages the fact that most code generation tasks in production aren't creative exercises—they're implementations of known interfaces within established architectural patterns. By specifying interfaces, types, tests, and constraints upfront, we eliminate the exploratory phase where agents might generate multiple implementation approaches, compare them, and iterate. The agent generates a plan conformant to the contract, implements it, and validates once. This typically reduces token consumption by 50-70% compared to unconstrained "implement feature X" prompts.
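The contract's test cases also give the validation phase a mechanical form: run the candidate implementation against each case and collect failures. A minimal sketch, with the case structure mirroring the hypothetical contract above:

```python
# Sketch of contract validation: execute each test case from the contract
# against a candidate implementation and report mismatches.
def validate_against_contract(fn, test_cases: list) -> list:
    """Return a list of failure records; empty list means conformant."""
    failures = []
    for case in test_cases:
        actual = fn(case["input"])
        if actual != case["expectedOutput"]:
            failures.append({
                "description": case["description"],
                "expected": case["expectedOutput"],
                "actual": actual,
            })
    return failures

# Illustrative contract test cases for a doubling function.
contract_tests = [
    {"description": "doubles positives", "input": 2, "expectedOutput": 4},
    {"description": "handles zero", "input": 0, "expectedOutput": 0},
]
```

Because the failures are structured, the single permitted refinement pass can receive exactly the mismatches rather than the whole conversation history, keeping that pass cheap as well.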
Trade-offs and Considerations
Adopting spec-driven agent architectures introduces engineering trade-offs that organizations must evaluate against their specific use cases and cost sensitivities. The primary trade-off is upfront investment in specification development versus long-term token cost savings. Creating detailed specifications requires engineer time—writing schemas, defining state machines, or documenting contracts. For one-off tasks or rapidly changing requirements, this investment may exceed the token costs saved. The economics favor spec-driven approaches when tasks are repeated, when agents operate at scale, or when unconstrained token consumption poses financial risk.
The second consideration is reduced agent flexibility. Specifications constrain agents to predetermined solution paths, which optimizes for known problem types but may underperform on novel scenarios requiring creative problem-solving. An agent constrained by a state machine specification cannot dynamically discover a more efficient workflow path that wasn't anticipated when the spec was written. This trade-off mirrors the broader tension in software architecture between flexibility and optimization—highly optimized systems excel in their designed domains but adapt poorly to changing requirements. Organizations must assess whether their agent use cases are sufficiently stable and well-understood to benefit from specification-based constraints, or whether they require the adaptability that comes with higher token costs.
A third consideration is the complexity of specification maintenance. As systems evolve, specifications must be updated to reflect new requirements, API changes, or architectural patterns. This creates an ongoing maintenance burden similar to maintaining comprehensive test suites or API documentation. In rapidly evolving codebases, specification drift—where specs become outdated relative to actual system behavior—can cause agents to generate incorrect or suboptimal solutions. Organizations need processes for keeping specifications synchronized with system evolution, which may require tooling for automated spec generation from code or continuous validation of spec-to-system alignment.
There's also a risk of over-specification creating brittleness. Extremely detailed specifications that prescribe implementation details rather than interfaces and constraints can make agents fragile to minor environmental changes. For example, a code generation contract that specifies exact library versions or file paths may cause agent failures when dependencies update. The art of effective specification-writing for agents involves finding the right level of abstraction—detailed enough to constrain wasteful exploration, but abstract enough to accommodate reasonable variation in implementation approaches and environmental conditions.
Finally, organizations must consider the measurement and attribution challenge. Token costs are easy to measure in aggregate but difficult to attribute to specific agent inefficiencies. Without detailed telemetry that tracks token consumption across specification phases, planning, execution, and validation, it's challenging to demonstrate the ROI of spec-driven approaches empirically. Effective adoption requires instrumentation infrastructure that measures not just total token usage but the breakdown across agent reasoning phases, enabling teams to identify which specifications are providing cost benefits and which need refinement.
Best Practices for Spec-Driven Agent Development
Successful implementation of spec-driven agent architectures requires disciplined engineering practices that balance specification rigor with practical development velocity. The following best practices emerge from production deployments and established software engineering principles adapted to the agent context.
Start with Schema-Based Specifications for Data-Intensive Tasks. For agents working with structured data—analysis, transformation, or migration tasks—JSON Schema or similar schema languages provide the highest ROI with the lowest specification overhead. These specifications integrate naturally with existing data validation tooling and require minimal additional infrastructure. Begin by retrofitting existing data workflows with schema specifications to quantify token savings before investing in more complex specification approaches. This establishes baselines and builds organizational capability incrementally.
Version and Test Specifications Like Production Code. Specifications should be version-controlled, peer-reviewed, and tested for completeness and correctness before deployment. Treat specifications as first-class artifacts in your CI/CD pipeline with validation stages that ensure specifications are syntactically correct, semantically meaningful, and compatible with target agent systems. Implement automated testing that runs agents against specifications using synthetic inputs to verify that specifications produce deterministic, cost-efficient agent behavior. This prevents specification defects from causing agent failures or unexpected cost escalations in production.
Implement Progressive Specification Adoption. Rather than attempting to specify all agent tasks comprehensively upfront, adopt a progressive approach that prioritizes high-cost or high-frequency tasks. Instrument agent systems to measure token consumption per task type, then create specifications for the tasks in the top 20% of token usage—this typically addresses 60-80% of total costs due to power-law distributions in agent workloads. Gradually expand specification coverage as specifications prove their value and as patterns emerge that can be templatized across similar tasks.
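Once per-task-type token usage is instrumented, the prioritization step is mechanical. A sketch with illustrative figures:

```python
# Rank task types by total token spend and take the top slice as the
# first candidates for specification (data and threshold are illustrative).
def specification_candidates(usage: dict, top_fraction: float = 0.2) -> list:
    """usage maps task_type -> total tokens; returns highest-cost types."""
    ranked = sorted(usage, key=usage.get, reverse=True)
    count = max(1, round(len(ranked) * top_fraction))
    return ranked[:count]

usage = {"debug": 900_000, "report": 450_000, "etl": 120_000,
         "lint": 40_000, "docs": 15_000}
```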
Build Specification Libraries and Templates. As you develop specifications for common task patterns—REST API interactions, database operations, code generation within specific frameworks—extract these into reusable templates. Create a specification library that engineers can instantiate for new tasks rather than writing specifications from scratch. This approach amortizes the specification development cost across multiple uses and ensures consistency in how agents are constrained across your organization. Template libraries also serve as documentation of agent-friendly task structures.
Establish Cost Budgets and Fail-Fast Mechanisms. Every specification should include explicit token budgets and timeout constraints that cause agents to fail gracefully when they approach economic limits. Implement monitoring that alerts when agents consistently hit budget limits, as this indicates either underspecified tasks or genuinely complex problems that may require different approaches. Failing fast on expensive tasks prevents runaway costs and creates feedback loops that drive specification improvement.
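A budget guard that raises before the limit is breached, rather than after, is a small amount of code. The names here are illustrative, not a specific framework's API:

```python
# Fail-fast token budget guard: charge() refuses the operation that would
# cross the limit, so the budget is a hard ceiling rather than an alert.
class TokenBudgetExceeded(RuntimeError):
    pass

class TokenBudget:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.consumed = 0

    def charge(self, tokens: int) -> None:
        """Record usage; raise before the budget would be breached."""
        if self.consumed + tokens > self.max_tokens:
            raise TokenBudgetExceeded(
                f"{self.consumed + tokens} would exceed limit {self.max_tokens}")
        self.consumed += tokens
```

Wrapping every LLM call in a `charge()` on its estimated cost turns the specification's `maxTokens` field into an enforced invariant rather than documentation.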
Separate Planning from Execution Token Budgets. Allocate token budgets separately for the planning phase versus execution phase, typically with a 20/80 or 30/70 split. This ensures agents invest sufficient tokens in generating good execution plans based on specifications while preventing excessive planning-phase optimization. Monitor the planning-to-execution ratio for different task types to identify where specifications need more detail (high execution variation) or where planning is over-elaborate (high planning cost with low execution variance).
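The split itself is trivial to encode, which makes it easy to standardize across task types. The 30/70 default below is one of the ratios suggested above:

```python
# Divide an overall token budget into planning and execution tranches.
def split_budget(total_tokens: int, planning_fraction: float = 0.3) -> tuple:
    """Return (planning_budget, execution_budget); the two always sum
    to total_tokens because execution takes the remainder."""
    planning = int(total_tokens * planning_fraction)
    return planning, total_tokens - planning
```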
Integrate Specifications with Observability Infrastructure. Instrument agent systems to emit structured logs and metrics that correlate token consumption with specification conformance. Track metrics like tokens-per-task-type, specification-validation-failures, and plan-execution-deviation. This telemetry enables continuous optimization of specifications based on empirical cost and quality data. Use distributed tracing to visualize agent execution paths relative to specification-defined state machines or workflows, making it easier to identify where agents deviate from intended paths.
Design Specifications for Human Readability. While specifications are consumed by agents, they must be maintained by engineers. Use clear naming, documentation, and structure that makes specifications understandable without deep context. This facilitates peer review, onboarding, and debugging. Consider specifications as a form of documentation that describes not just what agents should do, but the reasoning behind task structure and constraints. Well-documented specifications become organizational knowledge artifacts that outlive specific agent implementations.
Measuring ROI and Cost Optimization
Quantifying the return on investment from spec-driven agent architectures requires establishing baseline metrics before specification adoption and tracking comparative performance afterward. The primary metric is token cost per task completion, measured across statistically significant samples of similar tasks. For meaningful comparison, segment tasks by type and complexity, as comparing token usage across dissimilar tasks obscures whether cost changes result from specifications or task mix variation.
Begin by instrumenting existing agent systems to capture: task type, total tokens consumed (input plus output), number of reasoning iterations, task completion time, and success rate. Run agents on representative task sets for a baseline period—typically two to four weeks for sufficient sample size—to establish distributions of token consumption per task type. These baselines reveal both central tendencies (median token usage) and variation (90th percentile costs), which specifications should aim to reduce.
After implementing specifications for a task type, measure the same metrics under identical conditions. Effective specifications typically show: 40-70% reduction in median token consumption, 60-80% reduction in 90th percentile costs (by eliminating exploratory outliers), and 20-40% reduction in task completion time. The quality metrics—task success rate and solution correctness—should remain constant or improve, as specifications that reduce costs while decreasing quality indicate over-constraint. If success rates decline, specifications likely over-constrain valid solution approaches and require relaxation.
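These comparisons can be computed directly from the instrumented samples with the standard library. The distributions below are illustrative, not measured data:

```python
# Compare baseline vs. spec-driven token distributions on the two metrics
# discussed above: median reduction and 90th-percentile reduction.
from statistics import median, quantiles

def p90(samples: list) -> float:
    """90th percentile via statistics.quantiles (default exclusive method)."""
    return quantiles(samples, n=10)[-1]

# Illustrative per-task token counts for one task type.
baseline = [30_000, 45_000, 50_000, 80_000, 120_000,
            60_000, 55_000, 90_000, 40_000, 200_000]
spec_driven = [16_000, 17_500, 18_000, 18_500, 19_000,
               16_500, 17_000, 20_000, 21_000, 22_000]

median_cut = 1 - median(spec_driven) / median(baseline)
p90_cut = 1 - p90(spec_driven) / p90(baseline)
```

Note the pattern in the illustrative numbers: the p90 reduction exceeds the median reduction, because specifications mainly eliminate the exploratory outliers in the tail of the distribution.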
Calculate the total cost of ownership by combining token cost savings with specification development and maintenance costs. Specification development for a moderately complex task type typically requires 4-8 engineer hours initially, plus 1-2 hours monthly for maintenance and updates. Token savings compound over the specification's lifetime across all task executions. The break-even point depends on task frequency: high-frequency tasks (daily or more) typically reach positive ROI within one to two weeks, while low-frequency tasks (weekly or monthly) may require several months to justify specification investment.
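A worked break-even sketch using the structure above; the engineer hourly rate and per-token price are illustrative assumptions:

```python
# Break-even point: how many task executions must benefit from a spec
# before token savings repay the specification development cost.
def break_even_runs(dev_hours: float, hourly_rate: float,
                    tokens_saved_per_run: float,
                    price_per_million_tokens: float) -> float:
    dev_cost = dev_hours * hourly_rate
    saving_per_run = tokens_saved_per_run * price_per_million_tokens / 1_000_000
    return dev_cost / saving_per_run

# e.g. 6 hours at $150/h, saving 60,000 tokens per run at $30 per 1M tokens:
runs = break_even_runs(6, 150, 60_000, 30)  # 500 runs
```

At 500 runs, a task type executed around 50 times per day across an organization breaks even in roughly ten days, consistent with the one-to-two-week figure for high-frequency workloads; a weekly task with the same economics would take years, which is why low-frequency tasks rarely justify the investment on token savings alone.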
Beyond direct cost metrics, track operational benefits that contribute to ROI but are harder to quantify: reduced cost unpredictability improves budget forecasting, deterministic agent behavior simplifies debugging and root cause analysis, and specification libraries reduce time-to-deployment for new agent tasks. These second-order benefits often justify specification investments even when direct token savings alone provide marginal ROI.
Conclusion
The emergence of autonomous AI agents as production software components introduces a novel economic challenge: systems whose costs are proportional to their reasoning process rather than deterministic computational resources. As organizations scale agent deployments, unconstrained autonomy becomes an economic liability, with agents consuming excessive tokens through exploratory reasoning, recursive loops, and unbounded iteration. The solution lies not in reducing agent capabilities but in applying established software engineering discipline—specification-driven development—to create structured constraints that guide agent behavior along cost-efficient paths.
Spec-driven agent architectures leverage schemas, state machines, and contracts to provide agents with deterministic roadmaps that reduce exploratory overhead while preserving problem-solving capability. By frontloading task structure into specifications, organizations achieve 40-70% token cost reductions on typical workloads while gaining predictability, debuggability, and operational stability. The approach requires upfront investment in specification development and ongoing maintenance, making it best suited for repeated tasks, high-frequency workloads, or scenarios where unconstrained costs pose financial risk.
The practice of agentic FinOps—applying financial operations discipline to AI agent systems—is still emerging, and spec-driven development represents one architectural pattern among a broader toolkit that will evolve as agent deployments mature. Organizations adopting agents should view specifications not as constraints on capability but as engineering investments that make autonomy economically sustainable. As with other software engineering practices, the goal is not maximum flexibility but the right balance of flexibility and constraint for the problems being solved. Specifications make that balance explicit, measurable, and optimizable over time.
References
- JSON Schema Specification - JSON Schema Core and Validation specifications, https://json-schema.org/specification.html
- Protocol Buffers Documentation - Google's language-neutral data serialization format, https://protobuf.dev/
- OpenAPI Specification - Standard for describing REST APIs, https://spec.openapis.org/
- State Machines in Software Engineering - Harel, D. (1987). "Statecharts: A Visual Formalism for Complex Systems." Science of Computer Programming, 8(3), 231-274.
- Design by Contract - Meyer, B. (1992). "Applying Design by Contract." IEEE Computer, 25(10), 40-51.
- FinOps Foundation - Cloud Financial Management best practices, https://www.finops.org/
- LangChain Documentation - Framework for developing applications with LLMs, including agent patterns, https://python.langchain.com/
- Semantic Kernel Documentation - Microsoft's SDK for integrating LLMs with conventional programming, https://learn.microsoft.com/en-us/semantic-kernel/
- Token Economics in Language Models - OpenAI API Documentation on token counting and pricing, https://platform.openai.com/docs/
- Software Architecture Patterns - Richards, M. and Ford, N. (2020). "Fundamentals of Software Architecture." O'Reilly Media.
- Formal Methods in Software Engineering - Wing, J.M. (1990). "A Specifier's Introduction to Formal Methods." IEEE Computer, 23(9), 8-24.
- Type Systems for Programming Languages - Pierce, B.C. (2002). "Types and Programming Languages." MIT Press.