Drizzle ORM Quickstart: From Zero to Type-Safe Queries in a Real Project

Set up schema, migrations, relations, and CRUD patterns with a minimal, production-lean starter workflow.

Introduction

The TypeScript ORM landscape has historically forced developers into a difficult choice: sacrifice type safety and developer experience for simplicity, or embrace complex decorator-based systems that introduce runtime overhead and magic abstractions. Drizzle ORM emerged in this space as a lightweight, SQL-centric alternative that prioritizes TypeScript's type system while maintaining a close-to-the-metal relationship with your database. Unlike traditional ORMs that abstract SQL away, Drizzle treats SQL as a first-class citizen, giving you type-safe query builders that map directly to the SQL you'd write by hand.

This quickstart guide walks through building a real database layer with Drizzle ORM from initial setup to production-ready patterns. We'll construct a multi-table schema with relations, implement a migrations workflow, and establish CRUD patterns that leverage Drizzle's type inference capabilities. Whether you're building a new service or evaluating alternatives to Prisma, TypeORM, or raw SQL, this hands-on approach demonstrates the core workflow you'll use in actual projects. By the end, you'll have a working reference implementation and the mental models needed to reason about schema evolution, query construction, and relation handling in Drizzle.

Why Drizzle ORM? Understanding the Design Philosophy

Drizzle ORM positions itself as a "thin TypeScript layer over SQL" rather than a traditional ORM that creates its own query language. This design choice stems from a specific philosophy: developers who already understand SQL shouldn't need to learn a proprietary query abstraction, and those who don't understand SQL should learn it rather than rely on leaky abstractions. The library provides zero runtime dependencies for core functionality, with database drivers being the only required peer dependencies. This keeps bundle sizes minimal and eliminates the hidden costs of abstraction layers that plague heavier ORMs.

The type safety story is where Drizzle truly differentiates itself. Instead of using decorators or code generation steps, Drizzle infers types directly from your schema definitions using TypeScript's type system. When you define a table, the type information flows through your entire application: insert objects, select results, update payloads, and query conditions all become fully typed without additional configuration. This compile-time safety catches schema mismatches, typos, and type errors before runtime, while still producing SQL that matches what you'd write manually.

From a performance perspective, Drizzle introduces minimal overhead between your application code and the database driver. There's no lazy loading machinery, no automatic N+1 query generation, and no hidden SELECT * statements. You write queries that compile to specific SQL, and you can inspect the generated SQL at any time. This transparency makes performance optimization straightforward: if a query is slow, you're debugging SQL you understand rather than trying to reverse-engineer what the ORM generated. For teams that value control and predictability over convenience features, this trade-off aligns with building maintainable systems.

Project Setup and Core Configuration

Setting up Drizzle requires three fundamental pieces: the core Drizzle ORM package, a database driver, and Drizzle Kit for migrations and introspection. For this quickstart we'll use PostgreSQL, but the workflow translates directly to MySQL, SQLite, or any of Drizzle's supported databases. Start by installing the necessary dependencies in your TypeScript project:

npm install drizzle-orm postgres
npm install -D drizzle-kit @types/node tsx

The drizzle-orm package provides the query builder and type inference, postgres is our database driver, and drizzle-kit handles schema migrations and generation. Using tsx allows us to run TypeScript files directly during development without a separate build step. Next, create a configuration file that tells Drizzle Kit where to find your schema and how to connect to your database:

import { defineConfig } from 'drizzle-kit';

export default defineConfig({
  schema: './src/db/schema.ts',
  out: './drizzle',
  dialect: 'postgresql',
  dbCredentials: {
    host: process.env.DB_HOST!,
    port: Number(process.env.DB_PORT ?? 5432),
    user: process.env.DB_USER!,
    password: process.env.DB_PASSWORD!,
    database: process.env.DB_NAME!,
  },
});

This configuration establishes three key paths: where your schema definitions live (schema), where generated migration files should be written (out), and how to connect to your database for introspection and migration execution. The dialect field tells Drizzle which SQL variant to generate. In production environments, you'd typically pull credentials from a secrets manager or environment-specific configuration rather than directly from process.env, but this pattern works well for local development.
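A missing variable here surfaces only as a confusing connection failure at migration time, so it is worth failing fast at startup. A minimal sketch of that idea (the requireEnv and loadDbCredentials names are our own, not part of Drizzle):

```typescript
// Read a required environment variable, failing fast with a clear message
// instead of letting `undefined` leak into the database configuration.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Build the credentials object shape drizzle.config.ts expects
export function loadDbCredentials() {
  return {
    host: requireEnv('DB_HOST'),
    port: Number(process.env.DB_PORT ?? 5432), // standard Postgres port as fallback
    user: requireEnv('DB_USER'),
    password: requireEnv('DB_PASSWORD'),
    database: requireEnv('DB_NAME'),
  };
}
```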

Now create the database connection instance that your application will use for queries. This connection should be initialized once and reused across your application rather than creating new connections per request:

import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';
import * as schema from './schema';

const connectionString = process.env.DATABASE_URL!;

// Create postgres client for querying
const client = postgres(connectionString);

// Create drizzle instance; passing the schema enables the relational
// db.query API used later in this guide
export const db = drizzle(client, { schema });

The separation between the driver client and the Drizzle instance is intentional. The postgres client handles connection pooling and communication with the database, while the drizzle() wrapper adds the type-safe query builder interface. You'll import this db object throughout your application to execute queries. For projects with multiple database connections (read replicas, separate databases), you'd create multiple instances with different connection strings and configuration.

Schema Design and Type Safety

Drizzle schemas are defined using TypeScript functions that describe your table structure. Unlike decorator-based ORMs, these definitions are just objects that both Drizzle and TypeScript's type system can understand. Let's build a practical multi-table schema for a blog platform with users, posts, and comments:

import { pgTable, serial, text, varchar, timestamp, integer, boolean } from 'drizzle-orm/pg-core';
import type { InferSelectModel, InferInsertModel } from 'drizzle-orm';

export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  email: varchar('email', { length: 255 }).notNull().unique(),
  username: varchar('username', { length: 50 }).notNull().unique(),
  passwordHash: text('password_hash').notNull(),
  displayName: text('display_name'),
  isActive: boolean('is_active').default(true).notNull(),
  createdAt: timestamp('created_at').defaultNow().notNull(),
  updatedAt: timestamp('updated_at').defaultNow().notNull(),
});

export const posts = pgTable('posts', {
  id: serial('id').primaryKey(),
  authorId: integer('author_id').notNull().references(() => users.id),
  title: varchar('title', { length: 200 }).notNull(),
  slug: varchar('slug', { length: 250 }).notNull().unique(),
  content: text('content').notNull(),
  excerpt: text('excerpt'),
  isPublished: boolean('is_published').default(false).notNull(),
  publishedAt: timestamp('published_at'),
  createdAt: timestamp('created_at').defaultNow().notNull(),
  updatedAt: timestamp('updated_at').defaultNow().notNull(),
});

export const comments = pgTable('comments', {
  id: serial('id').primaryKey(),
  postId: integer('post_id').notNull().references(() => posts.id),
  authorId: integer('author_id').notNull().references(() => users.id),
  content: text('content').notNull(),
  isApproved: boolean('is_approved').default(false).notNull(),
  createdAt: timestamp('created_at').defaultNow().notNull(),
  updatedAt: timestamp('updated_at').defaultNow().notNull(),
});

// Type exports for use throughout the application
export type User = InferSelectModel<typeof users>;
export type NewUser = InferInsertModel<typeof users>;
export type Post = InferSelectModel<typeof posts>;
export type NewPost = InferInsertModel<typeof posts>;
export type Comment = InferSelectModel<typeof comments>;
export type NewComment = InferInsertModel<typeof comments>;

Each table definition uses pgTable (for PostgreSQL) with column definitions that map directly to SQL types. The serial type creates auto-incrementing integer columns, which combined with .primaryKey() serve as each table's primary key, while varchar, text, boolean, and timestamp map to their PostgreSQL equivalents. The .notNull(), .unique(), and .default() modifiers translate directly to SQL constraints. Foreign key relationships use .references() with a function that returns the target column, allowing Drizzle to understand the relationship structure at compile time.

The type exports at the bottom demonstrate Drizzle's type inference capabilities. InferSelectModel generates a type representing rows returned from SELECT queries, including computed defaults and auto-generated fields. InferInsertModel generates a type for INSERT operations, making optional any fields with defaults or auto-increment behavior. These types flow through your entire application: API request validators, service layer functions, and database queries all share the same type definitions derived from your schema. If you add a column or change a constraint, TypeScript immediately flags every location that needs updating.
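To make the select/insert distinction concrete, here is roughly what the inferred types look like for the users table, written out by hand purely for illustration (in real code you would use the InferSelectModel/InferInsertModel exports above rather than maintaining these manually):

```typescript
// Approximately what InferSelectModel<typeof users> produces:
// every column present, nullable columns typed with | null.
interface UserRow {
  id: number;
  email: string;
  username: string;
  passwordHash: string;
  displayName: string | null;
  isActive: boolean;
  createdAt: Date;
  updatedAt: Date;
}

// Approximately what InferInsertModel<typeof users> produces:
// columns with defaults or auto-increment behavior become optional.
interface NewUserRow {
  email: string;
  username: string;
  passwordHash: string;
  id?: number;
  displayName?: string | null;
  isActive?: boolean;
  createdAt?: Date;
  updatedAt?: Date;
}

// A service-layer function typed against the insert shape: TypeScript
// rejects calls that omit email or pass the wrong type for username.
function normalizeNewUser(input: NewUserRow): NewUserRow {
  return { ...input, email: input.email.toLowerCase() };
}
```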

The schema-as-code approach means your database structure lives in version control alongside your application logic. Teams can review schema changes through standard pull request workflows, and the schema serves as living documentation of your database structure. Unlike migration-first systems where you write SQL migrations and separately maintain types, or decorator-based systems where metadata lives in class definitions, Drizzle's schema file is both the source of truth and the type generator. This reduces the cognitive overhead of keeping multiple representations synchronized.

Migrations Workflow: Evolving Your Schema

Drizzle Kit analyzes your schema definitions and generates SQL migration files that represent the diff between your current schema and your previous state. This approach combines the benefits of declarative schema definitions with the control of imperative SQL migrations. To generate your initial migration after defining the schema above, run:

npx drizzle-kit generate

This command reads your drizzle.config.ts, examines your schema file, and creates a new migration in the drizzle directory (or wherever you configured out). The generated migration contains raw SQL that creates your tables with all constraints, indexes, and defaults properly specified:

CREATE TABLE IF NOT EXISTS "users" (
  "id" serial PRIMARY KEY NOT NULL,
  "email" varchar(255) NOT NULL UNIQUE,
  "username" varchar(50) NOT NULL UNIQUE,
  "password_hash" text NOT NULL,
  "display_name" text,
  "is_active" boolean DEFAULT true NOT NULL,
  "created_at" timestamp DEFAULT now() NOT NULL,
  "updated_at" timestamp DEFAULT now() NOT NULL
);

CREATE TABLE IF NOT EXISTS "posts" (
  "id" serial PRIMARY KEY NOT NULL,
  "author_id" integer NOT NULL,
  "title" varchar(200) NOT NULL,
  "slug" varchar(250) NOT NULL UNIQUE,
  "content" text NOT NULL,
  "excerpt" text,
  "is_published" boolean DEFAULT false NOT NULL,
  "published_at" timestamp,
  "created_at" timestamp DEFAULT now() NOT NULL,
  "updated_at" timestamp DEFAULT now() NOT NULL
);

CREATE TABLE IF NOT EXISTS "comments" (
  "id" serial PRIMARY KEY NOT NULL,
  "post_id" integer NOT NULL,
  "author_id" integer NOT NULL,
  "content" text NOT NULL,
  "is_approved" boolean DEFAULT false NOT NULL,
  "created_at" timestamp DEFAULT now() NOT NULL,
  "updated_at" timestamp DEFAULT now() NOT NULL
);

ALTER TABLE "posts" ADD CONSTRAINT "posts_author_id_users_id_fk" 
  FOREIGN KEY ("author_id") REFERENCES "users"("id") ON DELETE NO ACTION ON UPDATE NO ACTION;

ALTER TABLE "comments" ADD CONSTRAINT "comments_post_id_posts_id_fk" 
  FOREIGN KEY ("post_id") REFERENCES "posts"("id") ON DELETE NO ACTION ON UPDATE NO ACTION;

ALTER TABLE "comments" ADD CONSTRAINT "comments_author_id_users_id_fk" 
  FOREIGN KEY ("author_id") REFERENCES "users"("id") ON DELETE NO ACTION ON UPDATE NO ACTION;

The generated SQL is human-readable and editable. If the automatic generation produces something suboptimal, you can modify the migration file directly. This escape hatch is critical for production systems where you might need to add custom indexes, adjust constraint behavior, or include data migrations alongside schema changes. To apply pending migrations to your database, run:

npx drizzle-kit migrate

The migrate command executes all unapplied migrations in order, tracking which migrations have run in a metadata table. For subsequent schema changes, the workflow remains the same: modify your schema file, run drizzle-kit generate to create a new migration, review the generated SQL, and run drizzle-kit migrate to apply it. (Drizzle Kit also ships a separate push command that diffs your schema directly against the database and applies changes without creating migration files; it is convenient for rapid prototyping but skips the migration history you want in shared environments.) Drizzle Kit stores migration state in your database, ensuring each migration runs exactly once even across multiple environments.

For teams managing multiple environments (development, staging, production), you'll want to integrate migration execution into your deployment pipeline rather than running push manually. Create a migration script that your CI/CD system can execute:

import { drizzle } from 'drizzle-orm/postgres-js';
import { migrate } from 'drizzle-orm/postgres-js/migrator';
import postgres from 'postgres';

const runMigrations = async () => {
  const connection = postgres(process.env.DATABASE_URL!, { max: 1 });
  const db = drizzle(connection);
  
  console.log('Running migrations...');
  await migrate(db, { migrationsFolder: './drizzle' });
  console.log('Migrations complete');
  
  await connection.end();
};

runMigrations().catch((err) => {
  console.error('Migration failed:', err);
  process.exit(1);
});

This script can be executed as part of your deployment process before starting the application, ensuring your database schema stays synchronized with your application code. The single-connection configuration (max: 1) is appropriate for migration scripts since you're running sequential DDL operations rather than concurrent queries.
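To keep these commands discoverable, many teams wire them into package.json scripts; one plausible layout (the script names are our convention, not Drizzle's, and db:studio assumes you want Drizzle Kit's local database browser):

```json
{
  "scripts": {
    "db:generate": "drizzle-kit generate",
    "db:migrate": "tsx src/db/migrate.ts",
    "db:studio": "drizzle-kit studio"
  }
}
```

With this in place, the CI/CD step reduces to `npm run db:migrate` before starting the application.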

Relations and Joins: Modeling Entity Relationships

While foreign keys enforce referential integrity at the database level, Drizzle's relations system enables type-safe joins and nested queries at the application level. Relations are defined separately from table schemas, creating a layer of metadata that Drizzle uses to construct complex queries. Add relation definitions to your schema file:

import { relations } from 'drizzle-orm';

export const usersRelations = relations(users, ({ many }) => ({
  posts: many(posts),
  comments: many(comments),
}));

export const postsRelations = relations(posts, ({ one, many }) => ({
  author: one(users, {
    fields: [posts.authorId],
    references: [users.id],
  }),
  comments: many(comments),
}));

export const commentsRelations = relations(comments, ({ one }) => ({
  post: one(posts, {
    fields: [comments.postId],
    references: [posts.id],
  }),
  author: one(users, {
    fields: [comments.authorId],
    references: [users.id],
  }),
}));

These relation definitions describe the cardinality and join conditions between tables. The one helper defines many-to-one relationships (many posts belong to one user), while many defines one-to-many relationships (one user has many posts). The fields and references arrays specify which columns to join on. Drizzle uses this metadata to construct efficient SQL joins when you request related data through its relational query API.

The separation between table definitions and relations is deliberate. Your core schema remains a clean description of database structure without mixing in application-level concerns. Relations are additive metadata that enhance query capabilities without affecting the underlying SQL schema. This design also means you can define different relation sets for different parts of your application if needed, though in practice most projects maintain a single canonical set of relations.

With relations defined, you can query nested data structures using Drizzle's relational query syntax:

import { db } from './connection';
import * as schema from './schema';

// Query posts with their authors and comments
export const getPostWithDetails = async (postId: number) => {
  const result = await db.query.posts.findFirst({
    where: (posts, { eq }) => eq(posts.id, postId),
    with: {
      author: true,
      comments: {
        with: {
          author: true,
        },
      },
    },
  });
  
  return result;
};

The db.query API provides a relational interface that leverages your relation definitions. The with clause specifies which relations to include, and Drizzle generates efficient SQL joins rather than separate queries. The return type is fully inferred: TypeScript knows the shape includes the post, the author object, and an array of comments each containing their own author object. This eliminates an entire class of runtime errors where you assume data is loaded but receive null references.

CRUD Patterns and Query Building

Drizzle provides two distinct query interfaces: a SQL-like builder for explicit control and a relational API for working with entities. Understanding when to use each approach is key to productive development. The SQL-like builder maps closely to actual SQL syntax and is ideal for operations where you need precise control over the generated query:

import { db } from '../connection';
import { users, type NewUser, type User } from '../schema';
import { eq, and, or, sql } from 'drizzle-orm';

export class UserRepository {
  // Create a new user
  async create(userData: NewUser): Promise<User> {
    const [user] = await db
      .insert(users)
      .values(userData)
      .returning();
    return user;
  }

  // Find user by email
  async findByEmail(email: string): Promise<User | undefined> {
    const [user] = await db
      .select()
      .from(users)
      .where(eq(users.email, email))
      .limit(1);
    return user;
  }

  // Update user
  async update(userId: number, updates: Partial<NewUser>): Promise<User> {
    const [user] = await db
      .update(users)
      .set({ ...updates, updatedAt: new Date() })
      .where(eq(users.id, userId))
      .returning();
    return user;
  }

  // Soft delete by deactivating
  async deactivate(userId: number): Promise<void> {
    await db
      .update(users)
      .set({ isActive: false, updatedAt: new Date() })
      .where(eq(users.id, userId));
  }

  // Complex search with multiple conditions
  async searchUsers(params: {
    searchTerm?: string;
    isActive?: boolean;
    createdAfter?: Date;
  }): Promise<User[]> {
    const conditions = [];

    if (params.searchTerm) {
      conditions.push(
        or(
          sql`${users.email} ILIKE ${`%${params.searchTerm}%`}`,
          sql`${users.username} ILIKE ${`%${params.searchTerm}%`}`,
          sql`${users.displayName} ILIKE ${`%${params.searchTerm}%`}`
        )
      );
    }

    if (params.isActive !== undefined) {
      conditions.push(eq(users.isActive, params.isActive));
    }

    if (params.createdAfter) {
      conditions.push(sql`${users.createdAt} > ${params.createdAfter}`);
    }

    return db
      .select()
      .from(users)
      .where(conditions.length > 0 ? and(...conditions) : undefined);
  }
}

The repository pattern encapsulates database operations and provides a clean API for your business logic layer. Each method uses Drizzle's builder syntax: select() for queries, insert().values() for creation, update().set() for modifications. The returning() clause instructs PostgreSQL to return the modified rows, which is particularly useful for getting auto-generated IDs and default values without a separate query. Type safety flows through every operation: userData must match the NewUser type, search results are typed as User[], and TypeScript catches attempts to query nonexistent columns or provide wrong types.

The searchUsers method demonstrates building dynamic queries with conditional logic. Rather than constructing SQL strings, you build arrays of condition expressions and combine them using and() and or(). The sql template literal provides an escape hatch for database-specific features like PostgreSQL's ILIKE operator while maintaining parameter binding to prevent SQL injection. This approach keeps your queries composable and testable without sacrificing type safety.

For more complex operations involving joins and aggregations, the SQL-like builder provides full control:

import { db } from '../connection';
import { posts, users, comments, type NewPost, type Post } from '../schema';
import { eq, desc, and, count, sql } from 'drizzle-orm';

export class PostRepository {
  // Get published posts with author info and comment counts
  async getPublishedPosts(limit: number = 20, offset: number = 0) {
    return db
      .select({
        post: posts,
        author: {
          id: users.id,
          username: users.username,
          displayName: users.displayName,
        },
        commentCount: count(comments.id),
      })
      .from(posts)
      .leftJoin(users, eq(posts.authorId, users.id))
      .leftJoin(comments, eq(comments.postId, posts.id))
      .where(eq(posts.isPublished, true))
      .groupBy(posts.id, users.id)
      .orderBy(desc(posts.publishedAt))
      .limit(limit)
      .offset(offset);
  }

  // Create post
  async create(postData: NewPost): Promise<Post> {
    const [post] = await db
      .insert(posts)
      .values(postData)
      .returning();
    return post;
  }

  // Publish a post
  async publish(postId: number): Promise<Post> {
    const [post] = await db
      .update(posts)
      .set({
        isPublished: true,
        publishedAt: new Date(),
        updatedAt: new Date(),
      })
      .where(eq(posts.id, postId))
      .returning();
    return post;
  }

  // Get post with full relational data using relational API
  async getPostWithDetails(postId: number) {
    return db.query.posts.findFirst({
      where: (posts, { eq }) => eq(posts.id, postId),
      with: {
        author: {
          columns: {
            passwordHash: false, // Exclude sensitive fields
          },
        },
        comments: {
          where: (comments, { eq }) => eq(comments.isApproved, true),
          with: {
            author: {
              columns: {
                id: true,
                username: true,
                displayName: true,
              },
            },
          },
          orderBy: (comments, { asc }) => [asc(comments.createdAt)],
        },
      },
    });
  }
}

The getPublishedPosts method showcases manual join construction with custom selection and aggregation. By explicitly selecting specific fields and using count(), you control exactly what data gets returned and how it's shaped. The .groupBy() clause is necessary when mixing aggregated columns (comment count) with non-aggregated columns. Compare this to the getPostWithDetails method, which uses the relational API to load nested entities. Both approaches have their place: use manual joins when you need aggregations or complex SQL features, and use the relational API when you're primarily loading related entities.

Migration workflows in production environments typically follow this pattern: developers create migrations locally using drizzle-kit generate, commit the generated SQL to version control, and automated deployment processes apply migrations using the migrate script. For changes Drizzle Kit cannot express from a schema diff, drizzle-kit generate --custom creates an empty, correctly numbered migration file you can fill in by hand; because the file is registered in Drizzle Kit's journal, ordering and run-once tracking are preserved.

One critical consideration for production systems: Drizzle generates migrations based on schema diffs, but it cannot automatically handle complex transformations like splitting a column into multiple columns or changing data types that require casting. For these scenarios, you'll need to edit the generated migration or write a custom migration that includes the necessary data transformation logic alongside the schema changes. Always review generated migrations before applying them to production databases, and test them against a copy of production data when making structural changes.
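As a concrete illustration, a hand-edited migration mixing a cast-requiring type change with a data backfill might look like this (a sketch against the schema above, not generated output):

```sql
-- A type change that needs an explicit cast: Drizzle Kit cannot infer
-- the USING clause, so it is added by hand to the generated migration.
ALTER TABLE "posts"
  ALTER COLUMN "excerpt" TYPE varchar(500) USING left("excerpt", 500);

-- A data migration alongside the schema change: backfill published_at
-- for rows that were published before the column existed.
UPDATE "posts" SET "published_at" = "created_at"
WHERE "is_published" = true AND "published_at" IS NULL;
```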

Advanced Query Patterns and Transactions

Real applications require more than simple CRUD operations. Drizzle handles transactions, batch operations, and complex query construction through composable primitives. Transactions in Drizzle use a callback-based API that automatically handles commit and rollback:

import { db } from '../connection';
import { posts, comments, type NewPost } from '../schema';
import { eq, count, sql } from 'drizzle-orm';

export class PostService {
  // Create post and notify mentioned users atomically
  async createPostWithNotifications(
    authorId: number,
    postData: Omit<NewPost, 'authorId'>,
    mentionedUserIds: number[]
  ) {
    return db.transaction(async (tx) => {
      // Insert the post
      const [post] = await tx
        .insert(posts)
        .values({ ...postData, authorId })
        .returning();

      // Create notification records (simplified example)
      if (mentionedUserIds.length > 0) {
        const notifications = mentionedUserIds.map(userId => ({
          userId,
          postId: post.id,
          type: 'mention' as const,
          createdAt: new Date(),
        }));
        
        // Hypothetical: insert into a notifications table if one existed
        // await tx.insert(notificationsTable).values(notifications);
      }

      return post;
    });
  }

  // Batch update multiple posts
  async bulkPublish(postIds: number[]): Promise<void> {
    await db
      .update(posts)
      .set({
        isPublished: true,
        publishedAt: new Date(),
        updatedAt: new Date(),
      })
      .where(sql`${posts.id} = ANY(${postIds})`);
  }

  // Complex aggregation query
  async getAuthorStats(authorId: number) {
    const [stats] = await db
      .select({
        totalPosts: count(posts.id),
        publishedPosts: sql<number>`count(*) filter (where ${posts.isPublished} = true)`,
        totalComments: sql<number>`count(${comments.id})`,
        avgCommentsPerPost: sql<number>`round(count(${comments.id})::numeric / nullif(count(distinct ${posts.id}), 0), 2)`,
      })
      .from(posts)
      .leftJoin(comments, eq(comments.postId, posts.id))
      .where(eq(posts.authorId, authorId));

    return stats;
  }
}

The transaction method receives a callback with a transaction client (tx) that you use for all operations within the transaction scope. If any operation throws an error, Drizzle automatically rolls back the entire transaction. If the callback completes successfully, the transaction commits. This pattern prevents the common mistake of forgetting to handle rollback or commit calls explicitly. For read-only work that needs a consistent snapshot, the same pattern applies; in the PostgreSQL drivers you can also pass options such as isolationLevel and accessMode as a second argument to db.transaction(), otherwise your database's default isolation level is used.
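Under stricter isolation levels, PostgreSQL may abort a transaction with a serialization failure (SQLSTATE 40001), which is expected and should be retried. A generic retry wrapper, independent of Drizzle itself (the withRetry name and error-matching predicate are our own, not library API):

```typescript
// Retry an async operation a bounded number of times when the error
// matches a caller-supplied predicate (e.g. Postgres serialization
// failures, SQLSTATE 40001). Rethrows once attempts are exhausted.
async function withRetry<T>(
  operation: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (!isRetryable(err) || attempt === maxAttempts) throw err;
      // Simple linear backoff; add jitter under real contention
      await new Promise((resolve) => setTimeout(resolve, attempt * 50));
    }
  }
  throw lastError; // unreachable, satisfies the type checker
}

// Usage sketch: retry a Drizzle transaction on serialization failure
// const post = await withRetry(
//   () => db.transaction(async (tx) => { /* ... */ }),
//   (err) => (err as { code?: string })?.code === '40001',
// );
```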

Batch operations like bulkPublish demonstrate using raw SQL for database-specific features. PostgreSQL's ANY operator combined with an array parameter efficiently handles multi-row updates without constructing enormous WHERE clauses. The sql template tag with type annotations (sql<number>) allows you to include arbitrary SQL expressions while maintaining type safety for the result. This flexibility means you're never blocked by Drizzle's abstraction: when you need a specific SQL feature, you can drop down to raw SQL for that portion of the query.

The getAuthorStats method shows PostgreSQL-specific aggregation using FILTER clauses and NULLIF for safe division. Drizzle's approach of embracing SQL rather than abstracting it away means you can leverage database-specific features when they provide value. The type annotations on sql<number> tell TypeScript what type to expect in the result object, preventing type errors downstream. This is particularly important for aggregations and calculations where the database returns numeric types that need explicit typing.

Performance Considerations and Common Pitfalls

Drizzle's lightweight design means you're responsible for performance optimization strategies that heavier ORMs might handle automatically (or poorly). The most common pitfall is the N+1 query problem when using the relational API without understanding its behavior. Consider this seemingly innocent code:

// DON'T: This generates N+1 queries
const allUsers = await db.query.users.findMany();

for (const user of allUsers) {
  const userPosts = await db.query.posts.findMany({
    where: (posts, { eq }) => eq(posts.authorId, user.id),
  });
  console.log(`${user.username}: ${userPosts.length} posts`);
}

Each iteration executes a separate query to fetch posts for that user. With 100 users, you've made 101 database round trips. The solution is to batch the work using joins or the relational API's with clause:

// DO: Single query with join
const usersWithPostCounts = await db
  .select({
    id: users.id,
    username: users.username,
    postCount: count(posts.id),
  })
  .from(users)
  .leftJoin(posts, eq(posts.authorId, users.id))
  .groupBy(users.id);

This executes a single query regardless of the number of users. Understanding the SQL that Drizzle generates is critical for identifying these issues. During development, enable query logging to see exactly what SQL runs:

import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';

const client = postgres(process.env.DATABASE_URL!);

export const db = drizzle(client, {
  logger: process.env.NODE_ENV === 'development',
});

With logging enabled, Drizzle prints every SQL query to the console, making it immediately obvious when you're generating unexpected queries. In production, you'd typically disable this or route it to a structured logging system, but during development it's an invaluable debugging tool.
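When an aggregate join isn't convenient (for example, you need the full post rows rather than a count), another standard fix is two queries plus in-memory grouping. The grouping helper below is plain TypeScript (groupByKey is our own name, not a Drizzle API):

```typescript
// Group an array of rows by a derived key, e.g. posts by authorId, so
// that a single findMany over posts replaces a per-user query loop.
function groupByKey<T, K>(rows: T[], key: (row: T) => K): Map<K, T[]> {
  const groups = new Map<K, T[]>();
  for (const row of rows) {
    const k = key(row);
    const bucket = groups.get(k);
    if (bucket) bucket.push(row);
    else groups.set(k, [row]);
  }
  return groups;
}

// Usage sketch with Drizzle (two round trips total, regardless of user count):
// const allUsers = await db.query.users.findMany();
// const allPosts = await db.query.posts.findMany();
// const postsByAuthor = groupByKey(allPosts, (p) => p.authorId);
// for (const user of allUsers) {
//   console.log(`${user.username}: ${postsByAuthor.get(user.id)?.length ?? 0} posts`);
// }
```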

Another performance consideration is index coverage. Drizzle doesn't automatically add indexes beyond primary keys and explicit unique constraints. For foreign keys and frequently queried columns, you'll want to define indexes in your schema:

import { pgTable, serial, text, varchar, timestamp, integer, boolean, index } from 'drizzle-orm/pg-core';

export const posts = pgTable('posts', {
  id: serial('id').primaryKey(),
  authorId: integer('author_id').notNull().references(() => users.id),
  title: varchar('title', { length: 200 }).notNull(),
  slug: varchar('slug', { length: 250 }).notNull().unique(),
  content: text('content').notNull(),
  isPublished: boolean('is_published').default(false).notNull(),
  publishedAt: timestamp('published_at'),
  createdAt: timestamp('created_at').defaultNow().notNull(),
}, (table) => ({
  authorIdx: index('posts_author_id_idx').on(table.authorId),
  publishedIdx: index('posts_published_idx').on(table.isPublished, table.publishedAt),
  slugIdx: index('posts_slug_idx').on(table.slug),
}));

The second parameter to pgTable accepts a callback that receives the table definition and returns an object of indexes. This syntax allows you to define multi-column indexes (composite indexes) and specify index types when needed. The posts_published_idx index covers queries that filter by published status and sort by publish date, a common pattern for public-facing feeds. When generating migrations, Drizzle Kit includes these index definitions in the SQL.

Connection pooling configuration deserves attention in production deployments. The postgres driver uses a default pool size that may not suit your load profile:

import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';

const client = postgres(process.env.DATABASE_URL!, {
  max: 20, // Maximum pool size
  idle_timeout: 20, // Idle connection timeout in seconds
  connect_timeout: 10, // Connection timeout in seconds
});

export const db = drizzle(client);

The max setting controls how many concurrent connections your application maintains. Setting this too low causes request queuing under load; setting it too high can overwhelm your database. A good starting point is 2-3 connections per CPU core on your database server, adjusted based on your specific query patterns and concurrent load. Monitor connection pool exhaustion and query latency in production to tune these values appropriately.
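The sizing guidance above can be sketched as a small helper. This is an illustrative heuristic only; the function name and clamping bounds are assumptions for the example, not part of Drizzle or postgres.js:

```typescript
// Illustrative starting point only: scale from DB-server cores, clamp to
// sane bounds, then tune from observed pool exhaustion and query latency.
export const suggestPoolSize = (
  dbServerCores: number,
  connectionsPerCore = 2, // the 2-3 per core starting point from the text
  min = 5,
  max = 50,
): number => Math.min(max, Math.max(min, dbServerCores * connectionsPerCore));
```

With the defaults, an 8-core database server yields a starting pool of 16 connections; the clamp keeps tiny or huge servers from producing unreasonable values.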

Production Best Practices and Type Safety Patterns

Building maintainable database layers requires more than just working queries. Structuring your schema, repository, and service layers to maximize type safety and testability pays dividends as projects scale. One powerful pattern is using Drizzle's prepared statements for queries executed frequently with different parameters:

import { db } from '../connection';
import { users, posts } from '../schema';
import { eq, sql } from 'drizzle-orm';

// Prepare frequently-executed queries
export const findUserById = db
  .select()
  .from(users)
  .where(eq(users.id, sql.placeholder('id')))
  .prepare('find_user_by_id');

export const findPostsByAuthor = db
  .select()
  .from(posts)
  .where(eq(posts.authorId, sql.placeholder('authorId')))
  .prepare('find_posts_by_author');

// Usage
const user = await findUserById.execute({ id: 123 });
const authorPosts = await findPostsByAuthor.execute({ authorId: 123 });

Prepared statements are parsed once and reused with different parameter values, reducing query planning overhead on the database side. For queries executed thousands of times with varying parameters, this can provide measurable performance improvements. The sql.placeholder() function marks parameter positions, and the prepare() method gives the statement a name for database-side caching.

Schema evolution in long-lived projects requires careful consideration. When adding nullable columns or columns with defaults, the migration is straightforward. When adding non-nullable columns without defaults to tables containing data, you need a multi-step migration:

// Step 1: Add column as nullable
export const posts = pgTable('posts', {
  // ... existing columns
  viewCount: integer('view_count'), // Initially nullable
});

// After generating and applying migration, backfill data
// Step 2: Run data migration to set default values
// UPDATE posts SET view_count = 0 WHERE view_count IS NULL;

// Step 3: Modify schema to make it non-nullable with default
export const posts = pgTable('posts', {
  // ... existing columns
  viewCount: integer('view_count').default(0).notNull(),
});

This three-step process ensures zero downtime: the column is added without breaking existing code, data is backfilled, and then the constraint is tightened. Generate a separate migration for each step and apply them in sequence. For large tables, the backfill step might need to process rows in batches to avoid locking the table for extended periods.
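The batched backfill mentioned above can be sketched as a small driver loop. Here `runBatch` is a hypothetical callback that executes one bounded UPDATE (for example, `UPDATE posts SET view_count = 0 WHERE id IN (SELECT id FROM posts WHERE view_count IS NULL LIMIT n)`) and reports how many rows it touched:

```typescript
// Repeatedly run a bounded UPDATE until no rows remain, so each batch
// holds row locks only briefly instead of locking the whole table.
export const backfillInBatches = async (
  runBatch: (batchSize: number) => Promise<number>, // returns rows updated
  batchSize = 1000,
): Promise<number> => {
  let total = 0;
  let updated = 0;
  do {
    updated = await runBatch(batchSize);
    total += updated;
  } while (updated > 0);
  return total;
};
```

The loop terminates when a batch updates zero rows, which also makes it safe to resume after an interruption: already-backfilled rows no longer match the `IS NULL` predicate.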

Environment-specific database configuration often requires different connection strategies. Local development might use direct connections, while production uses connection pooling with SSL:

import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';

const isDevelopment = process.env.NODE_ENV === 'development';
const isProduction = process.env.NODE_ENV === 'production';

const client = postgres(process.env.DATABASE_URL!, {
  max: isProduction ? 20 : 5,
  ssl: isProduction ? { rejectUnauthorized: true } : false,
  idle_timeout: isProduction ? 20 : null,
  connect_timeout: isProduction ? 10 : 30,
  onnotice: isDevelopment ? console.log : undefined,
});

export const db = drizzle(client, {
  logger: isDevelopment,
});

// Graceful shutdown
export const closeDatabase = async () => {
  await client.end();
};

This configuration adjusts pooling, SSL, timeouts, and logging based on environment. The onnotice callback captures PostgreSQL notices (like deprecated features or suggestions) during development. The closeDatabase function should be called during application shutdown to drain the connection pool gracefully and allow in-flight queries to complete.

Testing Your Database Layer

Testing database code requires isolating tests from each other and managing schema state. Drizzle's migration system integrates naturally with test setup. For unit tests of repository methods, use a test database that gets reset between runs:

import { describe, it, beforeEach, afterAll, expect } from 'vitest';
import { migrate } from 'drizzle-orm/postgres-js/migrator';
import postgres from 'postgres';
import { drizzle } from 'drizzle-orm/postgres-js';
import { UserRepository } from '../user-repository';
import { users, posts } from '../../schema';

const testClient = postgres(process.env.TEST_DATABASE_URL!);
const testDb = drizzle(testClient);

beforeEach(async () => {
  // Apply migrations to test database
  await migrate(testDb, { migrationsFolder: './drizzle' });

  // Clean all data between tests (children before parents to satisfy FKs)
  await testDb.delete(posts);
  await testDb.delete(users);
});

afterAll(async () => {
  await testClient.end();
});

describe('UserRepository', () => {
  it('creates and retrieves a user', async () => {
    const repo = new UserRepository();
    
    const newUser = await repo.create({
      email: 'test@example.com',
      username: 'testuser',
      passwordHash: 'hashed_password',
      displayName: 'Test User',
    });

    expect(newUser.id).toBeDefined();
    expect(newUser.email).toBe('test@example.com');

    const foundUser = await repo.findByEmail('test@example.com');
    expect(foundUser).toBeDefined();
    expect(foundUser?.id).toBe(newUser.id);
  });

  it('enforces unique constraints', async () => {
    const repo = new UserRepository();
    
    await repo.create({
      email: 'test@example.com',
      username: 'testuser',
      passwordHash: 'hashed_password',
    });

    await expect(
      repo.create({
        email: 'test@example.com', // Duplicate email
        username: 'differentuser',
        passwordHash: 'hashed_password',
      })
    ).rejects.toThrow();
  });
});

The test setup runs migrations before each test to ensure a clean schema, then deletes all data to provide isolation between tests. This approach is slower than mocking but tests against real database behavior including constraints, triggers, and data types. For projects with large test suites, consider using database transactions for test isolation: begin a transaction before each test, run the test, and roll back instead of committing. This is faster than deleting all data but requires more sophisticated setup.
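The transaction-based isolation idea boils down to: capture state before the test, discard the test's writes afterwards. A real harness does this with BEGIN/ROLLBACK on the database; the in-memory stand-in below (with hypothetical `getState`/`setState` hooks) just demonstrates the shape:

```typescript
// Demonstrates the rollback idea with an in-memory stand-in; a real test
// harness would BEGIN a transaction before the test and ROLLBACK after it,
// regardless of whether the test passed or threw.
export const withRollbackIsolation = async <S>(
  getState: () => S,
  setState: (s: S) => void,
  testBody: () => Promise<void>,
): Promise<void> => {
  const snapshot = structuredClone(getState()); // "BEGIN": capture pre-test state
  try {
    await testBody();
  } finally {
    setState(snapshot); // "ROLLBACK": the test's writes never persist
  }
};
```

The `finally` block is the important part: rollback happens even when the test throws, so a failing test cannot leak data into the next one.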

Integration tests that verify your database layer works with your application logic should use factories or fixtures to create test data:

import { db } from '../../connection';
import { users, type NewUser } from '../../schema';

export const createTestUser = async (overrides: Partial<NewUser> = {}) => {
  const userData: NewUser = {
    email: `user_${Date.now()}@example.com`,
    username: `user_${Date.now()}`,
    passwordHash: 'test_hash',
    displayName: 'Test User',
    ...overrides,
  };

  const [user] = await db.insert(users).values(userData).returning();
  return user;
};

export const createTestUsers = async (count: number) => {
  const usersData = Array.from({ length: count }, (_, i) => ({
    email: `user_${Date.now()}_${i}@example.com`,
    username: `user_${Date.now()}_${i}`,
    passwordHash: 'test_hash',
    displayName: `Test User ${i}`,
  }));

  return db.insert(users).values(usersData).returning();
};

Fixtures provide reusable test data creation while maintaining type safety. The timestamp-based email and username ensure uniqueness without manual coordination between tests. In larger projects, consider using libraries like Faker or Fishery to generate realistic test data that exercises edge cases and data validation logic.

Handling Migrations in Team Environments

When multiple developers work on the same codebase, migration conflicts become a coordination challenge. Unlike application code where merge conflicts are syntactic, database migrations have semantic dependencies: the order of migrations matters. Drizzle's sequential numbering system (0000, 0001, 0002) makes conflicts obvious. When two developers create migrations simultaneously on different branches, both might generate a migration with the same number.
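A duplicate-prefix collision of this kind is easy to catch mechanically. The helper below is a hypothetical pre-merge check (the function name is ours; the `NNNN_name` filename pattern matches what Drizzle Kit generates):

```typescript
// Flag migration files whose four-digit prefix collides -- the symptom of
// two branches each generating a migration from the same base.
export const findMigrationNumberCollisions = (filenames: string[]): string[] => {
  const byPrefix = new Map<string, string[]>();
  for (const name of filenames) {
    const match = name.match(/^(\d{4})_/);
    if (!match) continue; // ignore files that don't follow the pattern
    const group = byPrefix.get(match[1]) ?? [];
    group.push(name);
    byPrefix.set(match[1], group);
  }
  return [...byPrefix.values()].filter((group) => group.length > 1).flat();
};
```

Running a check like this in CI turns a silent semantic conflict into a visible build failure before the second branch merges.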

The resolution strategy depends on your branching model. For trunk-based development where features merge to main frequently, the developer merging second should regenerate their migration after pulling the other's changes. For long-lived feature branches, you might need to coordinate migration numbers or use a timestamp-based naming convention. Some teams adopt a convention where each developer is assigned a range of migration numbers, though this becomes unwieldy with larger teams.

A more robust approach for teams is to treat migrations as immutable once merged to the main branch. If you need to modify a schema after the fact, create a new migration that alters the structure rather than editing the previous migration. This prevents situations where developers have applied a migration in one form locally, but the migration file changes after a force-push or rebase:

// Migration 0003: Add email_verified column (already applied)
// ALTER TABLE users ADD COLUMN email_verified boolean DEFAULT false;

// Migration 0004: Change default for email_verified (new migration)
// ALTER TABLE users ALTER COLUMN email_verified SET DEFAULT true;
// UPDATE users SET email_verified = true WHERE created_at < '2026-01-01';

This immutable migration approach means your migration history reflects the actual evolution of your schema, making it easier to understand when and why changes occurred. For development databases that don't contain production data, you can occasionally squash migrations by deleting all migration files, dropping your development database, and regenerating a single initial migration from your current schema. This keeps migration counts manageable in long-running projects.

Connection Management and Shutdown Handling

Applications that don't gracefully handle database connection lifecycle can cause connection leaks, preventing clean shutdowns and exhausting connection pools. Drizzle itself doesn't manage connection lifecycle; that's handled by the underlying driver. For the postgres driver, implement proper shutdown handling:

import express from 'express';
import { db, closeDatabase } from './db/connection';

const app = express();
const PORT = process.env.PORT || 3000;

// ... route definitions

const server = app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

// Graceful shutdown
const shutdown = async (signal: string) => {
  console.log(`${signal} received, starting graceful shutdown`);
  
  server.close(async () => {
    console.log('HTTP server closed');
    
    try {
      await closeDatabase();
      console.log('Database connections closed');
      process.exit(0);
    } catch (err) {
      console.error('Error during shutdown:', err);
      process.exit(1);
    }
  });

  // Force shutdown after timeout
  setTimeout(() => {
    console.error('Forced shutdown after timeout');
    process.exit(1);
  }, 10000);
};

process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));

This pattern ensures your application closes database connections before terminating, preventing orphaned connections that can accumulate and exhaust your database's connection limit. The timeout provides a backstop in case the graceful shutdown hangs. For serverless environments where process lifecycle is managed by the platform, connection management strategies differ: you might route connections through an external pooler such as PgBouncer, use an HTTP-based serverless driver, or accept that some connections will be forcibly terminated.

Trade-offs and When Not to Use Drizzle

Drizzle's SQL-centric design makes it an excellent choice for teams comfortable with relational databases and SQL, but this same characteristic makes it less suitable for certain scenarios. If your team lacks SQL experience, the learning curve might be steeper than with higher-level ORMs that abstract more aggressively. Drizzle requires understanding joins, indexes, transaction isolation, and SQL-specific features to use effectively. For projects where the database is purely a persistence mechanism and complex queries are rare, the additional type safety might not justify the learning investment compared to simpler solutions.

The relational query API, while convenient, can obscure performance characteristics if you don't verify the generated SQL. Unlike Prisma's explicit include behavior or TypeORM's query builder, Drizzle's with clause makes it less obvious when you're triggering additional joins. Teams must establish practices of reviewing generated queries and monitoring performance, particularly when building features that load nested relations. This transparency is a feature for experienced teams but can be a pitfall for those who expect the ORM to handle optimization automatically.

Drizzle's migration system works well for straightforward schema changes but lacks some advanced features found in dedicated migration tools like Flyway or Liquibase. There's no built-in support for migration versioning schemes beyond sequential numbers, no automatic generation of rollback migrations, and limited support for data migrations that require complex transformations. For projects with extremely complex migration requirements, such as migrating from one schema design to another while maintaining uptime, you might supplement Drizzle Kit with custom migration scripts or external tools.

The ecosystem around Drizzle is still maturing compared to established ORMs. While the core library is stable and production-ready, you'll find fewer third-party extensions, integrations, and community plugins. If your stack relies heavily on ORM integrations (like GraphQL schema generation from ORM models, automatic API generation, or admin panel builders), you may need to build these integrations yourself. For teams that value control and minimalism, this blank canvas is an advantage. For teams that prefer comprehensive frameworks with extensive plugins, it's a limitation.

Extending Drizzle with Custom Types and Behaviors

Real applications often need to map database types to application-specific representations. Drizzle's custom type system allows you to define bidirectional transformations between database columns and TypeScript types:

import { customType } from 'drizzle-orm/pg-core';

// Custom type for JSON fields with type safety
export const jsonb = <T>() =>
  customType<{ data: T; driverData: string }>({
    dataType() {
      return 'jsonb';
    },
    toDriver(value: T): string {
      return JSON.stringify(value);
    },
    fromDriver(value: string): T {
      return JSON.parse(value);
    },
  });

// Custom type for encrypted fields
export const encrypted = customType<{ data: string; driverData: string }>({
  dataType() {
    return 'text';
  },
  toDriver(value: string): string {
    // Your encryption logic here
    return encryptString(value);
  },
  fromDriver(value: string): string {
    // Your decryption logic here
    return decryptString(value);
  },
});

Custom types encapsulate transformation logic so it's applied automatically during queries. The jsonb helper provides type-safe JSON columns where you specify the expected structure. The encrypted type demonstrates transparent encryption: values are encrypted when written to the database and decrypted when read, without application code needing to handle the transformation explicitly.

Using these custom types in your schema:

import { pgTable, serial, varchar } from 'drizzle-orm/pg-core';
import { jsonb, encrypted } from './custom-types';

interface UserPreferences {
  theme: 'light' | 'dark';
  notifications: boolean;
  language: string;
}

export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  email: varchar('email', { length: 255 }).notNull(),
  // Type-safe JSON column: the jsonb<T>() factory returns a column builder
  // that is then called with the column name
  preferences: jsonb<UserPreferences>()('preferences').notNull().default({
    theme: 'light',
    notifications: true,
    language: 'en',
  }),
  // Transparent encryption
  apiKey: encrypted('api_key'),
});

The preferences column is typed as UserPreferences, so TypeScript enforces the correct shape when inserting or updating, and query results are automatically typed correctly. This pattern is particularly valuable for configuration objects, metadata, or any semi-structured data that fits JSON's data model but needs stronger typing than generic objects.

Observability and Query Monitoring

Production database layers need visibility into query performance and error rates. While Drizzle's logging option helps during development, production systems require structured telemetry. Implement query timing and error tracking by wrapping database operations:

import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';

const client = postgres(process.env.DATABASE_URL!);

// Custom logger implementing Drizzle's Logger interface
const queryLogger = {
  logQuery(query: string, params: unknown[]) {
    // Emit a structured log line per query; tag metrics by statement type
    // metrics.increment('db.query.count', { query_type: extractQueryType(query) });
    console.log('[DB Query]', {
      sql: query,
      params,
      timestamp: new Date().toISOString(),
    });
  },
};

export const db = drizzle(client, {
  logger: queryLogger,
});

// Drizzle's logger fires when a query is issued and its return value is
// ignored, so it cannot time queries by itself. Wrap operations at the
// repository or service layer to catch slow queries:
export const withTiming = async <T>(
  label: string,
  operation: () => Promise<T>
): Promise<T> => {
  const startTime = Date.now();
  try {
    return await operation();
  } finally {
    const duration = Date.now() - startTime;
    // Send metrics to your monitoring system
    // metrics.histogram('db.query.duration', duration, { label });
    if (duration > 1000) {
      console.warn('[DB Slow Query]', { label, duration });
    }
  }
};

This instrumentation captures slow queries and provides hooks for sending metrics to monitoring systems like Prometheus, Datadog, or CloudWatch. The extractQueryType function (which you'd implement) categorizes queries as SELECT, INSERT, UPDATE, or DELETE for more granular metrics. For production systems, you'd likely use OpenTelemetry or your monitoring provider's SDK to emit traces and metrics rather than console logging.
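One possible implementation of that extractQueryType helper (the name comes from the text above; the categorization scheme is our assumption):

```typescript
// Categorize a SQL string by its leading keyword for metric tagging.
type QueryType = 'SELECT' | 'INSERT' | 'UPDATE' | 'DELETE' | 'OTHER';

export const extractQueryType = (query: string): QueryType => {
  const keyword = query.trimStart().split(/\s+/)[0]?.toUpperCase();
  switch (keyword) {
    case 'SELECT':
    case 'INSERT':
    case 'UPDATE':
    case 'DELETE':
      return keyword;
    default:
      return 'OTHER'; // CTEs ("WITH ..."), DDL, etc. fall into a catch-all bucket
  }
};
```

A fixed `OTHER` bucket keeps the metric's label cardinality bounded, which matters for monitoring systems like Prometheus that allocate a time series per label value.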

Error handling in database operations should distinguish between transient failures (connection timeouts, deadlocks) and permanent failures (constraint violations, missing tables). Wrap your repository calls in service methods that implement appropriate retry logic:

export const withRetry = async <T>(
  operation: () => Promise<T>,
  maxRetries: number = 3,
  delayMs: number = 100
): Promise<T> => {
  let lastError: Error | undefined;

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error as Error;

      // Don't retry on constraint violations or logical errors
      if (isNonRetryableError(error)) {
        throw error;
      }

      if (attempt < maxRetries) {
        await new Promise(resolve => setTimeout(resolve, delayMs * Math.pow(2, attempt)));
        continue;
      }
    }
  }

  throw lastError!;
};

const isNonRetryableError = (error: any): boolean => {
  // Check for PostgreSQL error codes that shouldn't be retried
  const nonRetryableCodes = [
    '23505', // unique_violation
    '23503', // foreign_key_violation
    '23502', // not_null_violation
    '42P01', // undefined_table
  ];

  return typeof error?.code === 'string' && nonRetryableCodes.includes(error.code);
};

This retry mechanism with exponential backoff handles transient network issues and deadlocks while immediately failing on constraint violations that won't resolve with retries. The PostgreSQL error codes allow you to categorize failures precisely. Similar error codes exist for other database systems.

Key Takeaways

Here are five practical steps to implement immediately when adopting Drizzle ORM:

  1. Enable query logging in development to build intuition about the SQL Drizzle generates from your queries, particularly when using the relational API with nested with clauses.
  2. Define TypeScript types from your schema using InferSelectModel and InferInsertModel rather than maintaining separate type definitions, ensuring your types and database structure never drift.
  3. Review every generated migration before applying it to catch unexpected schema changes and add necessary data transformations or custom indexes that Drizzle Kit doesn't auto-generate.
  4. Use prepared statements for queries executed frequently in hot paths, and monitor query performance in production to identify opportunities for indexing or query optimization.
  5. Structure your code with repository patterns that encapsulate database operations, making your business logic testable and independent of specific query implementations.

Analogies and Mental Models

Think of Drizzle as a type-safe translator standing between your TypeScript code and your database, rather than a mediator that maintains its own understanding of your data model. Traditional ORMs are like interpreters who understand both languages and translate freely, sometimes adding their own interpretations. Drizzle is more like a translator who works from a phrasebook (your schema) that both sides agree on, ensuring the translation is faithful and transparent. When you ask for something that doesn't have a direct translation, Drizzle doesn't improvise—it asks you to provide the SQL.

The schema-to-types inference can be understood through the lens of a contract. Your schema is a formal contract describing what the database stores. InferSelectModel and InferInsertModel are lawyers who read that contract and tell the rest of your application what clauses they need to uphold. When the contract changes (schema modification), the lawyers immediately notify everyone affected (TypeScript errors), preventing anyone from unknowingly violating the new terms. This compile-time enforcement is far cheaper than discovering contract violations at runtime in production.

80/20 Insight: The Core Workflow That Matters

Eighty percent of your work with Drizzle follows a simple cycle: define schema, generate migration, apply migration, write queries. Master these four steps and you've unlocked most of Drizzle's value. The remaining twenty percent—custom types, prepared statements, complex transactions, and advanced query patterns—becomes relevant as your application scales, but you can defer learning these until specific needs arise.

Within query construction, twenty percent of Drizzle's API handles eighty percent of use cases: select().from().where() for reads, insert().values().returning() for creates, update().set().where().returning() for modifications, and delete().where() for removals. The relational API's findFirst and findMany with where and with clauses cover most entity fetching scenarios. Focus on these patterns first, and consult documentation for specialized features like aggregations, subqueries, or database-specific functions only when you encounter the need.

The most impactful optimization for most applications isn't learning advanced Drizzle features—it's understanding your database's query planner and ensuring appropriate indexes exist. A properly indexed database with straightforward Drizzle queries will outperform sophisticated ORM abstractions on an under-indexed database every time. Spend your learning energy on database fundamentals, and let Drizzle stay out of your way.

Conclusion

Drizzle ORM provides a pragmatic middle ground for TypeScript applications that need type safety without sacrificing SQL transparency or runtime performance. By treating your schema as the single source of truth and inferring types at compile time, it eliminates the category of runtime errors that occur when database structure and application types drift apart. The migration workflow integrates naturally with version control and deployment pipelines, while the query builders provide just enough abstraction to prevent SQL injection and catch type errors without obscuring the underlying database operations.

The quickstart implementation demonstrated here—defining schemas with relations, establishing a migration workflow, implementing repository patterns, and structuring queries for type safety—provides a foundation for production applications. As your project grows, you'll extend these patterns with custom types, prepared statements, and monitoring instrumentation, but the core workflow remains constant. The trade-off of increased responsibility for performance optimization and SQL knowledge is deliberate: Drizzle targets teams that prefer control and predictability over convenience and magic.

For developers evaluating ORMs, Drizzle offers compelling benefits if your project already uses TypeScript, your team has SQL competency, and you value minimal abstractions over comprehensive frameworks. It's particularly well-suited for API services, backend systems, and applications where database performance directly impacts user experience. The investment in learning Drizzle's patterns pays off through faster debugging, easier optimization, and the confidence that comes from understanding exactly what queries your application executes.

References