How to Review a SQL Schema Change in 5 Minutes
Your pull request is approved. The migration looks simple – just adding a column and an index. You merge it. Ten minutes later, production is down.
The column was NOT NULL without a default. The index creation locked the table for 90 seconds. And nobody caught it because the schema change was "too small to review carefully."
This is why every engineering team needs a fast, repeatable SQL schema review process. Not a 45-page document. A 5-minute checklist that catches the dangerous stuff before it ships.
In this guide, we'll walk through a practical workflow you can use today – whether you're reviewing a migration PR or sanity-checking your own changes before deploy.
Why schema review matters
Code review is standard practice. Schema review rarely is. Yet schema changes are more dangerous than code changes because:
- They're harder to roll back – altering a table with millions of rows is not instantaneous
- They can lock tables, blocking writes and causing cascading timeouts
- Application code can be deployed in seconds; schema changes can take minutes or hours
- A bad schema change can corrupt data, not just break functionality
The good news: most catastrophic schema changes are obvious if you know what to look for. You don't need to be a database expert. You need a checklist.
The 5-minute schema review workflow
Step 1: Get the before and after schema (1 min)
You can't review what you can't see. Start by generating the current schema and the proposed schema.
If you're using a migration framework (Liquibase, Flyway, Alembic):
- Dump the current schema from your target environment (staging or production)
- Apply the migration to a fresh database, then dump the resulting schema
If you're working with raw SQL:
- Copy the current CREATE TABLE statements from your database
- Apply the migration script manually to get the new schema
Paste both schemas into a diff tool. Instantly, you'll see exactly what changed – not just the migration file, but the semantic difference in the resulting schema.
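If a full dump is overkill, a quick information_schema query gives you a column-level snapshot you can paste into a diff tool. A sketch using standard information_schema (available on PostgreSQL and MySQL); the table name users is illustrative:

```sql
-- Snapshot one table's column definitions.
-- Run against both environments and diff the two result sets.
SELECT column_name,
       data_type,
       is_nullable,
       column_default
FROM   information_schema.columns
WHERE  table_name = 'users'   -- substitute your own table
ORDER  BY ordinal_position;
```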
Step 2: Look for destructive changes (1 min)
These are the fastest way to break production. Scan for:
- Dropped tables or columns – Is anything reading from them?
- Renamed columns – Do views, triggers, or application queries reference the old name?
- Dropped indexes – Are they used by critical queries?
- Changed column types – Will existing data fit? (e.g., VARCHAR(100) → VARCHAR(50))
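Before approving a narrowing type change, you can check whether existing data actually fits. A minimal sketch, assuming a hypothetical users.email column being narrowed to VARCHAR(50):

```sql
-- Count rows that would overflow VARCHAR(50).
-- Any non-zero result means the migration will fail or truncate data.
SELECT COUNT(*) AS too_long
FROM   users
WHERE  LENGTH(email) > 50;
```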
"Dropped columns are the easiest change to approve and the hardest mistake to undo."
Step 3: Check constraint safety (1 min)
Constraints are silent killers on large tables:
- NOT NULL without DEFAULT – On PostgreSQL, adding such a column fails outright on a non-empty table, and setting NOT NULL on an existing column scans every row under an exclusive lock. On large tables, that can block writes for minutes.
- UNIQUE constraints – Adding these requires a full table scan and can fail if duplicates exist.
- CHECK constraints – These validate all existing rows. If any row violates the new rule, the migration fails.
- Foreign keys – Adding a FK validates every row in the referencing table and takes locks on both tables. Expensive on large tables.
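On PostgreSQL, CHECK and foreign-key constraints can be added without scanning existing rows up front, then validated in a second step under a much weaker lock. A sketch with a hypothetical orders table:

```sql
-- Step 1: add the constraint but skip validation of existing rows.
-- This takes only a brief metadata lock.
ALTER TABLE orders
  ADD CONSTRAINT orders_amount_positive CHECK (amount > 0) NOT VALID;

-- Step 2: validate existing rows separately. This scans the table,
-- but under a lock that lets reads and writes continue.
ALTER TABLE orders VALIDATE CONSTRAINT orders_amount_positive;
```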
Safe pattern for NOT NULL:
- Add the column as nullable
- Backfill with a script or batch update
- Add the NOT NULL constraint in a separate, small migration
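In PostgreSQL syntax, the three steps above might look like this (table and column names are illustrative; run each as its own migration):

```sql
-- Migration 1: add the column as nullable (metadata-only, effectively instant).
ALTER TABLE users ADD COLUMN status VARCHAR(20);

-- Migration 2: backfill in batches to avoid one long-running transaction.
-- Repeat until it updates zero rows.
UPDATE users
SET    status = 'active'
WHERE  id IN (SELECT id FROM users WHERE status IS NULL LIMIT 10000);

-- Migration 3: enforce NOT NULL once every row has a value.
ALTER TABLE users ALTER COLUMN status SET NOT NULL;
```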
Step 4: Verify index coverage (1 min)
New foreign keys need backing indexes. Missing indexes on FK columns cause full table scans on joins – and those get expensive fast as tables grow.
Ask yourself:
- Does every new foreign key have an index?
- Are any indexes being dropped that queries depend on?
- Are new indexes on columns that will have high cardinality?
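Note that on PostgreSQL, a foreign key does not create an index on the referencing column – only the referenced side is already indexed (by its primary key). The backing index is your job. A sketch with hypothetical orders/users tables:

```sql
-- The FK alone does NOT index orders.user_id on PostgreSQL.
ALTER TABLE orders
  ADD CONSTRAINT orders_user_fk FOREIGN KEY (user_id) REFERENCES users (id);

-- Add the backing index. CONCURRENTLY avoids blocking writes,
-- but must be run outside a transaction block.
CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);
```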
Step 5: Estimate the migration cost (1 min)
Not all migrations are equal. A column type change on a 10-row table is instant. On a 10-million-row table, it can rewrite the whole table and take hours.
Before approving, estimate:
- Table size โ How many rows? What's the row width?
- Lock duration โ Will the migration lock the table? For how long?
- Rollback complexity โ Can you undo this if something goes wrong?
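On PostgreSQL, a quick look at the statistics views gives rough row counts and on-disk sizes for the tables a migration touches:

```sql
-- Approximate rows and total size (table + indexes + TOAST) per table.
SELECT relname                                        AS table_name,
       n_live_tup                                     AS approx_rows,
       pg_size_pretty(pg_total_relation_size(relid))  AS total_size
FROM   pg_stat_user_tables
ORDER  BY pg_total_relation_size(relid) DESC
LIMIT  10;
```

Note that n_live_tup is an estimate maintained by autovacuum, not an exact count – which is exactly what you want here, since counting exactly would itself scan the table.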
If a migration is high-risk and high-duration, break it into smaller steps or run it during a maintenance window.
Make it automatic
The best schema review processes don't rely on human memory. They use automation:
- CI/CD gating: Run a schema diff on every migration PR. Fail the build if destructive changes are detected.
- Breaking change detection: Use a tool that scores migration risk and flags dangerous patterns automatically.
- Pre-deploy diffs: Compare staging vs production schemas before every release to catch drift.
SchemaLens can do all three. It runs in your browser for quick checks, in your terminal via npx schemalens-cli, and in CI via GitHub Actions or GitLab CI.
⚡ Review your next schema change in 5 minutes
Paste your before and after schemas into SchemaLens and catch breaking changes before they reach production.
Start Reviewing →
The one-question test
If you only have time for one question before approving a schema change, make it this:
"If this migration fails halfway through, what's the worst-case state of the database?"
If the answer makes you uncomfortable, the migration needs more review.