
Problem Statement

You need to evolve your database schema over time while maintaining zero downtime, ensuring data integrity, and supporting rollback capabilities.

Migration Principles

Backward Compatible: The old application version works with the new schema.
Forward Compatible: The new application version works with the old schema (during rollout).
Reversible: Every migration has a rollback script.
Idempotent: Running a migration multiple times has the same result as running it once.
Small & Incremental: Prefer many small migrations over big-bang changes.
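The idempotency principle is easy to demonstrate. A minimal Python sketch against SQLite (the `users` table and column name are illustrative, not part of any real schema) checks the current schema before altering it, so re-running the migration is a no-op rather than an error:

```python
import sqlite3

def column_exists(conn, table, column):
    """Return True if the column is already present on the table."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    return column in cols

def add_email_verified(conn):
    """Idempotent migration: running it twice has the same effect as once."""
    if not column_exists(conn, "users", "email_verified"):
        conn.execute("ALTER TABLE users ADD COLUMN email_verified BOOLEAN DEFAULT 0")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
add_email_verified(conn)
add_email_verified(conn)  # second run is a safe no-op
```

Many databases offer the same guard declaratively (e.g. `ADD COLUMN IF NOT EXISTS` in PostgreSQL); the explicit check is the portable fallback.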

Safe Migration Patterns

1. Adding a Column (Safe)

-- Migration: add_email_verified_column
-- Safe: New column with a default value; no reads depend on it yet.
ALTER TABLE users ADD COLUMN email_verified BOOLEAN DEFAULT false;
-- Application code can be deployed before or after.
-- Old code ignores the new column; new code uses it.

2. Renaming a Column (Multi-Step)

Step 1: Add new column

-- Migration 001: add_full_name_column
ALTER TABLE users ADD COLUMN full_name VARCHAR(255);

Step 2: Backfill data

-- Migration 002: backfill_full_name
UPDATE users SET full_name = CONCAT(first_name, ' ', last_name) WHERE full_name IS NULL;

Step 3: Deploy code that writes to both and reads from the new column

// Code version 2: Dual write
class User extends Model
{
    protected $fillable = ['first_name', 'last_name', 'full_name'];

    public function save(array $options = [])
    {
        // Write to both during transition.
        if ($this->isDirty('first_name') || $this->isDirty('last_name')) {
            $this->full_name = $this->first_name . ' ' . $this->last_name;
        }
        return parent::save($options);
    }

    public function getDisplayNameAttribute()
    {
        // Read from the new column; fall back to the old.
        return $this->full_name ?? ($this->first_name . ' ' . $this->last_name);
    }
}

Step 4: Remove old columns (after all pods are on the new code)

-- Migration 003: remove_old_name_columns (run after full deployment)
ALTER TABLE users DROP COLUMN first_name;
ALTER TABLE users DROP COLUMN last_name;
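The full expand/backfill/contract sequence can be walked through end to end. A minimal Python sketch against an in-memory SQLite database (assuming SQLite 3.35+ for `DROP COLUMN`; table and column names mirror the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)")
conn.execute("INSERT INTO users (first_name, last_name) VALUES ('Ada', 'Lovelace')")

# Migration 001: add the new column (old code keeps working, it just ignores it).
conn.execute("ALTER TABLE users ADD COLUMN full_name TEXT")

# Migration 002: backfill existing rows.
conn.execute(
    "UPDATE users SET full_name = first_name || ' ' || last_name "
    "WHERE full_name IS NULL"
)

# (Step 3 lives in application code: dual-write during the transition.)

# Migration 003: drop the old columns once every instance runs the new code.
conn.execute("ALTER TABLE users DROP COLUMN first_name")
conn.execute("ALTER TABLE users DROP COLUMN last_name")

row = conn.execute("SELECT full_name FROM users").fetchone()
```

Each step is a separately deployable, separately reversible unit; at no point does a running application version see a schema it cannot handle.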

3. Adding NOT NULL Constraint (Multi-Step)

-- Step 1: Add column as nullable.
ALTER TABLE orders ADD COLUMN customer_id INTEGER;
-- Step 2: Backfill existing rows.
UPDATE orders SET customer_id = (
    SELECT id FROM customers WHERE customers.email = orders.customer_email
);
-- Step 3: Add constraint (only after the backfill is complete and no NULLs remain).
ALTER TABLE orders ALTER COLUMN customer_id SET NOT NULL;
-- Step 4: Add the foreign key.
ALTER TABLE orders ADD CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customers(id);

4. Dropping a Column (Safe Sequence)

// Step 1: Stop reading the column in code.
// Remove all SELECT queries that include the column.
// Step 2: Stop writing to the column.
// Remove all INSERT/UPDATE statements that set the column.
// Step 3: Deploy and verify no errors.
// Step 4: Drop the column (migration).
-- Migration: drop_legacy_status_column
-- Only run after code changes are deployed.
ALTER TABLE orders DROP COLUMN legacy_status;

Framework-Specific Migration Tools

Laravel Migrations

// database/migrations/2024_01_15_create_orders_table.php
class CreateOrdersTable extends Migration
{
    public function up()
    {
        Schema::create('orders', function (Blueprint $table) {
            $table->id();
            $table->foreignId('user_id')->constrained()->onDelete('cascade');
            $table->string('status')->default('pending');
            $table->decimal('total', 10, 2);
            $table->timestamps();
            $table->softDeletes();
            $table->index(['user_id', 'status']);
        });
    }

    public function down()
    {
        Schema::dropIfExists('orders');
    }
}

# Run migrations.
php artisan migrate
# Rollback last batch.
php artisan migrate:rollback
# Rollback a specific number of steps.
php artisan migrate:rollback --step=3

Ruby on Rails Migrations

# db/migrate/20240115_add_tracking_to_orders.rb
class AddTrackingToOrders < ActiveRecord::Migration[7.0]
  # Required for algorithm: :concurrently; the index build must run
  # outside the migration's DDL transaction.
  disable_ddl_transaction!

  def change
    add_column :orders, :tracking_number, :string
    add_index :orders, :tracking_number,
              algorithm: :concurrently,
              if_not_exists: true
  end
end

# Run migrations.
rails db:migrate
# Rollback.
rails db:rollback STEP=1

Django Migrations

# orders/migrations/0002_add_tracking.py
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [
        ('orders', '0001_initial'),
    ]
    operations = [
        migrations.AddField(
            model_name='order',
            name='tracking_number',
            field=models.CharField(max_length=100, null=True, blank=True),
        ),
        migrations.AddIndex(
            model_name='order',
            index=models.Index(fields=['tracking_number'], name='tracking_idx'),
        ),
    ]

# Create a migration from model changes.
python manage.py makemigrations
# Run migrations.
python manage.py migrate
# Rollback to an earlier migration.
python manage.py migrate orders 0001

Large Table Migrations

Online Schema Change with pt-online-schema-change (MySQL)

# Add a column to a large table without locking.
pt-online-schema-change \
  --alter "ADD COLUMN new_field VARCHAR(255)" \
  --execute \
  --host=localhost \
  --user=root \
  --ask-pass \
  D=mydb,t=large_table

PostgreSQL Concurrent Index

-- Create an index without blocking writes.
CREATE INDEX CONCURRENTLY idx_orders_created ON orders(created_at);
-- If it fails, drop the invalid index first.
DROP INDEX CONCURRENTLY IF EXISTS idx_orders_created;

Batch Updates for Large Tables

// Laravel batch update
class BackfillUserSettings extends Migration
{
    public function up()
    {
        // Process in batches to avoid memory issues.
        User::query()
            ->whereNull('settings')
            ->chunkById(1000, function ($users) {
                foreach ($users as $user) {
                    $user->update(['settings' => json_encode(['theme' => 'light'])]);
                }
            });
    }
}

# Rails batch update
class BackfillUserSettings < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def up
    User.where(settings: nil).find_in_batches(batch_size: 1000) do |batch|
      User.where(id: batch.map(&:id)).update_all(settings: '{"theme": "light"}')
      sleep(0.1) # Reduce database load.
    end
  end
end
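The same pattern works outside any framework. A minimal Python sketch against SQLite (table name, batch size, and the settings payload are illustrative) keeps each transaction short by selecting a bounded batch of IDs, updating them, and committing before moving on:

```python
import sqlite3

def backfill_settings(conn, batch_size=1000):
    """Backfill NULL settings in bounded batches so each transaction stays short."""
    while True:
        ids = [r[0] for r in conn.execute(
            "SELECT id FROM users WHERE settings IS NULL LIMIT ?", (batch_size,)
        )]
        if not ids:
            break
        placeholders = ",".join("?" * len(ids))
        conn.execute(
            f"UPDATE users SET settings = ? WHERE id IN ({placeholders})",
            ('{"theme": "light"}', *ids),
        )
        conn.commit()  # release locks between batches

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, settings TEXT)")
conn.executemany("INSERT INTO users (settings) VALUES (?)", [(None,)] * 2500)
backfill_settings(conn, batch_size=1000)
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE settings IS NULL"
).fetchone()[0]
```

Selecting by the `IS NULL` predicate also makes the backfill resumable: if the job dies mid-run, restarting it simply picks up the remaining rows.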

Migration in CI/CD Pipeline

GitLab CI Migration Job

migrate-database:
  stage: deploy
  image: myapp:${CI_COMMIT_SHA}
  script:
    # Check for pending migrations.
    - php artisan migrate:status
    # Run migrations.
    - php artisan migrate --force
    # Verify migration.
    - php artisan migrate:status | grep -q "No pending migrations"
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      changes:
        - database/migrations/**/*
  environment:
    name: production

Kubernetes Job for Migrations

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration-${CI_COMMIT_SHA}
  namespace: production
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: wait-for-db
          image: busybox
          command: ['sh', '-c', 'until nc -z postgres 5432; do sleep 2; done']
      containers:
        - name: migrate
          image: myapp:${CI_COMMIT_SHA}
          command: ["php", "artisan", "migrate", "--force"]
          envFrom:
            - secretRef:
                name: app-secrets

Migration Rollback Strategies

Application-Level Rollback Support

// Feature flag for rollback
class OrderController extends Controller
{
    public function store(Request $request)
    {
        $order = new Order();
        $order->user_id = auth()->id();
        $order->total = $request->total;
        // New field, guarded by a feature flag so the write can be disabled on rollback.
        if (Feature::active('new-order-tracking')) {
            $order->tracking_number = $this->generateTracking();
        }
        $order->save();
        return response()->json($order);
    }
}

Database Snapshot Before Migration

#!/bin/bash
# pre-migration-backup.sh
set -e
BACKUP_NAME="pre-migration-$(date +%Y%m%d-%H%M%S)"
# Create snapshot.
pg_dump "$DATABASE_URL" > "/tmp/${BACKUP_NAME}.sql"
# Upload to S3.
aws s3 cp "/tmp/${BACKUP_NAME}.sql" "s3://db-backups/migrations/${BACKUP_NAME}.sql"
# Store backup name for potential rollback.
echo "$BACKUP_NAME" > /tmp/last-migration-backup
echo "Backup created: $BACKUP_NAME"

Migration Testing

Test Migrations in CI

test-migrations:
  stage: test
  services:
    - postgres:15
  variables:
    DATABASE_URL: postgres://test:test@postgres:5432/test
  script:
    # Apply all migrations.
    - php artisan migrate --force
    # Run down migrations.
    - php artisan migrate:rollback --step=999
    # Run up again.
    - php artisan migrate --force
    # Seed and verify.
    - php artisan db:seed
    - php artisan test --filter=DatabaseTest

Shadow Database Testing

test-migration-on-copy:
  stage: pre-deploy
  script:
    # Create a copy of the production database.
    - pg_dump $PROD_DATABASE_URL | psql $SHADOW_DATABASE_URL
    # Run migration on shadow.
    - DATABASE_URL=$SHADOW_DATABASE_URL php artisan migrate --force
    # Run smoke tests.
    - DATABASE_URL=$SHADOW_DATABASE_URL php artisan test --filter=SmokeTest
    # Drop the shadow database. Connect via a maintenance URL, since PostgreSQL
    # cannot drop the database the session is currently connected to.
    - psql $MAINTENANCE_DATABASE_URL -c "DROP DATABASE shadow_db"

Migration Checklist

Before Migration

  • [ ] Confirm data ownership and dependencies
  • [ ] Migration tested in a staging environment with production-like data
  • [ ] Rollback script tested
  • [ ] Database backup created and verified with a restore drill
  • [ ] Team notified of the migration window
  • [ ] Monitoring dashboards ready

During Migration

  • [ ] Monitor database CPU and I/O
  • [ ] Monitor application error rates
  • [ ] Monitor query latency
  • [ ] Monitor lock wait times
  • [ ] Check replication lag (if applicable)

After Migration

  • [ ] Verify migration completed successfully
  • [ ] Run smoke tests
  • [ ] Check application logs for errors
  • [ ] Run reconciliation checks (compare record counts, sums, or hashes)
  • [ ] Update documentation
  • [ ] Clean up old columns/tables (after grace period)
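The reconciliation item is worth automating. A minimal Python sketch (SQLite stand-ins for the source and target databases; the `orders` table and key column are illustrative) compares row counts and a per-key checksum after a migration or backfill:

```python
import sqlite3

def reconcile(src, dst, table, key):
    """Compare row counts and a key checksum between two databases."""
    count_sql = f"SELECT COUNT(*) FROM {table}"
    sum_sql = f"SELECT COALESCE(SUM({key}), 0) FROM {table}"
    checks = {
        "count": (src.execute(count_sql).fetchone()[0],
                  dst.execute(count_sql).fetchone()[0]),
        "key_sum": (src.execute(sum_sql).fetchone()[0],
                    dst.execute(sum_sql).fetchone()[0]),
    }
    # Each check passes only when source and destination agree.
    return {name: a == b for name, (a, b) in checks.items()}

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    db.executemany("INSERT INTO orders (id, total) VALUES (?, ?)",
                   [(i, i * 10.0) for i in range(1, 101)])

result = reconcile(src, dst, "orders", "id")
```

Counts and key sums are cheap first-pass checks; for stronger guarantees, compare per-row hashes over the columns the migration touched.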

The Senior Database Mindset

Senior engineers treat databases as critical infrastructure. They invest in:

  • Backups and recovery drills - Never run a major migration without a verified backup
  • Data ownership clarity - Before changing a database, define which system owns each dataset
  • Safe schema evolution patterns - Additive, backward-compatible steps with rollback points
  • Reconciliation checks - Automated comparison of record counts and data integrity

Common Pitfalls to Avoid

  • Running large migrations without tested backups
  • Introducing breaking schema changes without versioning
  • Ignoring replication lag during heavy backfills
  • Adding indexes without validating query impact
  • Failing to document migration steps

Treat migrations as production changes, not one-off scripts. When database operations are predictable and safe, the entire system becomes easier to evolve.
