n8n PostgreSQL Setup: Complete Database Configuration Guide
Your n8n database is a ticking time bomb. If you’re running production workflows on the default SQLite database, you’re one concurrent execution away from corrupted data and lost credentials.
SQLite works fine for testing. It’s simple, requires zero configuration, and comes bundled with n8n. But the moment you move beyond personal experiments, SQLite’s limitations become dangerous. Multiple workflows writing simultaneously? Corrupted database. Trying to back up while n8n runs? Inconsistent data. Need queue mode for scaling? Not supported.
PostgreSQL eliminates these risks entirely. It handles concurrent connections properly, supports safe hot backups, enables queue mode for distributed workers, and scales with your workload. The migration takes 30 minutes and prevents catastrophic failures.
The SQLite Limitation
SQLite stores everything in a single file. Every read and write locks that file. When two workflows try to write execution data at the same moment, one of them fails. Sometimes silently. Sometimes with corruption that you won’t notice until weeks later when you can’t load your workflows.
The n8n community forums are full of horror stories: months of workflow development gone because a SQLite file corrupted during a high-traffic period. Credentials lost because a backup captured a partial write. Queue mode failing mysteriously because SQLite can’t handle the concurrent worker connections.
What PostgreSQL Changes
PostgreSQL uses row-level locking instead of file-level locking. Multiple workers can write to different rows simultaneously without conflict. Your workflows execute in parallel without stepping on each other.
Beyond concurrency, PostgreSQL gives you:
- Point-in-time recovery: Roll back to any moment before a problem occurred
- Replication: Keep a hot standby ready for failover
- Connection pooling: Handle hundreds of concurrent connections efficiently
- Online backups: Back up without stopping n8n or risking data inconsistency
What You’ll Learn
- Why PostgreSQL is required for production n8n deployments
- Fresh PostgreSQL setup with Docker Compose
- Connecting to managed databases (AWS RDS, Supabase, DigitalOcean)
- SSL configuration for secure database connections
- Step-by-step SQLite to PostgreSQL migration
- Performance tuning for high-volume workflows
- Backup strategies and disaster recovery
- Troubleshooting common connection and configuration errors
Why PostgreSQL Over SQLite
The decision isn’t about preference. It’s about what your deployment actually requires.
Queue Mode Requires PostgreSQL
If you’re planning to scale n8n with queue mode, PostgreSQL is mandatory. SQLite doesn’t support the concurrent access patterns that multiple workers require. The main instance and workers all need to read and write execution data simultaneously. SQLite’s file-level locking makes this impossible.
Even without queue mode, any deployment running more than a handful of workflows benefits from PostgreSQL’s concurrency handling.
Concurrent Execution Safety
Consider what happens during a typical busy period. A webhook triggers workflow A. A schedule fires workflow B. A manual execution starts workflow C. All three workflows complete within seconds of each other, all trying to write execution results to the database.
With SQLite, these writes queue up behind the file lock. Under heavy load, writes start timing out. Execution data gets lost. In extreme cases, the database file itself becomes corrupted.
PostgreSQL handles this scenario without breaking a sweat. Each execution writes to its own rows with minimal locking. Hundreds of concurrent executions complete without conflict.
Backup Safety
You cannot safely back up SQLite while n8n is running. Any backup taken during a write operation captures a corrupted state. The official recommendation is to stop n8n before backing up, which means downtime every time you want a backup.
PostgreSQL supports hot backups. You can run pg_dump while n8n processes workflows. The backup captures a consistent snapshot without interrupting operations.
Feature Comparison
| Aspect | SQLite | PostgreSQL |
|---|---|---|
| Concurrent writes | File-level locking | Row-level locking |
| Queue mode | Not supported | Fully supported (and required) |
| Hot backups | Risky without stopping n8n | Fully supported |
| Replication | Not supported | Built-in streaming replication |
| Connection pooling | N/A | Handles hundreds of connections |
| Point-in-time recovery | Not possible | Full PITR support |
| Maximum practical executions | ~100/day | 10,000+/day |
| Setup complexity | Zero configuration | 30-minute setup |
For anything beyond personal testing, PostgreSQL is the clear choice. Our self-hosting mistakes guide covers more details on why SQLite in production leads to problems.
Prerequisites and Planning
Before configuring PostgreSQL, ensure your environment meets the requirements.
PostgreSQL Version Requirements
n8n requires PostgreSQL 13 or higher. Version 15 is recommended for best performance and security. Avoid PostgreSQL 16 initially if using managed services, as some providers still have compatibility issues.
Check your PostgreSQL version:
SELECT version();
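If PostgreSQL already runs in Docker (as in the compose setup later in this guide), you can run the same check from the host; a quick sketch assuming the service and database names used below:

docker compose exec postgres psql -U n8n -d n8n -c "SELECT version();"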
Docker or Native Installation
You have two deployment options:
Docker (Recommended)
Run PostgreSQL in a container alongside n8n. This approach isolates the database, simplifies upgrades, and works identically across environments. Our Docker setup guide covers the basics.
Native Installation
Install PostgreSQL directly on the server. This makes sense if you already have PostgreSQL running for other applications or need specific operating system integrations.
For most self-hosted deployments, Docker provides the simplest path forward.
Hardware Recommendations
PostgreSQL resource needs depend on your workflow volume:
| Workload | PostgreSQL CPU | PostgreSQL RAM | Storage |
|---|---|---|---|
| Light (< 1,000 executions/day) | 1 vCPU | 1 GB | 10 GB |
| Medium (1,000-10,000/day) | 2 vCPU | 2 GB | 50 GB |
| Heavy (10,000+/day) | 4 vCPU | 4 GB | 100 GB+ |
Execution data grows quickly. Plan for storage expansion or configure aggressive execution pruning.
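To see how close you are to those limits, check the database's current on-disk size; a quick sketch against the containerized setup below:

# Show the total on-disk size of the n8n database
docker compose exec postgres psql -U n8n -d n8n -c "SELECT pg_size_pretty(pg_database_size('n8n'));"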
The Encryption Key
Critical: Generate your encryption key before any setup.
n8n encrypts credentials in the database using N8N_ENCRYPTION_KEY. This key must remain consistent across all n8n instances and database migrations. Lose the key, and you lose access to all stored credentials permanently.
Generate a secure key:
openssl rand -hex 32
Store this key in a password manager or secrets vault. Back it up separately from your database backups. You’ll need it for every n8n instance that connects to this database.
Fresh PostgreSQL Setup with Docker Compose
This configuration runs PostgreSQL and n8n in containers with proper health checks, persistent storage, and production-ready defaults.
Project Structure
Create a dedicated directory:
mkdir n8n-postgres && cd n8n-postgres
Docker Compose Configuration
Create docker-compose.yml:
services:
  postgres:
    image: postgres:15-alpine
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - n8n-network

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      # Database Configuration
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      # Security
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      # Instance Settings
      - N8N_HOST=${N8N_HOST}
      - N8N_PORT=5678
      - N8N_PROTOCOL=${N8N_PROTOCOL}
      - WEBHOOK_URL=${WEBHOOK_URL}
      # Timezone
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - TZ=${GENERIC_TIMEZONE}
      # Performance
      - N8N_RUNNERS_ENABLED=true
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=168
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - n8n-network

volumes:
  postgres_data:
  n8n_data:

networks:
  n8n-network:
Environment Variables
Create .env in the same directory:
# PostgreSQL Configuration
POSTGRES_USER=n8n
POSTGRES_PASSWORD=your-secure-database-password-here
POSTGRES_DB=n8n
# n8n Configuration
N8N_ENCRYPTION_KEY=your-32-byte-hex-encryption-key-here
N8N_HOST=localhost
N8N_PROTOCOL=http
WEBHOOK_URL=http://localhost:5678/
GENERIC_TIMEZONE=America/New_York
Replace placeholder values:
- Generate POSTGRES_PASSWORD with openssl rand -base64 24
- Generate N8N_ENCRYPTION_KEY with openssl rand -hex 32
- Set N8N_HOST to your domain for production
- Change N8N_PROTOCOL to https when using SSL
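If you'd rather not paste secrets by hand, here's a small sketch that generates and appends them to .env (review the file afterwards; the variable names match the compose file above):

# Generate strong secrets and append them to .env
{
  echo "POSTGRES_PASSWORD=$(openssl rand -base64 24)"
  echo "N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)"
} >> .env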
Start the Stack
docker compose up -d
Verify the Setup
Check both services are healthy:
docker compose ps
Expected output shows both containers with “Up” status and postgres healthcheck as “healthy.”
Test the database connection directly:
docker compose exec postgres psql -U n8n -d n8n -c "SELECT 1"
View n8n startup logs:
docker compose logs -f n8n
Look for the “n8n ready on” message indicating successful startup.
Understanding the Configuration
Health Check: The healthcheck configuration ensures n8n doesn’t start until PostgreSQL is ready to accept connections. Without this, n8n might try to connect before PostgreSQL initializes, causing startup failures.
Named Volumes: postgres_data and n8n_data persist data across container restarts. Never skip volume configuration. We’ve seen users lose months of work from this single oversight.
Network Isolation: The custom network keeps PostgreSQL inaccessible from outside the Docker environment. Only n8n can reach the database.
Connecting to Managed PostgreSQL
Running PostgreSQL in a container works well for many deployments. But managed database services like AWS RDS, Supabase, DigitalOcean Managed Databases, or Railway reduce operational burden. They handle backups, updates, failover, and scaling automatically.
Environment Variables for Managed Databases
When connecting to an external PostgreSQL instance, update your n8n configuration:
# Database Configuration for Managed PostgreSQL
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=your-database-hostname.amazonaws.com
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n_user
DB_POSTGRESDB_PASSWORD=your-database-password
DB_POSTGRESDB_SCHEMA=public
The DB_POSTGRESDB_SCHEMA setting specifies which schema n8n uses. Most managed databases default to public. Some enterprise setups use custom schemas for isolation.
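If you do use a custom schema, create it and hand it to the n8n role before n8n first starts, or table creation will fail. A minimal sketch, where admin_user and n8n_schema are placeholder names:

# Run once as an admin, then set DB_POSTGRESDB_SCHEMA=n8n_schema for n8n
psql -h your-database-hostname.amazonaws.com -U admin_user -d n8n \
  -c 'CREATE SCHEMA n8n_schema AUTHORIZATION n8n_user;'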
SSL Configuration
Most managed database providers require SSL connections. n8n supports several SSL configuration options:
# Basic SSL (verify server certificate)
DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=true
# With CA certificate (recommended for production)
DB_POSTGRESDB_SSL_CA=/path/to/ca-certificate.crt
# With client certificate (mutual TLS)
DB_POSTGRESDB_SSL_CA=/path/to/ca-certificate.crt
DB_POSTGRESDB_SSL_CERT=/path/to/client-certificate.crt
DB_POSTGRESDB_SSL_KEY=/path/to/client-key.key
AWS RDS Configuration
Download the RDS CA bundle and configure:
DB_POSTGRESDB_SSL_CA=/path/to/rds-ca-bundle.pem
DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=true
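At the time of writing, AWS publishes a combined bundle covering all regions; a download sketch (confirm the current URL in the RDS documentation):

# Fetch the global RDS CA bundle
curl -fsSL -o rds-ca-bundle.pem https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem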
Supabase Configuration
Supabase requires SSL. Use the connection string from your Supabase dashboard and enable SSL:
DB_POSTGRESDB_HOST=db.xxxx.supabase.co
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=true
Note: Some users report connection issues with Supabase’s connection pooler (port 6543). Use the direct connection (port 5432) for n8n.
DigitalOcean Managed Database
Download the CA certificate from your database dashboard:
DB_POSTGRESDB_SSL_CA=/path/to/ca-certificate.crt
DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=true
Docker Volume for Certificates
When running n8n in Docker with SSL certificates, mount the certificate files:
services:
  n8n:
    volumes:
      - n8n_data:/home/node/.n8n
      - ./certs:/certs:ro
    environment:
      - DB_POSTGRESDB_SSL_CA=/certs/ca-certificate.crt
Managed Database Comparison
| Provider | Starting Price | Automatic Backups | Connection Pooling | Best For |
|---|---|---|---|---|
| AWS RDS | ~$15/month | Daily | Via RDS Proxy | Enterprise, existing AWS |
| Supabase | Free tier | Daily | Built-in | Startups, developers |
| DigitalOcean | $15/month | Daily | Not built-in | Simple setup, good docs |
| Railway | $5/month | Daily | Not built-in | Rapid deployment |
| Neon | Free tier | Continuous | Built-in | Serverless, branching |
Choose based on your existing infrastructure, budget, and operational preferences.
Migrating from SQLite to PostgreSQL
Already running n8n with SQLite? Migration preserves your workflows, credentials, and settings. The process requires careful planning but avoids starting from scratch.
Before You Begin
- Back up everything: Copy your entire .n8n directory (a sample command follows this list)
- Note your encryption key: Check the N8N_ENCRYPTION_KEY environment variable
- Export workflows: Download JSON exports from n8n’s UI as a secondary backup
- Plan downtime: Migration requires stopping n8n temporarily
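For the first step, a sketch of the directory copy. Native installs keep data in ~/.n8n; for the Docker setup above, archive the named volume instead (its full name is usually prefixed with the project directory, so check docker volume ls):

# Native install: archive the data directory
tar czf n8n-backup-$(date +%Y%m%d).tar.gz -C ~ .n8n
# Docker: archive the n8n_data volume (adjust the volume name to match yours)
docker run --rm -v n8n-postgres_n8n_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/n8n-data-$(date +%Y%m%d).tar.gz -C /data .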
Method 1: CLI Export/Import (Recommended)
n8n provides CLI commands for database-agnostic migration.
Step 1: Export from SQLite
Stop n8n first, then export:
# Export workflows and credentials
n8n export:workflow --all --output=./backup/workflows
n8n export:credentials --all --output=./backup/credentials
# Optional: Include execution history (can be very large)
n8n export:entities --outputDir=./backup --includeExecutionHistoryDataTables=true
Step 2: Set Up PostgreSQL
Configure your new PostgreSQL instance using the Docker Compose setup above. Start the database but not n8n yet:
docker compose up -d postgres
Step 3: Configure n8n for PostgreSQL
Update your environment variables to point to PostgreSQL. Keep the same N8N_ENCRYPTION_KEY.
Step 4: Initialize the Database
Start n8n briefly to create tables:
docker compose up -d n8n
Wait for startup, then stop it:
docker compose stop n8n
Step 5: Import Data
# Import credentials first (workflows reference them)
n8n import:credentials --input=./backup/credentials
# Import workflows
n8n import:workflow --input=./backup/workflows
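If n8n itself runs in Docker, the CLI lives inside the container and the backup must be copied in first. A sketch using docker compose cp, run while the n8n container is up (restart n8n afterwards):

# Copy the backup into the container, then run the imports there
docker compose cp ./backup n8n:/tmp/backup
docker compose exec n8n n8n import:credentials --input=/tmp/backup/credentials
docker compose exec n8n n8n import:workflow --input=/tmp/backup/workflows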
Step 6: Verify and Start
docker compose up -d n8n
Log in and verify your workflows and credentials appear correctly.
Method 2: Direct Database Migration
For complex setups with custom tables or when CLI export fails, use database tools like DBeaver for direct migration.
Critical Table Order
Import tables in this order to satisfy foreign key constraints:
1. credentials_entity
2. workflow_entity
3. user
4. folder
5. project
6. project_relation
7. tag_entity
8. workflows_tags
9. webhook_entity
10. shared_credentials
11. shared_workflow
Steps:
- Export each table from SQLite to CSV using DB Browser for SQLite
- Create the PostgreSQL database and start n8n once to create tables
- Stop n8n
- Import CSV files into PostgreSQL tables in order
- Update sequences for auto-increment columns (see the sketch after this list)
- Start n8n and verify
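For the sequence update, a sketch for one table (table and column names vary across n8n versions, and only integer-keyed tables have sequences):

# Reset a sequence so new inserts don't collide with imported ids
docker compose exec postgres psql -U n8n -d n8n -c \
  "SELECT setval(pg_get_serial_sequence('tag_entity', 'id'), COALESCE((SELECT MAX(id) FROM tag_entity), 1));"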
Common Migration Issues
“Duplicate key” errors during import
n8n creates default records when initializing. Delete these before importing:
DELETE FROM shared_workflow;
DELETE FROM shared_credentials;
DELETE FROM "user" WHERE email = 'default@n8n.io';
Credentials not decrypting
Your N8N_ENCRYPTION_KEY doesn’t match what was used with SQLite. Check your old configuration and ensure the key is identical.
Missing workflows after import
Verify the workflow_entity table imported successfully. Check for import errors in your database tool’s logs.
Version mismatch errors
Ensure both SQLite and PostgreSQL n8n instances run the same version. Version differences cause schema mismatches.
Verification Checklist
After migration, verify:
- All workflows appear in the UI
- Workflow executions run successfully
- Credentials work (test a workflow using each credential)
- Scheduled triggers fire correctly
- Webhook URLs respond
- User accounts can log in
PostgreSQL Performance Tuning
Default PostgreSQL settings work for small deployments. High-volume workflows benefit from tuning.
Connection Pool Settings
n8n manages database connections through a pool. For busy deployments:
# Increase the connection pool size (default: 2)
DB_POSTGRESDB_POOL_SIZE=20
Match this to your workflow concurrency. Too few connections cause workflows to wait. Too many overwhelm PostgreSQL.
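Before raising the pool size, check how many connections n8n actually holds under load; a quick sketch:

# Count active connections to the n8n database
docker compose exec postgres psql -U n8n -d n8n -c \
  "SELECT count(*) AS connections FROM pg_stat_activity WHERE datname = 'n8n';"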
PostgreSQL Memory Settings
Edit postgresql.conf, or pass the settings as server command-line flags when running in Docker:
# Shared memory for caching
shared_buffers=256MB
# Memory for operations like sorting
work_mem=16MB
# Maintenance operations memory
maintenance_work_mem=128MB
For containerized PostgreSQL, set these via command arguments:
services:
  postgres:
    command:
      - postgres
      - -c
      - shared_buffers=256MB
      - -c
      - work_mem=16MB
Execution Data Pruning
Execution history grows continuously. Without pruning, your database balloons and queries slow down.
Configure automatic pruning in n8n:
# Enable pruning
EXECUTIONS_DATA_PRUNE=true
# Keep executions for 7 days (168 hours)
EXECUTIONS_DATA_MAX_AGE=168
# Maximum executions to keep
EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000
# For production: only save failed executions
EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
Index Maintenance
PostgreSQL indexes degrade over time. Schedule regular maintenance:
-- Analyze tables for query optimization
ANALYZE;
-- Rebuild indexes (run during low-traffic periods)
REINDEX DATABASE n8n;
-- Reclaim space from deleted rows
VACUUM ANALYZE;
For Docker deployments, run these via exec:
docker compose exec postgres psql -U n8n -d n8n -c "VACUUM ANALYZE;"
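To keep maintenance from becoming a manual chore, a cron sketch that runs the vacuum weekly during a quiet window (the paths mirror the backup script later in this guide and are assumptions):

0 3 * * 0 docker compose -f /opt/n8n/docker-compose.yml exec -T postgres psql -U n8n -d n8n -c "VACUUM ANALYZE;" >> /var/log/n8n-maintenance.log 2>&1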
Monitoring Queries
Identify slow queries impacting performance:
-- Log queries slower than 1000 ms (requires a config reload to apply)
ALTER SYSTEM SET log_min_duration_statement = '1000';
SELECT pg_reload_conf();
-- Check table sizes
SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;
-- Check for blocking queries
SELECT pid, query, state, wait_event_type
FROM pg_stat_activity
WHERE state != 'idle';
For detailed timeout debugging, see our timeout errors guide.
Backup and Recovery
PostgreSQL backups are straightforward but require planning for production deployments.
Basic Backup with pg_dump
Create a database dump:
# From Docker
docker compose exec postgres pg_dump -U n8n -d n8n > backup_$(date +%Y%m%d).sql
# Compressed backup
docker compose exec postgres pg_dump -U n8n -d n8n | gzip > backup_$(date +%Y%m%d).sql.gz
Automated Backup Script
Create a cron job for daily backups:
#!/bin/bash
# /opt/n8n/backup.sh
BACKUP_DIR="/opt/n8n/backups"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d_%H%M%S)
# Create backup
docker compose -f /opt/n8n/docker-compose.yml exec -T postgres \
pg_dump -U n8n -d n8n | gzip > "$BACKUP_DIR/n8n_$DATE.sql.gz"
# Upload to S3 (optional)
aws s3 cp "$BACKUP_DIR/n8n_$DATE.sql.gz" s3://your-bucket/n8n-backups/
# Delete old backups
find "$BACKUP_DIR" -name "n8n_*.sql.gz" -mtime +$RETENTION_DAYS -delete
Add to crontab:
0 2 * * * /opt/n8n/backup.sh >> /var/log/n8n-backup.log 2>&1
Restore from Backup
# Stop n8n first
docker compose stop n8n
# Drop and recreate the database (connect to the default "postgres" database,
# since you cannot drop the database you are connected to; the superuser role
# in this setup is n8n, not postgres)
docker compose exec postgres psql -U n8n -d postgres -c "DROP DATABASE n8n;"
docker compose exec postgres psql -U n8n -d postgres -c "CREATE DATABASE n8n OWNER n8n;"
# Restore
gunzip -c backup_20241214.sql.gz | docker compose exec -T postgres psql -U n8n -d n8n
# Start n8n
docker compose up -d n8n
Point-in-Time Recovery
For critical deployments, configure PostgreSQL WAL archiving:
# In postgresql.conf
wal_level = replica
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/archive/%f'
This enables recovery to any point in time, not just backup snapshots.
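WAL archiving needs a base backup to replay onto; a minimal sketch with pg_basebackup run inside the container (the target directory is an assumption — in production, write the base backup to separate storage):

# Take a base backup for point-in-time recovery
docker compose exec postgres pg_basebackup -U n8n -D /var/lib/postgresql/basebackup -Fp -Xs -P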
Testing Restores
Backing up without testing restores is not a backup strategy. Schedule quarterly restore tests:
- Create a test PostgreSQL instance
- Restore your latest backup
- Start n8n pointed at the test database
- Verify workflows and credentials work
- Document any issues
For comprehensive credential handling, see our credential management guide.
Troubleshooting Common Issues
PostgreSQL connection problems follow predictable patterns. Here are the most common issues and their solutions.
Connection Refused
Symptom: n8n fails with “Connection refused” or “ECONNREFUSED”
Causes and Solutions:
- PostgreSQL not running. Check the container status:
docker compose ps postgres # Should show "Up" status
- Wrong hostname. In Docker, use the service name (postgres), not localhost:
DB_POSTGRESDB_HOST=postgres # Correct for Docker
DB_POSTGRESDB_HOST=localhost # Wrong in Docker network
- Firewall blocking the connection. Check if the port is accessible:
nc -zv postgres 5432
- PostgreSQL not accepting connections. Check that pg_hba.conf allows your connection method.
Authentication Failed
Symptom: “password authentication failed” or “role does not exist”
Solutions:
- Verify credentials match between .env and PostgreSQL:
docker compose exec postgres psql -U n8n -d n8n # Should connect without error
- Check for special characters in the password. Use quotes in .env:
POSTGRES_PASSWORD="p@ss$word!"
- Recreate the user if necessary:
DROP USER n8n;
CREATE USER n8n WITH PASSWORD 'your-password';
GRANT ALL PRIVILEGES ON DATABASE n8n TO n8n;
SSL Handshake Errors
Symptom: “SSL connection failed” or certificate errors
Solutions:
- Certificate not found: Verify the path and file permissions:
ls -la /path/to/ca-certificate.crt
- Self-signed certificate rejected: For testing only, disable verification:
DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=false
- Wrong certificate: Ensure you’re using the CA certificate, not the server certificate
Port Already in Use
Symptom: “listen EADDRINUSE: address already in use 127.0.0.1:5432”
Solution:
Another PostgreSQL instance is running. Either stop it or use a different port:
services:
  postgres:
    ports:
      - "5433:5432" # Map to a different host port
Then update DB_POSTGRESDB_PORT=5433 if connecting from outside Docker.
Out of Memory Errors
Symptom: PostgreSQL crashes or becomes unresponsive under load
Solutions:
- Increase container memory:
services:
  postgres:
    deploy:
      resources:
        limits:
          memory: 2G
- Tune PostgreSQL memory settings (see Performance Tuning)
- Enable execution pruning to reduce data volume
Slow Queries
Symptom: n8n UI becomes sluggish, executions take longer to start
Solutions:
- Run VACUUM ANALYZE:
docker compose exec postgres psql -U n8n -d n8n -c "VACUUM ANALYZE;"
- Inspect column statistics to spot candidates for missing indexes:
SELECT schemaname, tablename, attname, n_distinct, correlation
FROM pg_stats
WHERE schemaname = 'public';
- Enable and review the slow query log
For workflow-specific debugging, use our workflow debugger tool.
When to Seek Expert Help
PostgreSQL configuration is straightforward for experienced administrators. But not every team has database expertise in-house.
Consider professional assistance when:
- You’re migrating a production instance with hundreds of workflows
- High availability requirements demand careful architecture
- Compliance requirements necessitate specific security configurations
- Database performance issues persist despite tuning
- You need queue mode setup with proper worker scaling
Our n8n self-hosted setup service includes PostgreSQL configuration, migration assistance, and ongoing support. For existing deployments needing optimization, our consulting services provide expert guidance.
Frequently Asked Questions
Should I use SQLite or PostgreSQL for n8n?
PostgreSQL for anything beyond personal testing or development. SQLite works fine for learning n8n or building workflows you plan to migrate later. But production deployments need PostgreSQL’s concurrency handling, backup capabilities, and queue mode support. The 30-minute setup investment prevents data loss disasters and enables scaling when you need it.
How do I migrate from SQLite to PostgreSQL without losing data?
Export your workflows and credentials using n8n’s CLI commands (n8n export:workflow --all and n8n export:credentials --all). Set up PostgreSQL, configure n8n to connect to it, start n8n once to create tables, then import your exports. Keep your N8N_ENCRYPTION_KEY identical between instances. The entire process takes about an hour for typical deployments. For complex migrations, our setup service includes migration assistance.
Can I use Supabase, PlanetScale, or other managed databases with n8n?
n8n works with any PostgreSQL-compatible database. Supabase, DigitalOcean Managed Databases, AWS RDS, and Railway all work. Configure SSL as required by your provider. Note that some connection poolers (like Supabase’s port 6543) may cause issues. Use direct connections when possible. PlanetScale uses MySQL, which n8n doesn’t support. Stick with PostgreSQL providers.
What PostgreSQL version does n8n require?
PostgreSQL 13 or higher. Version 15 is recommended for best performance and security features. Most managed database providers offer PostgreSQL 15 by default. Avoid running older versions as they lack important performance optimizations and security patches.
How do I back up my n8n PostgreSQL database?
Use pg_dump for logical backups: docker compose exec postgres pg_dump -U n8n -d n8n > backup.sql. Automate this with cron and upload to cloud storage. For production, also enable WAL archiving for point-in-time recovery. Test your restores quarterly. A backup you haven’t tested isn’t a backup. Keep your N8N_ENCRYPTION_KEY backed up separately. Without it, credential backups are useless.
Next Steps
PostgreSQL transforms n8n from a personal tool into production-ready infrastructure. Start with the Docker Compose configuration for new deployments, or follow the migration guide if you’re moving from SQLite.
Recommended reading:
- n8n Self-Hosting Guide for comprehensive deployment strategies
- n8n Docker Setup for container deployment details
- Queue Mode Configuration when you need to scale beyond a single instance
- Common Self-Hosting Mistakes to avoid costly errors
For official PostgreSQL configuration documentation, see the n8n database settings guide and the PostgreSQL official documentation.
If infrastructure management isn’t your focus, our n8n setup service delivers a production-ready deployment configured for your specific requirements.