n8n Queue Mode: Process 10,000+ Workflows Without Crashing

Logic Workflow Team
#n8n #queue mode #scaling #Redis #workers #DevOps #self-hosting #tutorial

Your n8n instance is about to collapse. You just don’t know it yet.

Everything runs smoothly until it doesn’t. One day you trigger 50 workflows at once, and suddenly your server freezes. The UI becomes unresponsive. Webhooks start timing out. Your automation infrastructure, the backbone of your business operations, grinds to a halt.

This scenario plays out constantly in the n8n community. Users build successful automations, scale their operations, and then watch helplessly as their single n8n instance buckles under the load.

The Breaking Point

Most n8n installations run in “regular mode,” where one process handles everything: the UI, webhooks, triggers, and workflow execution. This works fine for small deployments. But as your automation needs grow, that single process becomes a bottleneck.

Common symptoms include:

  • Workflows timing out during high-traffic periods
  • “JavaScript heap out of memory” errors crashing your instance
  • Webhooks returning 503 errors
  • The editor becoming sluggish or unresponsive
  • Scheduled workflows starting late or skipping entirely

The Solution Most People Miss

Queue mode exists specifically to solve these problems, yet many n8n users have never heard of it. Queue mode separates workflow execution from the main n8n process, distributing work across multiple dedicated workers.

Instead of one overloaded process trying to do everything, you get a coordinated team: a main instance managing triggers and the UI, Redis handling the job queue, and workers executing workflows in parallel.

Key Insight: Queue mode transforms n8n from a single-threaded application into a distributed system capable of processing thousands of workflows simultaneously.

What You’ll Learn

  • How queue mode architecture actually works under the hood
  • When you need queue mode versus when regular mode is sufficient
  • Complete Docker Compose setup for production deployment
  • Worker configuration and scaling strategies
  • High availability with multi-main setups
  • Monitoring, troubleshooting, and performance optimization
  • Real-world examples with hardware recommendations

How n8n Queue Mode Works

Understanding the architecture helps you make better configuration decisions. In queue mode, n8n splits into three distinct components that work together.

The Main Instance

The main n8n process handles:

  • Web UI and REST API for workflow management
  • Webhook reception and routing
  • Schedule triggers and polling
  • Workflow storage and configuration

When a workflow needs to execute, the main instance doesn’t run it directly. Instead, it creates an execution record in the database and pushes a message to Redis.

The Redis Queue

Redis acts as the message broker between the main instance and workers. It maintains a queue of pending executions and ensures reliable delivery. When the main instance needs to run a workflow, it sends the execution ID to Redis. Redis holds this message until a worker picks it up.

This decoupling is critical. If all workers are busy, new executions wait in the queue rather than overloading any single process. Redis also handles acknowledgments, ensuring no execution gets lost if a worker crashes mid-job.

n8n uses the Bull queue library, which runs on top of Redis. Bull provides job priorities, delayed jobs, rate limiting, and automatic retries. These features make Redis queuing robust enough for production workloads handling thousands of executions daily.
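
You can inspect this queue directly with redis-cli. A quick sketch, assuming n8n's default Bull queue name (jobs) and key prefix (bull), both of which may differ across n8n versions:

# Executions waiting for a worker to pick them up
redis-cli LLEN bull:jobs:wait

# Executions currently being processed
redis-cli LLEN bull:jobs:active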

Worker Processes

Workers are separate n8n instances running in worker mode. They:

  • Pull execution messages from Redis
  • Retrieve workflow details from the database
  • Execute the workflow
  • Write results back to the database
  • Notify Redis (and the main instance) when done

Each worker can handle multiple concurrent executions. You can run multiple workers on the same machine or distribute them across your infrastructure.
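
Outside of containers, a worker is just the n8n CLI started in worker mode. A minimal example (it needs the same database, Redis, and encryption-key environment as the main instance):

# Start a worker handling up to 5 concurrent executions
n8n worker --concurrency=5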

Queue Mode vs Regular Mode

| Aspect | Regular Mode | Queue Mode |
| --- | --- | --- |
| Execution handling | Single process | Distributed workers |
| Scalability | Limited by one process | Horizontally scalable |
| Failure isolation | One crash affects everything | Workers fail independently |
| Resource usage | All on one server | Distributed across infrastructure |
| Complexity | Simple setup | Requires Redis + PostgreSQL |
| Best for | Small deployments | Production workloads |

When You Need Queue Mode

Not every n8n deployment needs queue mode. The added complexity only makes sense when you’re hitting real limits.

Signs You’ve Outgrown Single-Instance n8n

Memory Pressure

If you see “Allocation failed - JavaScript heap out of memory” errors, your workflows are consuming more memory than your single instance can handle. This happens when:

  • Processing large JSON payloads (100MB+)
  • Handling binary files (images, PDFs, spreadsheets)
  • Running many concurrent workflows (10+)
  • Using memory-heavy nodes like Code nodes

Execution Bottlenecks

Your workflows start queueing up internally, leading to:

  • Scheduled workflows running late
  • Webhook responses timing out
  • Long delays between trigger and execution

UI Responsiveness Issues

When the same process handles both execution and the UI, heavy workflows make the editor sluggish or unresponsive.


High Availability Requirements

If your automation is business-critical, a single point of failure is unacceptable. Queue mode enables architectures where components can fail independently without total system outage.

Decision Framework

| Scenario | Recommendation |
| --- | --- |
| Personal projects, testing, development | Regular mode |
| Small business, < 1,000 executions/day | Regular mode (monitor closely) |
| Production workloads, 1,000-10,000 executions/day | Queue mode with 2-3 workers |
| Enterprise, > 10,000 executions/day | Queue mode with auto-scaling workers |
| Mission-critical automation | Queue mode with multi-main HA |

Prerequisites and Requirements

Before setting up queue mode, you need the right infrastructure foundation.

Important: Queue mode requires PostgreSQL and Redis. SQLite is not supported.

Database: PostgreSQL Required

Queue mode does not work with SQLite. You need PostgreSQL 13 or higher.

Why PostgreSQL?

  • Handles concurrent connections from multiple n8n instances
  • Supports the transaction isolation queue mode requires
  • Enables proper locking for distributed execution
  • Provides better performance at scale

If you’re currently on SQLite, migrate first. Our self-hosting mistakes guide covers the migration process.
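
Before going further, it's worth confirming your server meets the version requirement. A quick check, assuming the n8n database and user names used later in this guide:

# Should report 13 or higher
psql -h localhost -U n8n -d n8n -c 'SHOW server_version;'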

Redis Server

You need a Redis instance accessible to all n8n processes. Redis handles:

  • Execution queue management
  • Inter-process messaging
  • Leader election (for multi-main setups)

For production, use Redis 6.0 or higher. Consider managed Redis services (AWS ElastiCache, Redis Cloud, DigitalOcean Managed Redis) to reduce operational burden.
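
A quick sanity check once Redis is up (the redis hostname here is an assumption matching the Docker Compose service name used later in this guide):

# Expect PONG, then a version of 6.x or newer
redis-cli -h redis ping
redis-cli -h redis INFO server | grep redis_version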

Encryption Key Consistency

Critical: All n8n instances must share the exact same encryption key.

The key encrypts credentials in the database. If a worker runs with a different key, it cannot decrypt those credentials and every workflow that uses them will fail.

Set N8N_ENCRYPTION_KEY identically across all instances. Generate a strong key once and reuse it everywhere:

# Generate a secure encryption key
openssl rand -hex 32

Version Consistency

All n8n processes must run the same version. Mixing versions causes unpredictable behavior because the database schema and execution format may differ.

When upgrading, update all instances together in a coordinated deployment. Use container image tags with specific versions (like docker.n8n.io/n8nio/n8n:1.67.1) rather than latest to prevent accidental version mismatches during container restarts.
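
A quick way to confirm versions match, using the service names from the Docker Compose setup below:

docker compose exec n8n-main n8n --version
docker compose exec n8n-worker n8n --version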

Network Connectivity

All components need reliable network access to each other:

  • Main instance to Redis (queue operations)
  • Main instance to PostgreSQL (data storage)
  • Workers to Redis (job pickup and acknowledgment)
  • Workers to PostgreSQL (workflow data and results)

In Docker or Kubernetes environments, use internal DNS names rather than IP addresses. This prevents breakage when containers restart with new addresses.
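
Since the n8n image ships with Node.js, you can test reachability from inside a worker container without installing extra tools. A sketch using the compose service names defined below:

# Resolve the internal DNS name and open a TCP connection to Redis
docker compose exec n8n-worker node -e "require('net').connect(6379, 'redis').once('connect', () => { console.log('redis: reachable'); process.exit(0); }).once('error', (e) => { console.error('redis:', e.message); process.exit(1); })"

# Same check for PostgreSQL
docker compose exec n8n-worker node -e "require('net').connect(5432, 'postgres').once('connect', () => { console.log('postgres: reachable'); process.exit(0); }).once('error', (e) => { console.error('postgres:', e.message); process.exit(1); })"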

Complete Docker Compose Setup

Here’s a production-ready Docker Compose configuration for n8n queue mode with all required services.

Copy-paste ready: This configuration includes health checks, proper dependencies, and production settings.

docker-compose.yml

version: '3.8'

services:
  postgres:
    image: postgres:15
    restart: always
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: always
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  n8n-main:
    image: docker.n8n.io/n8nio/n8n:1.67.1  # pin a specific version (see Version Consistency above)
    restart: always
    ports:
      - "5678:5678"
    environment:
      # Database
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}

      # Queue Mode
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379

      # Security
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}

      # Instance Configuration
      - N8N_HOST=${N8N_HOST}
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${N8N_HOST}/

      # Performance
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=168
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  n8n-worker:
    image: docker.n8n.io/n8nio/n8n:1.67.1  # must match the main instance version
    restart: always
    command: worker
    environment:
      # Database (same as main)
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}

      # Queue Mode
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379

      # Security (MUST match main instance)
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}

      # Worker Configuration
      - QUEUE_WORKER_CONCURRENCY=10

      # Memory Optimization
      - NODE_OPTIONS=--max-old-space-size=2048
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

volumes:
  postgres_data:
  redis_data:
  n8n_data:

Environment File (.env)

Create a .env file alongside your docker-compose.yml:

# Database
POSTGRES_PASSWORD=your-secure-postgres-password

# n8n Configuration
N8N_ENCRYPTION_KEY=your-32-byte-hex-encryption-key
N8N_HOST=n8n.yourdomain.com

Starting the Stack

# Start all services
docker compose up -d

# Check service health
docker compose ps

# View logs
docker compose logs -f n8n-main
docker compose logs -f n8n-worker

Scaling Workers

Add more workers by scaling the service:

# Scale to 3 workers
docker compose up -d --scale n8n-worker=3

# Check all workers are running
docker compose ps

For more on avoiding common deployment pitfalls, see our guide on n8n self-hosting mistakes.

Configuring Workers

Workers are the execution engines of queue mode. Proper configuration ensures optimal throughput without resource exhaustion.

Concurrency Settings

The QUEUE_WORKER_CONCURRENCY variable controls how many workflow executions a single worker handles simultaneously. Default is 10.

# Conservative setting for memory-heavy workflows
QUEUE_WORKER_CONCURRENCY=5

# Aggressive setting for lightweight workflows
QUEUE_WORKER_CONCURRENCY=20

Guidelines for setting concurrency:

  • Start with CPU cores available to the worker
  • Reduce if workflows process large files or heavy JSON payloads
  • Increase if workflows are mostly I/O-bound (waiting on API responses)
  • Monitor memory and adjust accordingly

Resource Allocation

Each worker needs adequate resources:

| Workload Type | Recommended CPU | Recommended RAM |
| --- | --- | --- |
| Light (API calls, simple transforms) | 1 vCPU | 1 GB |
| Medium (moderate data, some file processing) | 2 vCPU | 2 GB |
| Heavy (large files, complex Code nodes) | 4 vCPU | 4 GB |

The NODE_OPTIONS environment variable controls Node.js memory allocation:

# Allow up to 2GB of heap memory
NODE_OPTIONS=--max-old-space-size=2048

# For heavy workloads, increase to 4GB
NODE_OPTIONS=--max-old-space-size=4096

Worker Isolation Strategies

For production deployments, consider isolating workers by workload type.

Dedicated webhook processors

Some organizations run dedicated webhook processors so webhook responses stay fast even when the main instance is busy. In queue mode, n8n provides a separate webhook process type for this, started with the webhook command (there is no worker queue flag for webhooks):

n8n-webhook:
  image: docker.n8n.io/n8nio/n8n:1.67.1
  command: webhook
  environment:
    # ... same database, queue, and encryption settings as the main instance

Heavy workflow workers

Isolate resource-intensive workflows to prevent them from blocking lighter operations:

n8n-worker-heavy:
  image: docker.n8n.io/n8nio/n8n:1.67.1
  command: worker
  environment:
    - QUEUE_WORKER_CONCURRENCY=3
    - NODE_OPTIONS=--max-old-space-size=8192
    # ... other settings

Auto-Scaling Workers

For dynamic workloads, auto-scaling workers based on queue length provides efficient resource usage.

Docker-based auto-scaling

The community has developed auto-scaling solutions that monitor Redis queue length and spawn or terminate worker containers accordingly. This approach works without Kubernetes.
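
To make the idea concrete, here is a deliberately naive sketch of such a scaler, assuming the default Bull key names mentioned earlier and the docker-compose.yml above; community projects handle failures and edge cases far more robustly:

#!/bin/sh
# Poll the queue depth and scale workers to match (illustrative only,
# no error handling; assumes redis-cli can reach Redis from this host)
MIN=1
MAX=8
while true; do
  WAITING=$(redis-cli LLEN bull:jobs:wait)
  # Roughly one worker per 50 waiting jobs, clamped to [MIN, MAX]
  TARGET=$(( WAITING / 50 + MIN ))
  [ "$TARGET" -gt "$MAX" ] && TARGET=$MAX
  docker compose up -d --no-recreate --scale n8n-worker="$TARGET"
  sleep 30
done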

Kubernetes auto-scaling

If you’re on Kubernetes, configure Horizontal Pod Autoscalers based on custom metrics from your Redis queue:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: n8n-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: n8n-worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: redis_queue_length
        target:
          type: AverageValue
          averageValue: 50

For Kubernetes deployments, the n8n Helm chart documentation provides detailed guidance.

High Availability with Multi-Main

Standard queue mode still has a single point of failure: the main instance. If it goes down, no new workflows can be triggered, and the UI becomes inaccessible.

Multi-main mode eliminates this by running multiple main instances simultaneously.

When You Need Multi-Main

Consider multi-main if:

  • Downtime is unacceptable for your business
  • You need zero-downtime deployments
  • Regulatory or SLA requirements mandate high availability

Architecture Overview

In multi-main mode:

  • Multiple main instances run behind a load balancer
  • All instances connect to the same PostgreSQL and Redis
  • One instance becomes the “leader” and handles scheduled triggers
  • Other instances serve UI requests and receive webhooks
  • If the leader fails, another instance takes over automatically

Configuration Requirements

Enable multi-main on all main instances:

# Required for all main instances
N8N_MULTI_MAIN_SETUP_ENABLED=true
EXECUTIONS_MODE=queue

Load Balancer Configuration

Your load balancer must support sticky sessions (session affinity). The n8n UI uses WebSocket connections that must route consistently to the same backend.

Nginx example:

upstream n8n_main {
    ip_hash;  # Sticky sessions based on client IP
    server n8n-main-1:5678;
    server n8n-main-2:5678;
    server n8n-main-3:5678;
}

server {
    listen 443 ssl;
    server_name n8n.yourdomain.com;

    location / {
        proxy_pass http://n8n_main;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Failover Behavior

When the current leader becomes unavailable:

  1. Other instances detect the leader is unresponsive
  2. An election process selects a new leader
  3. The new leader takes over scheduled triggers
  4. UI and webhook traffic continue on all healthy instances

This failover typically completes in seconds, minimizing disruption.

Monitoring and Troubleshooting

Production queue mode deployments need observability. You should know when problems occur before users report them.

Prometheus Metrics

n8n exposes metrics for queue monitoring. Enable the metrics endpoint:

N8N_METRICS=true
N8N_METRICS_INCLUDE_DEFAULT_METRICS=true
N8N_METRICS_INCLUDE_QUEUE_METRICS=true
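
With metrics enabled, you can verify the endpoint responds (port 5678 from the compose file above):

curl -s http://localhost:5678/metrics | grep n8n_scaling_mode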

Key metrics to watch:

# Jobs waiting in queue
n8n_scaling_mode_queue_jobs_waiting

# Jobs currently being processed
n8n_scaling_mode_queue_jobs_active

# Completed job count
n8n_scaling_mode_queue_jobs_completed

# Failed job count
n8n_scaling_mode_queue_jobs_failed

Set up alerts when (an example Prometheus rule follows this list):

  • n8n_scaling_mode_queue_jobs_waiting stays high (queue backlog)
  • n8n_scaling_mode_queue_jobs_failed increases (execution failures)
  • n8n_scaling_mode_queue_jobs_active equals worker capacity (saturation)
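
As a starting point, a Prometheus alerting rule for the backlog case might look like this sketch; the threshold and duration are illustrative and should be tuned to your workload:

groups:
  - name: n8n-queue
    rules:
      - alert: N8nQueueBacklog
        expr: n8n_scaling_mode_queue_jobs_waiting > 100
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "n8n queue backlog: {{ $value }} jobs waiting for 5+ minutes"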

Common Problems and Solutions


Problem: Workers not processing jobs

Symptoms: Jobs stuck in “waiting” state, workers appear idle.

Causes and fixes:

  1. Redis connection issues — Check worker logs for connection errors. Verify Redis is accessible from workers.

  2. Encryption key mismatch — Workers can’t decrypt credentials. Ensure N8N_ENCRYPTION_KEY matches across all instances.

  3. Database connection exhausted — Too many workers competing for connections. Reduce worker count or increase PostgreSQL max_connections (a quick check follows this list).

  4. Version mismatch — Workers running different n8n version. Update all instances to the same version.
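
For the connection-exhaustion case, you can compare active connections against the configured ceiling directly in PostgreSQL:

psql -h localhost -U n8n -d n8n -c 'SELECT count(*) FROM pg_stat_activity;'
psql -h localhost -U n8n -d n8n -c 'SHOW max_connections;'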


Problem: Memory errors on workers

Symptoms: Workers crash with “heap out of memory” errors.

Fixes:

  1. Increase memory — NODE_OPTIONS=--max-old-space-size=XXXX
  2. Reduce concurrency — Lower QUEUE_WORKER_CONCURRENCY
  3. Split workflows — Break large workflows into sub-workflows
  4. Batch data — Process in smaller chunks using the Loop Over Items node

For timeout-related issues, our timeout troubleshooting guide covers additional debugging steps.


Problem: Redis connection timeouts

Symptoms: Sporadic failures, jobs not acknowledged.

Fixes:

  1. Check latency — Network latency between n8n and Redis
  2. Increase timeout — Adjust Redis connection timeout settings
  3. Add resilience — Use Redis Cluster or Sentinel for failover
  4. Go managed — Consider managed Redis services for production

Problem: Queue growing unbounded

Symptoms: Jobs waiting count keeps increasing, never decreasing.

Fixes:

  1. Add workers — Scale up worker instances
  2. Increase concurrency — Raise QUEUE_WORKER_CONCURRENCY
  3. Optimize workflows — Identify slow workflows and improve them
  4. Check for loops — Look for workflows stuck in infinite loops

Use our workflow debugger tool to identify problematic workflows.

Performance Optimization

Beyond basic queue mode setup, several optimizations improve throughput and reliability.

Memory Management

Node.js garbage collection can cause execution pauses. Tune memory settings based on your workload:

# Standard workloads
NODE_OPTIONS=--max-old-space-size=2048

# Heavy workloads with large payloads
NODE_OPTIONS=--max-old-space-size=4096 --max-semi-space-size=128

Execution Data Pruning

Old execution data accumulates and slows queries. Configure automatic pruning:

# Enable pruning
EXECUTIONS_DATA_PRUNE=true

# Keep executions for 7 days (168 hours)
EXECUTIONS_DATA_MAX_AGE=168

# Keep maximum 50,000 executions
EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000

For production systems, consider only saving failed executions:

EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false

Concurrency Limits

Even with queue mode, you may want to limit total concurrent executions to prevent resource exhaustion:

# Limit production executions to 20 concurrent
N8N_CONCURRENCY_PRODUCTION_LIMIT=20

This creates a secondary queue at the application level, providing another layer of protection against overload.

Binary Data Storage

Warning: Queue mode with binary data requires special handling. The default filesystem storage doesn’t work across distributed workers.

Use S3-compatible storage for binary data:

N8N_AVAILABLE_BINARY_DATA_MODES=filesystem,s3
N8N_DEFAULT_BINARY_DATA_MODE=s3

# S3 Configuration
N8N_EXTERNAL_STORAGE_S3_HOST=s3.amazonaws.com
N8N_EXTERNAL_STORAGE_S3_BUCKET_NAME=your-bucket
N8N_EXTERNAL_STORAGE_S3_BUCKET_REGION=us-east-1
N8N_EXTERNAL_STORAGE_S3_ACCESS_KEY=your-access-key
N8N_EXTERNAL_STORAGE_S3_ACCESS_SECRET=your-secret-key

This ensures all workers can access binary data regardless of which worker initially received it.

Database Optimization

PostgreSQL performance directly impacts queue mode. Key optimizations:

  1. Connection pooling — Use PgBouncer for many concurrent workers (a minimal config sketch follows this list)
  2. Index maintenance — Regular VACUUM and ANALYZE operations
  3. Adequate resources — At minimum 2 vCPU, 4 GB RAM for PostgreSQL
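
A minimal PgBouncer sketch for this stack (hostnames and pool sizes are assumptions to adapt); point DB_POSTGRESDB_HOST and DB_POSTGRESDB_PORT at PgBouncer instead of PostgreSQL:

; pgbouncer.ini
[databases]
n8n = host=postgres port=5432 dbname=n8n

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session        ; safest default; test before switching to transaction pooling
default_pool_size = 20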

For comprehensive credential handling across distributed workers, see our credential management guide.

Real-World Scaling Examples

Theory helps, but real examples show what’s possible.


Example 1: E-commerce Order Processing

Scenario: Online retailer processing 5,000 orders daily through n8n workflows.

Previous setup: Single n8n instance frequently crashed during sales events.

Queue mode configuration:

  • 1 main instance (2 vCPU, 4 GB RAM)
  • 3 workers (2 vCPU, 2 GB RAM each)
  • Concurrency: 15 per worker
  • Total capacity: 45 concurrent executions

Results:

  • Handles peak loads of 200+ orders/hour without issues
  • 99.9% uptime over 6 months
  • No more memory-related crashes

Example 2: Marketing Automation Platform

Scenario: Agency managing automated campaigns for 50+ clients, triggering 10,000+ workflow executions daily.

Previous setup: Multiple separate n8n instances, management nightmare.

Queue mode configuration:

  • 2 main instances (multi-main HA)
  • 5 workers with auto-scaling (min 3, max 8)
  • S3 for binary data storage
  • Managed PostgreSQL (AWS RDS)
  • Managed Redis (AWS ElastiCache)

Results:

  • Consolidated to single logical n8n deployment
  • Handles traffic spikes from campaign launches automatically
  • Zero-downtime deployments using rolling updates

Example 3: Data Integration Hub

Scenario: Financial services firm synchronizing data between 15 systems, running complex ETL workflows every 15 minutes.

Challenges: Workflows processing millions of records, each execution consuming significant memory and CPU.

Queue mode configuration:

  • 1 main instance behind Cloudflare for DDoS protection
  • 6 dedicated workers with 8GB RAM each
  • Worker concurrency: 3 (heavy workloads)
  • PostgreSQL with read replicas for reporting
  • Redis Sentinel for automatic failover

Key optimizations:

  • Workflows split into sub-workflows to reduce memory per execution
  • Data batched into 1,000-record chunks
  • Execution pruning keeping only 24 hours of history
  • Prometheus monitoring with PagerDuty alerts

Results:

  • Processes 50+ million records daily without issues
  • 99.95% uptime over 12 months
  • Recovery from any single component failure in under 60 seconds

Hardware Recommendations

| Daily Executions | Main Instance | Workers | PostgreSQL | Redis |
| --- | --- | --- | --- | --- |
| 1,000-5,000 | 2 vCPU, 4 GB | 2x (2 vCPU, 2 GB) | 2 vCPU, 4 GB | 1 vCPU, 1 GB |
| 5,000-20,000 | 4 vCPU, 8 GB | 4x (2 vCPU, 4 GB) | 4 vCPU, 8 GB | 2 vCPU, 2 GB |
| 20,000+ | Multi-main HA | Auto-scaling (6+) | Managed service | Managed service |

These are starting points. Monitor actual resource usage and adjust accordingly.

For workflow optimization tips that reduce execution times and resource usage, check our n8n workflow best practices guide.

Security Considerations

Distributed systems introduce security surface area. Protect your queue mode deployment with these practices.

Redis Authentication

Never run Redis without a password in production:

# In your Redis configuration or Docker command
redis-server --requirepass your-redis-password

# In n8n environment
QUEUE_BULL_REDIS_PASSWORD=your-redis-password

Network Segmentation

Keep Redis and PostgreSQL on private networks inaccessible from the public internet. Only the main n8n instance needs external access for the UI and webhooks.

In Docker Compose, use internal networks:

networks:
  internal:
    internal: true  # No external access; attach Redis and PostgreSQL here
  public:
    # Externally reachable; attach the main n8n instance here

Secrets Management

Avoid hardcoding credentials in configuration files. Use environment variables, Docker secrets, or dedicated secrets managers like HashiCorp Vault. For cloud deployments, AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager integrate well with container orchestrators.
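
For example, Docker secrets can keep the database password out of the compose file entirely. A sketch (n8n supports a _FILE suffix on several sensitive variables; check your version's documentation for the exact list):

# docker-compose.yml excerpt
services:
  n8n-main:
    environment:
      - DB_POSTGRESDB_PASSWORD_FILE=/run/secrets/pg_password
    secrets:
      - pg_password

secrets:
  pg_password:
    file: ./secrets/pg_password.txt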

When to Get Expert Help

Queue mode setup is straightforward for experienced DevOps engineers. But not every organization has that expertise in-house.

Consider professional help when:

  • Your team lacks Docker/Kubernetes experience
  • You need to migrate from an existing production instance
  • High availability requirements demand careful planning
  • You’re processing sensitive data with compliance requirements
  • Initial setup seems overwhelming or outside your comfort zone

Our n8n self-hosted setup service handles the infrastructure configuration, and our consulting packages help optimize existing deployments.

Frequently Asked Questions

Can I run queue mode on a single server?

Yes. Queue mode works on a single server with main, workers, Redis, and PostgreSQL all running locally. This still provides benefits: failure isolation between the main process and workers, ability to scale workers up or down, and better resource management. You lose high availability (the server is still a single point of failure), but you gain execution scalability.


How many workers do I need?

Start with 2-3 workers and scale based on monitoring. Key metrics to watch: queue wait time (how long jobs sit before processing) and worker CPU/memory utilization. If jobs wait more than a few seconds consistently, add workers. If workers are idle most of the time, reduce them.

Rule of thumb: For CPU-bound workflows, one worker per CPU core; for I/O-bound workflows (mostly waiting on APIs), you can run more workers than cores.


What happens if Redis goes down?

If Redis becomes unavailable, the queue breaks. Triggers and webhooks still reach the main instance, but executions can’t be dispatched to workers. Depending on configuration, new executions either fail immediately or wait for Redis to recover.

For production systems, use Redis with persistence enabled (appendonly yes) and consider Redis Sentinel or Cluster for automatic failover. Managed Redis services handle this automatically.


Can I use RabbitMQ instead of Redis?

No. As of the current n8n version, Redis is the only supported message broker for queue mode. While RabbitMQ offers advanced routing features, n8n’s queue implementation is built on Bull, which requires Redis.

Some users have requested RabbitMQ support in community discussions, but there’s no official timeline. If you need RabbitMQ for other applications, you can run both Redis (for n8n) and RabbitMQ (for other services) in your infrastructure.


How do I migrate from regular mode to queue mode?

Migration requires careful planning to avoid data loss. The high-level process:

  1. Set up PostgreSQL and migrate your data if you're currently on SQLite
  2. Deploy Redis
  3. Deploy worker instances with configuration matching the main instance
  4. Enable queue mode on the main instance (EXECUTIONS_MODE=queue) and restart it

For production systems with active workflows, consider a maintenance window. Test the configuration in a staging environment first. Our consulting services include guided migrations for teams that want expert assistance with this process.
