n8n Administration Track
Module 2 of 6

Docker & Docker Compose Installation

Run n8n in Docker for development and deploy a production-ready stack with PostgreSQL, queue mode, and best practices.

16 min read

What You'll Learn

  • Run n8n in a single Docker container for development and testing in under five minutes
  • Deploy a production-ready n8n instance with Docker Compose, PostgreSQL, and persistent volumes
  • Configure essential environment variables for security, database connections, and webhook URLs
  • Understand queue mode with Redis for handling high execution volumes
  • Apply Docker best practices including version pinning, health checks, restart policies, and backup strategies

Docker Single Container Quickstart

The fastest way to get n8n running is a single Docker command. This approach uses SQLite as the database and stores all data in a Docker volume. It is not suitable for production, but it gets you from zero to a working n8n instance in under two minutes.

Prerequisites: Docker installed and running on your system. Any Linux distribution, macOS, or Windows with Docker Desktop will work.

Quick start command:

docker run -d \
  --name n8n \
  --restart unless-stopped \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n:latest

This command does the following:

  • -d runs the container in the background (detached mode)
  • --name n8n gives the container a human-readable name
  • --restart unless-stopped ensures n8n restarts automatically after a server reboot or crash
  • -p 5678:5678 maps port 5678 on the host to port 5678 in the container
  • -v n8n_data:/home/node/.n8n creates a named Docker volume for persistent storage
  • n8nio/n8n:latest pulls the latest n8n image from Docker Hub

Once the container starts, open http://localhost:5678 in your browser. You will see the n8n setup screen where you create your owner account.

Why this is not production-ready: SQLite handles single-user development fine, but it does not support concurrent writes. If two workflows execute simultaneously and both try to write to the database, one can fail with a database-locked error. SQLite also lacks the replication, backup, and recovery features that PostgreSQL provides. For anything beyond local testing, use Docker Compose with PostgreSQL (covered in the next section).

Stopping and removing:

# Stop the container
docker stop n8n

# Remove the container (data persists in the volume)
docker rm n8n

# Remove the volume (WARNING: deletes all data)
docker volume rm n8n_data
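
Because the data lives in the named volume, upgrading is just a matter of recreating the container against it. A minimal sketch, reusing the container and volume names from the quickstart (test upgrades outside production first):

```shell
# Pull a newer image, then recreate the container;
# workflow data survives in the n8n_data volume
docker pull n8nio/n8n:latest
docker stop n8n && docker rm n8n
docker run -d \
  --name n8n \
  --restart unless-stopped \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n:latest
```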

Quick Test: Run Your First n8n Instance

Step 1: Install Docker if you have not already, then run the quickstart command above.

Step 2: Open http://localhost:5678 and create your owner account.

Step 3: Build a simple test workflow: a Manual Trigger node connected to a Set node that outputs your name.

Step 4: Click "Test Workflow" and verify the output. This confirms your Docker setup is working correctly.

Docker Compose with PostgreSQL

Docker Compose is the recommended way to run n8n in production. It defines n8n and PostgreSQL as a multi-container application, manages their networking, and ensures both services start together.

Create a project directory and docker-compose.yml:

services:
  n8n:
    image: n8nio/n8n:1.70.2
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=<your-secure-password>
      - N8N_ENCRYPTION_KEY=<your-encryption-key>
      - WEBHOOK_URL=https://n8n.yourdomain.com/
      - N8N_HOST=n8n.yourdomain.com
      - N8N_PROTOCOL=https
      - GENERIC_TIMEZONE=America/Chicago
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:5678/healthz || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3

  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=<your-secure-password>
      - POSTGRES_DB=n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  n8n_data:
  postgres_data:

Key configuration points:

  • Pin the n8n image version (e.g., n8nio/n8n:1.70.2) instead of using latest. This prevents unexpected breaking changes during container recreation.
  • N8N_ENCRYPTION_KEY is critical. n8n uses this key to encrypt credentials stored in the database. If you lose this key, all saved credentials become unrecoverable. Generate a strong random string and store it securely.
  • WEBHOOK_URL must be set to the public URL where n8n is accessible. Webhooks will not work correctly without this.
  • Health checks ensure Docker knows when each service is actually ready, not just when the container starts.
  • Named volumes (n8n_data, postgres_data) persist data across container restarts and recreations.

Start the stack:

# Start in detached mode
docker compose up -d

# View logs
docker compose logs -f

# Stop the stack
docker compose down

# Stop and remove volumes (WARNING: deletes all data)
docker compose down -v

Environment Variables Reference

n8n is configured primarily through environment variables. Here are the essential ones for production deployments:

Database Configuration:

  • DB_TYPE: database type (example: postgresdb)
  • DB_POSTGRESDB_HOST: PostgreSQL hostname (example: postgres, the Docker service name)
  • DB_POSTGRESDB_PORT: PostgreSQL port (example: 5432)
  • DB_POSTGRESDB_DATABASE: database name (example: n8n)
  • DB_POSTGRESDB_USER: database user (example: n8n)
  • DB_POSTGRESDB_PASSWORD: database password (example: <your-secure-password>)
  • DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED: SSL certificate verification (example: false, for managed DB services)

Security and Encryption:

  • N8N_ENCRYPTION_KEY: credential encryption key (example: <random-64-char-string>)
  • N8N_BASIC_AUTH_ACTIVE: enable basic auth; deprecated, use user management instead (example: false)

Networking and Webhooks:

  • N8N_HOST: hostname for n8n (example: n8n.yourdomain.com)
  • N8N_PORT: port to listen on (example: 5678)
  • N8N_PROTOCOL: protocol, http or https (example: https)
  • WEBHOOK_URL: public webhook URL (example: https://n8n.yourdomain.com/)

Execution and Performance:

  • EXECUTIONS_MODE: execution mode (example: regular or queue)
  • EXECUTIONS_DATA_PRUNE: auto-delete old executions (example: true)
  • EXECUTIONS_DATA_MAX_AGE: maximum execution age in hours (example: 168, i.e. 7 days)
  • GENERIC_TIMEZONE: default timezone (example: America/Chicago)
  • N8N_CONCURRENCY_PRODUCTION_LIMIT: maximum concurrent executions (example: 20)

Queue Mode (Redis):

  • QUEUE_BULL_REDIS_HOST: Redis hostname (example: redis)
  • QUEUE_BULL_REDIS_PORT: Redis port (example: 6379)
  • QUEUE_BULL_REDIS_PASSWORD: Redis password (example: <your-redis-password>)

External Services:

  • N8N_SMTP_HOST: SMTP server for email (example: smtp.gmail.com)
  • N8N_SMTP_PORT: SMTP port (example: 587)
  • N8N_SMTP_USER: SMTP username (example: notifications@yourdomain.com)
  • N8N_SMTP_PASS: SMTP password (example: <your-app-password>)
  • N8N_SMTP_SSL: use SSL for SMTP (example: false; use STARTTLS on port 587)

Best practice: Never hardcode sensitive values in docker-compose.yml. Use a .env file in the same directory, and add .env to .gitignore:

# .env
POSTGRES_PASSWORD=<your-secure-password>
N8N_ENCRYPTION_KEY=<your-encryption-key>

Then reference them in docker-compose.yml as ${POSTGRES_PASSWORD} and ${N8N_ENCRYPTION_KEY}.
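
With those variables in the .env file, the relevant lines of docker-compose.yml become (a fragment of the file shown earlier, not a complete Compose file):

```yaml
services:
  postgres:
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
  n8n:
    environment:
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
```

Docker Compose reads .env from the project directory automatically and substitutes the `${...}` placeholders at startup.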

Generate a Strong Encryption Key

Run this command to generate a secure encryption key: openssl rand -hex 32. Store this key in a password manager or secrets vault immediately. If you lose it, every credential saved in n8n becomes permanently inaccessible. There is no recovery mechanism.
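
As a sanity check, the key openssl produces should be exactly 64 hexadecimal characters (32 random bytes). A short shell sketch, assuming openssl is installed:

```shell
# Generate 32 random bytes encoded as 64 hex characters
KEY=$(openssl rand -hex 32)
echo "$KEY"
# Print the length; it should be 64
echo "${#KEY}"
```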

Queue Mode with Redis

By default, n8n runs in regular mode where the main process handles everything: the web UI, webhook processing, and workflow execution. This works fine for low to moderate volumes, but when execution counts grow, the main process can become a bottleneck. Long-running workflows block the event loop, webhook responses slow down, and the UI becomes sluggish.

Queue mode solves this by separating concerns. The main process handles the UI and webhook reception, then pushes execution jobs into a Redis queue. Separate worker processes pull jobs from the queue and execute them independently. This means:

  • The main process stays responsive regardless of how many workflows are executing
  • Workers can be scaled horizontally (add more workers to handle more concurrent executions)
  • A failing workflow execution does not impact other executions or the UI
  • Workers can run on different machines, distributing compute load

Adding Redis to your Docker Compose:

services:
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --requirepass <your-redis-password>
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "<your-redis-password>", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

  n8n:
    # ... existing n8n config ...
    environment:
      # ... existing env vars ...
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_PASSWORD=<your-redis-password>

  n8n-worker:
    image: n8nio/n8n:1.70.2
    restart: unless-stopped
    command: worker
    environment:
      # Same DB and Redis config as the main n8n service
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PASSWORD=<your-secure-password>
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PASSWORD=<your-redis-password>
      - N8N_ENCRYPTION_KEY=<your-encryption-key>
    depends_on:
      - redis
      - postgres

volumes:
  redis_data:

When to enable queue mode: If your n8n instance regularly runs more than 20-30 concurrent executions, or if you have long-running workflows (API polling, large data processing, AI operations) that block the main process, queue mode will significantly improve reliability and responsiveness. For most small-to-medium deployments handling under 10,000 executions per day, regular mode is sufficient.
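
Workers scale horizontally with a single Compose flag. For example, to run three workers against the same Redis queue (assuming the n8n-worker service defined above, with no fixed container_name set):

```shell
# Run three identical worker containers pulling jobs from the shared queue
docker compose up -d --scale n8n-worker=3
```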

Test Queue Mode Locally

Add Redis and a worker service to your Docker Compose file as shown above. Start the stack with docker compose up -d. Create a workflow with a Manual Trigger connected to a Wait node (set to 30 seconds) followed by a Set node. Execute it and watch the logs: docker compose logs -f n8n-worker. You should see the worker pick up and process the execution while the main UI remains fully responsive.

Docker Best Practices and Troubleshooting

Version pinning is the single most important Docker practice for production n8n. Always specify an exact version tag (e.g., n8nio/n8n:1.70.2) rather than latest. The latest tag can change at any time, and a docker compose pull followed by docker compose up -d could introduce breaking changes into your production environment without warning. Pin the version, test updates in a staging environment first, then update the tag deliberately.

Named volumes are essential for data persistence. Docker containers are ephemeral - when a container is removed and recreated, any data stored inside it is lost. Named volumes (n8n_data, postgres_data) store data outside the container lifecycle. Always use named volumes for n8n's data directory and PostgreSQL's data directory. Verify your volumes exist with docker volume ls.

Restart policies determine what happens when a container crashes or the host reboots:

  • unless-stopped restarts the container automatically unless you explicitly stop it. This is the recommended policy for production.
  • always is similar, but a container you stopped manually will also be started again when the Docker daemon restarts (for example, after a host reboot).
  • on-failure:5 restarts up to 5 times on failure, then stops. Useful for debugging crash loops.

Health checks let Docker and orchestration tools know when a service is actually ready (not just running). The n8n healthcheck endpoint is /healthz. PostgreSQL uses pg_isready. Without health checks, Docker considers a container healthy the moment it starts, which can cause n8n to crash if it tries to connect to PostgreSQL before the database is ready.
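
You can probe both checks by hand. A sketch, assuming the Compose stack from this module is running on the same host:

```shell
# Hit the same endpoint the n8n healthcheck uses; it should return ok
wget -qO- http://localhost:5678/healthz

# List services with their current state; healthy services show
# "(healthy)" in the STATUS column
docker compose ps
```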

Log management prevents disk space issues on long-running servers. Configure Docker's log driver in /etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}

This limits each container's log file to 100MB and keeps a maximum of 3 rotated files. Without this, container logs grow unbounded and can fill your disk.
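
daemon.json is only read at daemon startup, so restart Docker after editing it and confirm the active driver. A sketch, assuming a systemd-based host:

```shell
# Apply the new daemon.json
sudo systemctl restart docker

# Expected output: json-file
docker info --format '{{.LoggingDriver}}'
```

Note that log options apply to containers created after the change; recreate existing containers (docker compose up -d --force-recreate) to pick them up.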

Common troubleshooting steps:

  1. n8n cannot connect to PostgreSQL: Check that the database service is healthy (docker compose ps), verify the password matches in both services, and ensure depends_on with a health check condition is configured.
  2. Webhooks return 404: Verify WEBHOOK_URL is set to your public URL with a trailing slash. Check that your reverse proxy forwards requests to port 5678.
  3. Credentials lost after restart: The N8N_ENCRYPTION_KEY changed. This key must remain constant across container recreations. Store it in your .env file.
  4. Container restart loop: Check logs with docker compose logs n8n. Common causes: wrong database credentials, port conflicts, or corrupt data volumes.
  5. Disk space running out: Check Docker system usage with docker system df. Prune unused images with docker image prune -a. Ensure log rotation is configured.
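
The learning goals above also mention backup strategies. Beyond the encryption key, the two things worth backing up are the PostgreSQL database and the n8n data volume. A minimal sketch, assuming the service, user, database, and volume names from the Compose file in this module (run from the project directory; in practice, schedule via cron and copy the files off-host):

```shell
# Dump the n8n database from the running postgres service
docker compose exec -T postgres pg_dump -U n8n n8n | gzip > "n8n-db-$(date +%F).sql.gz"

# Archive the n8n data volume; Compose prefixes volume names with the
# project name, so check `docker volume ls` for the exact name
docker run --rm -v n8n_data:/data -v "$PWD":/backup alpine \
  tar czf "/backup/n8n-data-$(date +%F).tar.gz" -C /data .
```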

Backup Your Encryption Key

The N8N_ENCRYPTION_KEY is used to encrypt all credentials stored in n8n. If you lose this key and recreate your containers, every saved credential (API keys, OAuth tokens, database passwords) becomes permanently inaccessible. Back up this key in a password manager or secrets vault separate from your server.

Core Insights

  • A single Docker command gets n8n running in under 2 minutes for development, but production requires Docker Compose with PostgreSQL for concurrent execution support
  • Always pin your n8n Docker image to a specific version tag - never use latest in production to prevent unexpected breaking changes
  • The N8N_ENCRYPTION_KEY is the most critical configuration value - it encrypts all stored credentials and has no recovery mechanism if lost
  • Queue mode with Redis separates workflow execution from the main process, enabling horizontal scaling and preventing long-running workflows from blocking the UI
  • Configure Docker log rotation in daemon.json to prevent container logs from filling your disk on long-running servers
  • Use .env files for sensitive values and add them to .gitignore - never hardcode passwords or encryption keys in docker-compose.yml