Docker & Docker Compose Installation
Run n8n in Docker for development and deploy a production-ready stack with PostgreSQL, queue mode, and best practices.
What You'll Learn
- Run n8n in a single Docker container for development and testing in under five minutes
- Deploy a production-ready n8n instance with Docker Compose, PostgreSQL, and persistent volumes
- Configure essential environment variables for security, database connections, and webhook URLs
- Understand queue mode with Redis for handling high execution volumes
- Apply Docker best practices including version pinning, health checks, restart policies, and backup strategies
Docker Single Container Quickstart
The fastest way to get n8n running is a single Docker command. This approach uses SQLite as the database and stores all data in a Docker volume. It is not suitable for production, but it gets you from zero to a working n8n instance in under two minutes.
Prerequisites: Docker installed and running on your system. Any Linux distribution, macOS, or Windows with Docker Desktop will work.
Quick start command:
```shell
docker run -d \
  --name n8n \
  --restart unless-stopped \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n:latest
```
This command does the following:
- `-d` runs the container in the background (detached mode)
- `--name n8n` gives the container a human-readable name
- `--restart unless-stopped` ensures n8n restarts automatically after a server reboot or crash
- `-p 5678:5678` maps port 5678 on the host to port 5678 in the container
- `-v n8n_data:/home/node/.n8n` creates a named Docker volume for persistent storage
- `n8nio/n8n:latest` pulls the latest n8n image from Docker Hub
Once the container starts, open http://localhost:5678 in your browser. You will see the n8n setup screen where you create your owner account.
Why this is not production-ready: SQLite handles single-user development fine, but it does not support concurrent writes. If two workflows execute simultaneously and both try to write to the database, one will fail. SQLite also lacks the replication, backup, and recovery features that PostgreSQL provides. For anything beyond local testing, use Docker Compose with PostgreSQL (covered in the next section).
Stopping and removing:
```shell
# Stop the container
docker stop n8n

# Remove the container (data persists in the volume)
docker rm n8n

# Remove the volume (WARNING: deletes all data)
docker volume rm n8n_data
```
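Because the data lives in the named volume, upgrading n8n is just recreating the container on a newer image. A sketch of that flow, using the pinned version tag from the Compose example later in this guide (the guard lets the script skip cleanly on machines without Docker):

```shell
# Recreate the quickstart container on a pinned image.
# The n8n_data volume survives the container removal, so no data is lost.
if command -v docker >/dev/null 2>&1; then
  docker pull n8nio/n8n:1.70.2        # pull the target version first
  docker stop n8n && docker rm n8n    # remove the old container; the volume remains
  docker run -d \
    --name n8n \
    --restart unless-stopped \
    -p 5678:5678 \
    -v n8n_data:/home/node/.n8n \
    n8nio/n8n:1.70.2
  STATUS="recreated on pinned image"
else
  STATUS="docker not available; commands shown for reference"
fi
echo "$STATUS"
```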
Quick Test: Run Your First n8n Instance
Step 1: Install Docker if you have not already, then run the quickstart command above.
Step 2: Open http://localhost:5678 and create your owner account.
Step 3: Build a simple test workflow: a Manual Trigger node connected to a Set node that outputs your name.
Step 4: Click "Test Workflow" and verify the output. This confirms your Docker setup is working correctly.
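You can also confirm the instance from the command line. n8n exposes a health endpoint at `/healthz` (the same endpoint the Compose health checks use later in this guide); a guarded sketch:

```shell
# Check n8n's health endpoint. The sketch prints a status either way,
# so it degrades cleanly when no instance is running.
if curl -fsS --max-time 5 http://localhost:5678/healthz >/dev/null 2>&1; then
  STATUS="n8n is up and healthy"
else
  STATUS="n8n is not reachable on localhost:5678"
fi
echo "$STATUS"
```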
Docker Compose with PostgreSQL
Docker Compose is the recommended way to run n8n in production. It defines n8n and PostgreSQL as a multi-container application, manages their networking, and ensures both services start together.
Create a project directory and docker-compose.yml:
```yaml
services:
  n8n:
    image: n8nio/n8n:1.70.2
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=<your-secure-password>
      - N8N_ENCRYPTION_KEY=<your-encryption-key>
      - WEBHOOK_URL=https://n8n.yourdomain.com/
      - N8N_HOST=n8n.yourdomain.com
      - N8N_PROTOCOL=https
      - GENERIC_TIMEZONE=America/Chicago
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:5678/healthz || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3

  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=<your-secure-password>
      - POSTGRES_DB=n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  n8n_data:
  postgres_data:
```
Key configuration points:
- Pin the n8n image version (e.g., `n8nio/n8n:1.70.2`) instead of using `latest`. This prevents unexpected breaking changes during container recreation.
- `N8N_ENCRYPTION_KEY` is critical. n8n uses this key to encrypt credentials stored in the database. If you lose this key, all saved credentials become unrecoverable. Generate a strong random string and store it securely.
- `WEBHOOK_URL` must be set to the public URL where n8n is accessible. Webhooks will not work correctly without it.
- Health checks ensure Docker knows when each service is actually ready, not just when the container starts.
- Named volumes (`n8n_data`, `postgres_data`) persist data across container restarts and recreations.
Start the stack:
```shell
# Start in detached mode
docker compose up -d

# View logs
docker compose logs -f

# Stop the stack
docker compose down

# Stop and remove volumes (WARNING: deletes all data)
docker compose down -v
```
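A backup strategy for this stack needs two things: a logical dump of the PostgreSQL database and an archive of the n8n data volume. A sketch using the service and volume names from the compose file above (the output filenames are illustrative, and the guard skips cleanly on machines without Docker):

```shell
# Back up the database and the n8n data volume.
if command -v docker >/dev/null 2>&1; then
  # Logical database backup via the postgres service container
  docker compose exec -T postgres pg_dump -U n8n n8n > "n8n-db-$(date +%F).sql"
  # File-level backup of the n8n data volume using a throwaway container
  docker run --rm -v n8n_data:/data -v "$PWD":/backup alpine \
    tar czf "/backup/n8n-data-$(date +%F).tar.gz" -C /data .
  STATUS="backup written"
else
  STATUS="docker not available; commands shown for reference"
fi
echo "$STATUS"
```

Run this on a schedule (cron or a systemd timer) and copy the resulting files off the host.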
Environment Variables Reference
n8n is configured primarily through environment variables. Here are the essential ones for production deployments:
Database Configuration:
| Variable | Description | Example |
|---|---|---|
| `DB_TYPE` | Database type | `postgresdb` |
| `DB_POSTGRESDB_HOST` | PostgreSQL hostname | `postgres` (Docker service name) |
| `DB_POSTGRESDB_PORT` | PostgreSQL port | `5432` |
| `DB_POSTGRESDB_DATABASE` | Database name | `n8n` |
| `DB_POSTGRESDB_USER` | Database user | `n8n` |
| `DB_POSTGRESDB_PASSWORD` | Database password | `<your-secure-password>` |
| `DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED` | SSL certificate verification | `false` (for managed DB services) |
Security and Encryption:
| Variable | Description | Example |
|---|---|---|
| `N8N_ENCRYPTION_KEY` | Credential encryption key | `<random-64-char-string>` |
| `N8N_BASIC_AUTH_ACTIVE` | Enable basic auth (deprecated; use user management) | `false` |
Networking and Webhooks:
| Variable | Description | Example |
|---|---|---|
| `N8N_HOST` | Hostname for n8n | `n8n.yourdomain.com` |
| `N8N_PORT` | Port to listen on | `5678` |
| `N8N_PROTOCOL` | Protocol (http/https) | `https` |
| `WEBHOOK_URL` | Public webhook URL | `https://n8n.yourdomain.com/` |
Execution and Performance:
| Variable | Description | Example |
|---|---|---|
| `EXECUTIONS_MODE` | Execution mode | `regular` or `queue` |
| `EXECUTIONS_DATA_PRUNE` | Auto-delete old executions | `true` |
| `EXECUTIONS_DATA_MAX_AGE` | Max execution age in hours | `168` (7 days) |
| `GENERIC_TIMEZONE` | Default timezone | `America/Chicago` |
| `N8N_CONCURRENCY_PRODUCTION_LIMIT` | Max concurrent executions | `20` |
Queue Mode (Redis):
| Variable | Description | Example |
|---|---|---|
| `QUEUE_BULL_REDIS_HOST` | Redis hostname | `redis` |
| `QUEUE_BULL_REDIS_PORT` | Redis port | `6379` |
| `QUEUE_BULL_REDIS_PASSWORD` | Redis password | `<your-redis-password>` |
External Services:
| Variable | Description | Example |
|---|---|---|
| `N8N_SMTP_HOST` | SMTP server for email | `smtp.gmail.com` |
| `N8N_SMTP_PORT` | SMTP port | `587` |
| `N8N_SMTP_USER` | SMTP username | `notifications@yourdomain.com` |
| `N8N_SMTP_PASS` | SMTP password | `<your-app-password>` |
| `N8N_SMTP_SSL` | Use SSL for SMTP | `false` (use STARTTLS on 587) |
Best practice: Never hardcode sensitive values in docker-compose.yml. Use a .env file in the same directory, and add .env to .gitignore:
```
# .env
POSTGRES_PASSWORD=<your-secure-password>
N8N_ENCRYPTION_KEY=<your-encryption-key>
```
Then reference them in docker-compose.yml as ${POSTGRES_PASSWORD} and ${N8N_ENCRYPTION_KEY}.
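The relevant lines of docker-compose.yml would then look like this (fragment, matching the `.env` above):

```yaml
services:
  n8n:
    environment:
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
  postgres:
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
```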
Generate a Strong Encryption Key
Run this command to generate a secure encryption key: `openssl rand -hex 32`. Store this key in a password manager or secrets vault immediately. If you lose it, every credential saved in n8n becomes permanently inaccessible. There is no recovery mechanism.
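A minimal sketch of generating the key and appending it to the project's `.env` file (the file path is relative to your compose directory):

```shell
# Generate a 32-byte key, which openssl prints as 64 hex characters
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "key length: ${#N8N_ENCRYPTION_KEY} characters"

# Append it to the local .env file read by docker compose
echo "N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}" >> .env
```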
Queue Mode with Redis
By default, n8n runs in regular mode where the main process handles everything: the web UI, webhook processing, and workflow execution. This works fine for low to moderate volumes, but when execution counts grow, the main process can become a bottleneck. Long-running workflows block the event loop, webhook responses slow down, and the UI becomes sluggish.
Queue mode solves this by separating concerns. The main process handles the UI and webhook reception, then pushes execution jobs into a Redis queue. Separate worker processes pull jobs from the queue and execute them independently. This means:
- The main process stays responsive regardless of how many workflows are executing
- Workers can be scaled horizontally (add more workers to handle more concurrent executions)
- A failing workflow execution does not impact other executions or the UI
- Workers can run on different machines, distributing compute load
Adding Redis to your Docker Compose:
```yaml
services:
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --requirepass <your-redis-password>
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "<your-redis-password>", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

  n8n:
    # ... existing n8n config ...
    environment:
      # ... existing env vars ...
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_PASSWORD=<your-redis-password>

  n8n-worker:
    image: n8nio/n8n:1.70.2
    restart: unless-stopped
    command: worker
    environment:
      # Same DB and Redis config as the main n8n service
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PASSWORD=<your-secure-password>
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PASSWORD=<your-redis-password>
      - N8N_ENCRYPTION_KEY=<your-encryption-key>
    depends_on:
      - redis
      - postgres

volumes:
  redis_data:
```
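Because the worker service is stateless and publishes no host ports, Compose can run several replicas of it side by side. A guarded sketch of scaling to three workers pulling from the same Redis queue:

```shell
# Scale the n8n-worker service defined in the compose file above.
# Works because the worker has no container_name and no published ports.
if command -v docker >/dev/null 2>&1; then
  docker compose up -d --scale n8n-worker=3
  STATUS="scaled to 3 workers"
else
  STATUS="docker not available; command shown for reference"
fi
echo "$STATUS"
```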
When to enable queue mode: If your n8n instance regularly runs more than 20-30 concurrent executions, or if you have long-running workflows (API polling, large data processing, AI operations) that block the main process, queue mode will significantly improve reliability and responsiveness. For most small-to-medium deployments handling under 10,000 executions per day, regular mode is sufficient.
Test Queue Mode Locally
Add Redis and a worker service to your Docker Compose file as shown above. Start the stack with `docker compose up -d`. Create a workflow with a Manual Trigger connected to a Wait node (set to 30 seconds) followed by a Set node. Execute it and watch the logs with `docker compose logs -f n8n-worker`. You should see the worker pick up and process the execution while the main UI remains fully responsive.
Docker Best Practices and Troubleshooting
Version pinning is the single most important Docker practice for production n8n. Always specify an exact version tag (e.g., `n8nio/n8n:1.70.2`) rather than `latest`. The `latest` tag can change at any time, and a `docker compose pull` followed by `docker compose up -d` could introduce breaking changes into your production environment without warning. Pin the version, test updates in a staging environment first, then update the tag deliberately.
Named volumes are essential for data persistence. Docker containers are ephemeral: when a container is removed and recreated, any data stored inside it is lost. Named volumes (`n8n_data`, `postgres_data`) store data outside the container lifecycle. Always use named volumes for n8n's data directory and PostgreSQL's data directory. Verify your volumes exist with `docker volume ls`.
Restart policies determine what happens when a container crashes or the host reboots:
- `unless-stopped` restarts the container automatically unless you explicitly stop it. This is the recommended policy for production.
- `always` is similar but also restarts containers you intentionally stopped.
- `on-failure:5` restarts up to 5 times on failure, then stops. Useful for debugging crash loops.
Health checks let Docker and orchestration tools know when a service is actually ready (not just running). The n8n healthcheck endpoint is `/healthz`. PostgreSQL uses `pg_isready`. Without health checks, Docker considers a container healthy the moment it starts, which can cause n8n to crash if it tries to connect to PostgreSQL before the database is ready.
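You can query a container's health state directly. A guarded sketch against the quickstart container named `n8n` (for Compose services, `docker compose ps` shows the same health state in its STATUS column):

```shell
# Print healthy / unhealthy / starting based on accumulated healthcheck results
if command -v docker >/dev/null 2>&1; then
  docker inspect --format '{{.State.Health.Status}}' n8n 2>/dev/null \
    || echo "container n8n not found"
  STATUS="checked"
else
  STATUS="docker not available; command shown for reference"
fi
echo "$STATUS"
```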
Log management prevents disk space issues on long-running servers. Configure Docker's log driver in /etc/docker/daemon.json:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
```
This limits each container's log file to 100MB and keeps a maximum of 3 rotated files. Without this, container logs grow unbounded and can fill your disk.
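Note that these settings only apply to containers created after the Docker daemon is restarted (e.g., `sudo systemctl restart docker` on systemd hosts), so existing containers must be recreated. You can verify what a container was created with (guarded sketch, using the quickstart container name `n8n`):

```shell
# Show the log driver and rotation options baked into the container config
if command -v docker >/dev/null 2>&1; then
  docker inspect --format '{{.HostConfig.LogConfig}}' n8n 2>/dev/null \
    || echo "container n8n not found"
  STATUS="checked"
else
  STATUS="docker not available; command shown for reference"
fi
echo "$STATUS"
```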
Common troubleshooting steps:
- **n8n cannot connect to PostgreSQL:** Check that the database service is healthy (`docker compose ps`), verify the password matches in both services, and ensure `depends_on` with a health check condition is configured.
- **Webhooks return 404:** Verify `WEBHOOK_URL` is set to your public URL with a trailing slash. Check that your reverse proxy forwards requests to port 5678.
- **Credentials lost after restart:** The `N8N_ENCRYPTION_KEY` changed. This key must remain constant across container recreations. Store it in your `.env` file.
- **Container restart loop:** Check logs with `docker compose logs n8n`. Common causes: wrong database credentials, port conflicts, or corrupt data volumes.
- **Disk space running out:** Check Docker system usage with `docker system df`. Prune unused images with `docker image prune -a`. Ensure log rotation is configured.
Backup Your Encryption Key
The N8N_ENCRYPTION_KEY is used to encrypt all credentials stored in n8n. If you lose this key and recreate your containers, every saved credential (API keys, OAuth tokens, database passwords) becomes permanently inaccessible. Back up this key in a password manager or secrets vault separate from your server.
Core Insights
- A single Docker command gets n8n running in under 2 minutes for development, but production requires Docker Compose with PostgreSQL for concurrent execution support
- Always pin your n8n Docker image to a specific version tag - never use latest in production to prevent unexpected breaking changes
- The N8N_ENCRYPTION_KEY is the most critical configuration value - it encrypts all stored credentials and has no recovery mechanism if lost
- Queue mode with Redis separates workflow execution from the main process, enabling horizontal scaling and preventing long-running workflows from blocking the UI
- Configure Docker log rotation in daemon.json to prevent container logs from filling your disk on long-running servers
- Use .env files for sensitive values and add them to .gitignore - never hardcode passwords or encryption keys in docker-compose.yml