Migrating From Dokku to Kamal: Setting up the Servers
This is the second post in the series "Migrating From Dokku to Kamal". Today I'm going to show you how I set up my servers with Kamal. In case you missed it, click here to read the first post of the series.
Below you can find part of my final `deploy.yml`:
```yaml
servers:
  web:
    hosts:
      - <%= ENV['KAMAL_WEB_IP'] %>
    labels:
      traefik.http.routers.domain.rule: Host(`*.domain.com`)
      traefik.http.routers.domain.entrypoints: websecure
      traefik.http.routers.domain.tls.certresolver: letsencrypt
  worker:
    hosts:
      - <%= ENV['KAMAL_WORKER_IP'] %>
    cmd: bin/run-worker.sh

registry:
  server: ghcr.io
  username: your_github_username
  password:
    - KAMAL_REGISTRY_PASSWORD

env:
  clear:
    RAILS_ENV: production
    PIDFILE: /dev/null
  secret:
    - RAILS_LOG_TO_STDOUT
    - CLOUDFLARE_API_KEY
    - S3_ACCESS_KEY_ID
    - S3_SECRET_ACCESS_KEY
    - S3_ENDPOINT
    - S3_REGION
    - POSTGRES_DATABASE
    - POSTGRES_HOST
    - POSTGRES_DB
    - POSTGRES_USER
    - POSTGRES_PASSWORD

builder:
  multiarch: false

accessories:
  db:
    image: postgres:16.0
    host: <%= ENV['POSTGRES_HOST'] %>
    port: 5432
    env:
      secret:
        - POSTGRES_DB
        - POSTGRES_USER
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data
  db_backup:
    image: eeshugerman/postgres-backup-s3:16
    host: <%= ENV['POSTGRES_HOST'] %>
    env:
      clear:
        SCHEDULE: '@daily'
        BACKUP_KEEP_DAYS: 10
        S3_BUCKET: whatever-bucket
        S3_PREFIX: backups
      secret:
        - S3_ACCESS_KEY_ID
        - S3_SECRET_ACCESS_KEY
        - S3_ENDPOINT
        - S3_REGION
        - POSTGRES_HOST
        - POSTGRES_DATABASE
        - POSTGRES_USER
        - POSTGRES_PASSWORD

traefik:
  options:
    publish:
      - "443:443"
    volume:
      - "/letsencrypt/acme.json:/letsencrypt/acme.json"
  args:
    entryPoints.web.address: ":80"
    entryPoints.websecure.address: ":443"
    entryPoints.web.http.redirections.entryPoint.to: websecure
    entryPoints.web.http.redirections.entryPoint.scheme: https
    entryPoints.web.http.redirections.entryPoint.permanent: true
    entryPoints.websecure.http.tls: true
    entryPoints.websecure.http.tls.domains[0].main: "domain.com"
    entryPoints.websecure.http.tls.domains[0].sans: "*.domain.com"
    certificatesResolvers.letsencrypt.acme.email: "user@provider.com"
    certificatesResolvers.letsencrypt.acme.storage: "/letsencrypt/acme.json"
    certificatesResolvers.letsencrypt.acme.dnsChallenge.provider: cloudflare
  env:
    secret:
      - CLOUDFLARE_API_KEY
    clear:
      CLOUDFLARE_EMAIL: 'user@provider.com'
```

It does a couple of things:
- reads the env vars `KAMAL_WEB_IP`, `KAMAL_WORKER_IP` and `POSTGRES_HOST` from the `.env` file that were defined with Terraform, as shown in the first post (a sketch of that file follows this list)
- defines three servers: `web`, `worker` and `db` (which lives under `accessories`)
- sets Let's Encrypt as the certificate resolver for the `web` server
- overrides the command executed in the Docker container for the `worker` server: it runs `bin/run-worker.sh` instead (see the sketch after this list)
- sets GitHub as the image registry, which is free
- sets env vars. `PIDFILE=/dev/null` tells Rails not to write pid files, otherwise you might hit the error `A server is already running. Check /rails/tmp/pids/server.pid` if Docker gets killed abruptly. This will be the default behaviour as of Rails 8; check this post for more info.
- speeds up the build time by disabling `multiarch`, since both my local machine and the servers run on the arm64 architecture
- sets up a container on the `db` server to back up the Postgres database once a day using postgres-backup-s3
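For context, here is a minimal sketch of what that `.env` file could look like. Every value below is a placeholder assumption, not one from the original setup:

```bash
# .env — placeholder values; Kamal reads these when rendering deploy.yml
KAMAL_WEB_IP=203.0.113.10        # assumption: web server IP from the Terraform output
KAMAL_WORKER_IP=203.0.113.11     # assumption: worker server IP
POSTGRES_HOST=10.0.0.3           # assumption: private IP of the db server
KAMAL_REGISTRY_PASSWORD=ghp_xxx  # a GitHub token with the write:packages scope for ghcr.io
POSTGRES_DB=myapp_production
POSTGRES_USER=myapp
POSTGRES_PASSWORD=change-me
CLOUDFLARE_API_KEY=change-me
```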
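The post doesn't show the contents of `bin/run-worker.sh`. If your background jobs run on Sidekiq (an assumption; your worker command may differ), it could be as small as:

```bash
#!/usr/bin/env bash
set -e

# Assumption: Sidekiq as the job backend; swap in your own worker command.
# `exec` replaces the shell so the worker receives Docker's stop signals directly.
exec bundle exec sidekiq
```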
Resolving Let's Encrypt ACME v2 challenge
My domain uses Cloudflare as its DNS provider, and I wanted wildcard certificate support so I could test my app at kamal.domain.com before switching domain.com from DigitalOcean to Hetzner. To make that work, the web server solves the Let's Encrypt ACME v2 challenge through DNS (thanks Nick for sharing it), which is why I had to define the extra args for Traefik, including CLOUDFLARE_EMAIL and CLOUDFLARE_API_KEY.
Note that we store the certificate in the file we created with Terraform: /letsencrypt/acme.json. This way we don't need to request a new certificate every time a new web container is started.
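If you're not provisioning that file with Terraform, you can create it by hand on the web host. One thing worth knowing: Traefik refuses to use an `acme.json` whose permissions are broader than mode 600, so a hand-rolled version would look roughly like this:

```bash
# On the web host: create the ACME storage file with the permissions Traefik requires
sudo mkdir -p /letsencrypt
sudo touch /letsencrypt/acme.json
sudo chmod 600 /letsencrypt/acme.json
```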
That's it for today. In the last post of the series, I'll share how I set up cron to run scheduled tasks.