Rails 8 is out. And with it Kamal 2, the new default way of deploying Rails apps.
For those unfamiliar, Kamal is a tool that puts your Rails app in a Docker container, which makes it easier to deploy it anywhere. But it comes with its own learning curve.
I personally found Kamal hard for the uninitiated. Many things were missing from the official docs, and I didn't find any good tutorial on how to take a whole Rails app from scratch to fully in production using Kamal 2, accounting for all the moving pieces a Rails app usually needs.
For example – there's no good default example or guide on how to set up a Postgres database container for your Rails app to use. You need to dig deep into the Kamal docs and know your way around Docker very well to figure things out. There's a decent amount of undocumented features too, and even unimplemented behavior that's present in the docs but doesn't actually work in the real world (like the SSH keys array configuration option, which is documented but not implemented – just try it out).
It turns out configuring everything to run even a simple Rails app in production is non-trivial. In the past few weeks I managed to migrate a few of my production apps to Rails 8 and Kamal 2, and I spent a bunch of hours figuring all this out. I’m writing everything down in this blogpost so you can learn from my mistakes and avoid spending all that time figuring things out on your own.
Harden your host instance
Kamal claims that running `kamal setup` "runs everything required to deploy an application to a fresh host", but it really doesn't. Kamal just installs Docker, and only if it was missing on the host machine. That's it.
I managed to shoot myself in the foot and get my production Docker server hacked – mainly because I was unfamiliar with Docker and I changed the wrong config, but also because Kamal doesn’t really harden your instance and fails to make this clear in their messaging.
I think hardening is a crucial step. You can't really have a production server running and sleep well at night if things aren't properly hardened. Real production traffic from the internet is full of automated attacks and malicious probing requests, and you'd better protect yourself against them. At the very least, you need to set up a firewall that blocks all ports except the ones absolutely necessary for your app to work, and fail2ban so people can't just brute-force their way into the machine.
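To give you an idea of the bare minimum involved, here's a sketch (assuming an Ubuntu server with `ufw`; my full script below goes well beyond this):

# Firewall: deny all inbound traffic except SSH, HTTP and HTTPS
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp   # SSH
sudo ufw allow 80/tcp   # HTTP
sudo ufw allow 443/tcp  # HTTPS
sudo ufw --force enable
# fail2ban: ban IPs that repeatedly fail SSH logins
sudo apt-get update && sudo apt-get install -y fail2ban
sudo systemctl enable --now fail2ban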
I made a comprehensive script to thoroughly harden a fresh Ubuntu Server instance and make it fully ready to run production Docker containers.
You can ⬇️ download it from my GitHub Gist. Just `ssh` into your new Ubuntu server, `wget` it, `chmod +x` it, and run it.
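That is, something like this (I'm leaving the Gist's raw URL as a placeholder):

# On the fresh Ubuntu server
wget <gist-raw-url> -O harden.sh
chmod +x harden.sh
sudo ./harden.sh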
Note: it’s good practice to always check scripts you find on the internet before you run them. Make sure they don’t do things you don’t want them to do. For example, ask Claude / ChatGPT to review and explain my script for you.
Configure a Docker image registry
With Kamal, you now need a Docker registry to upload and pull your images from. You can't just deploy your code straight to the server: it has to be built into a Docker image, that image needs to be uploaded to a remote container registry, and then that same image needs to be downloaded from the registry onto the target production server. There are rumors that Kamal will eventually let your deployment server double as a Docker registry to avoid this external dependency, but for the time being, using a separate registry is the easiest option – and the default.
I shopped around and considered a bunch of options, like GitHub (ghcr.io), Docker's own registry (Docker Hub), the DigitalOcean Container Registry, and even setting up a custom container registry using Cloudflare R2. The problem: if you need to keep your Docker images private and you deploy to production often (as you should), the services vary wildly in free-tier thresholds and private-tier pricing.
I found AWS Elastic Container Registry (ECR) to be the easiest and cheapest option (ironically, given I left the cloud last month).
The first thing you want to do is create an AWS account if you haven't already, configure billing and all that, and set up the `aws` CLI.
Then, on ECR, create a repository for each project you want to deploy with Kamal. You can name your ECR repositories anything you like; I chose to name mine following the typical Docker Hub naming convention, `username/projectname`.
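If you prefer the terminal, the same thing can be done with the `aws` CLI (the region here is just the one I picked):

# Create a private ECR repository for this project's images
aws ecr create-repository --repository-name username/projectname --region us-east-1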
While you're still in your AWS ECR repo, set up a lifecycle policy rule under Repositories > Lifecycle Policy so only the ~10-20 most recent images are kept (~5GiB) and older ones get auto-deleted, so you don't incur big storage costs by keeping every single image you build.
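Such a lifecycle policy looks like this in JSON (the exact count is up to you; 15 here is an arbitrary choice):

{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Keep only the 15 most recent images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 15
      },
      "action": { "type": "expire" }
    }
  ]
}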
ECR costs $1 per 10GiB stored per month; each Kamal image is roughly ~250MB, so if you're pushing to production twice a day, you're adding ~0.5GiB of images per day – about 10GiB every 20 days, i.e. an extra $1/mo of storage cost accrued every 20 days. It's not much, but it's a waste to have stale images taking up space in your registry.
Now, there are also charges for data transfer.
In particular, AWS ECR charges $0.09 per GiB transferred OUT of the registry if your repository is private and you're deploying the images to servers outside the AWS network (like Hetzner, for example). If you're deploying twice a day, you're transferring out 250MB × 2 deploys/day × 30 days/month ≈ 15GiB/month, and 15GiB × $0.09/GiB = $1.35/mo.
So you’d expect to pay around $2.35/mo for the registry and associated data transfers, PER repository, if you’re deploying to production twice a day every day and you have a private Docker registry.
Configure Kamal secrets
To allow Kamal to access the Docker registry, you'll need to configure the right access secrets in the `.kamal/secrets` file.
This is also partially why I found AWS ECR the easiest to set up. I can just write this in my secrets file:
# .kamal/secrets
AWS_ECR_PASSWORD=$(aws ecr get-login-password --region us-east-1 --profile default)
and that's enough for Kamal to access my ECR registry in production, provided I also set it up right in `deploy.yml`, like this:
# config/deploy.yml
registry:
  server: <your-aws-account-id>.dkr.ecr.us-east-1.amazonaws.com # your ECR registry URL
  username: AWS # for ECR, the username is literally the string "AWS"
  password:
    - AWS_ECR_PASSWORD
If I were using something else, like Docker Hub or the GitHub Container Registry, I’d have to provide Kamal with access to my whole password manager, in my case using Kamal’s Bitwarden adapter, which would require me to:
- Download and install the Bitwarden CLI tool
- Configure the Bitwarden CLI with my credentials and leave it logged in
- Unlock my Bitwarden vault in the CLI
- Make sure my Bitwarden vault is always unlocked in the CLI every time before deploying with Kamal
- (!!!) Expose my whole Bitwarden account's master email in plaintext by committing it to GitHub:
# You'd need to commit this line when using a password manager like Bitwarden to store Kamal secrets:
kamal secrets fetch --adapter bitwarden --account [email protected] REGISTRY_PASSWORD DB_PASSWORD
I found AWS not only cheaper, but also nicer in the way it shares secrets with Kamal, because I can just use the `aws` CLI and avoid leaking my Bitwarden master admin email.
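Putting it all together, here's the shape of a complete `.kamal/secrets` file for the setup in this post (the `config/db.key` file for the database password is explained in the Postgres section below):

# .kamal/secrets
AWS_ECR_PASSWORD=$(aws ecr get-login-password --region us-east-1 --profile default)
RAILS_MASTER_KEY=$(cat config/master.key)
POSTGRES_PASSWORD=$(cat config/db.key) # gitignored file holding the database password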
Run PostgreSQL on the same server as your Rails app
To run PostgreSQL on the same server as your Rails app, you'll need to configure a Postgres accessory and set up the right Postgres config across your Rails app.
This is not trivial.
To begin with, define your production Postgres accessory in Kamal's `deploy.yml` file:
# config/deploy.yml
# First, define the right database ENV variables in the env / clear section
env:
  clear:
    DB_HOST: servicename-postgres
    POSTGRES_USER: appname
    POSTGRES_DB: appname_production
    DB_PORT: 5432 # use 5432 regardless of what `localhost` port you expose below
# Then, configure the `postgres` accessory
accessories:
  postgres:
    image: postgres:15
    host: x.x.x.x
    port: "127.0.0.1:5432:5432" # Change this for multiple Postgres containers in the same machine
    env:
      clear:
        POSTGRES_USER: "${POSTGRES_USER}"
        POSTGRES_DB: "${POSTGRES_DB}"
      secret:
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data
    files:
      - db/production_setup.sql:/docker-entrypoint-initdb.d/setup.sql
A few things are going on here:
- We define an accessory named `postgres` (this name is important, we'll use it later).
- We use the `postgres:15` Docker image for the database, which will get pulled from the Docker registry. Choosing the `15` tag over the latest version (17 at the time of writing this blogpost) or the `alpine` variant is deliberate. PostgreSQL 15 is the most mature, stable and long-term supported PostgreSQL version as of today, with extensions and tools thoroughly tested against 15; and the alpine versions, despite having a smaller footprint, are not as well suited for production use because they may not fully support some extensions, may lack security updates, debugging tools, etc.
- We do not expose the PostgreSQL database to the internet. We achieve this by binding the server's internal / localhost 5432 port (`127.0.0.1:5432`) to the Postgres Docker container's own `5432` port. This way, as far as the server is concerned, the Postgres server is only running on `localhost:5432`, and not exposed to the internet (which is what writing `5432:5432` instead would do).
  - In fact, if you want to have multiple Rails apps running on the same server, each of them with their own PostgreSQL Docker container, you can just change the host port to something else like `5433` or `5434`, while still routing it to the Docker container's `5432` port, like this: `127.0.0.1:5433:5432`. We do this just to avoid multiple containers colliding on port 5432 on the host machine. This way you can run multiple PostgreSQL databases, each in their own Docker container defined in their own separate Kamal accessory, all of them running on the same server without colliding DB ports.
- We define a few `env` variables to hook up the database to the Rails app:
  - `POSTGRES_USER` and `POSTGRES_DB` should contain your app name as Rails usually uses it when creating databases. They usually look like `appname_development`, `appname_production`, etc. (if you're unsure, check out what your development database name is).
  - `DB_HOST` gets set to a special name that Kamal builds for you. It's the name that your database Docker container will get in production. Let me explain. The name has two parts: `servicename` and `postgres`. Let's start with `servicename`: in the first few lines of Kamal's `deploy.yml` file, you define a `service` name. I usually set it to the same name as my app (`appname`), but it may be different. This name, whatever it is – let's call it `servicename` – then gets used to name the Docker containers running on the production machine. In our case, we're defining a PostgreSQL accessory named `postgres`, so the Docker container that will run in production will be named `servicename-postgres`. If you name your service or PostgreSQL accessory differently, you'll need to adjust this.
  - `DB_PORT` stays `5432` even if you exposed a different localhost port like 5433 or 5434 to run multiple PostgreSQL containers on the same machine. That's because the Rails app talks to the database inside the Docker network, through the `servicename-postgres` name, and inside the Docker network containers always use the internal port, `5432`.
  - `POSTGRES_PASSWORD` gets passed here as an env variable, but it's defined and set in the `.kamal/secrets` file. You need to add it like `POSTGRES_PASSWORD=$(cat config/db.key)`, or read its value from a password manager like Bitwarden, as discussed before. If you decide to save it to disk, make sure to add the `config/db.key` file to your `.gitignore` so you don't commit it to GitHub, and know that it's less secure than storing it in a password manager like 1Password or Bitwarden.
- To persist database data across deployments, we map the PostgreSQL data directory to the host machine's filesystem with `data:/var/lib/postgresql/data`.
- We define a SQL script that will be run to set up the production database (only when the database is first created). This is the `db/production_setup.sql:/docker-entrypoint-initdb.d/setup.sql` mapping, which means we need to create a `db/production_setup.sql` file in our Rails project, like this:
-- db/production_setup.sql
CREATE DATABASE appname_production;
CREATE DATABASE appname_production_cache;
CREATE DATABASE appname_production_queue;
CREATE DATABASE appname_production_cable;
To finish, we need to actually use all these environment variables by hooking them up to the Rails app's database config in `config/database.yml`:
# config/database.yml
production:
  primary: &primary_production
    <<: *default
    username: <%= ENV["POSTGRES_USER"] %>
    database: <%= ENV["POSTGRES_DB"] %>
    password: <%= ENV["POSTGRES_PASSWORD"] %>
    host: <%= ENV["DB_HOST"] %>
    port: <%= ENV["DB_PORT"] %>
Of course, instead of deploying a PostgreSQL accessory with your Kamal app, you can also use a managed database, like DigitalOcean's PostgreSQL hosting or AWS RDS, or just host a PostgreSQL server yourself (as I outlined here) – in that case, just set the corresponding database URL and credentials in the `.kamal/secrets`, `config/deploy.yml`, and `config/database.yml` files, and do not define a Postgres accessory in Kamal's deploy config.
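If you go the managed route, the config would take roughly this shape (a sketch; `config/database_url.key` is just an illustrative gitignored file holding your provider's connection string):

# .kamal/secrets
DATABASE_URL=$(cat config/database_url.key)

# config/deploy.yml
env:
  secret:
    - DATABASE_URL

# config/database.yml
production:
  primary:
    <<: *default
    url: <%= ENV["DATABASE_URL"] %>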
Set up a remote builder machine
Sometimes you don’t want your own development machine to build the Docker images that you’ll use to deploy your Rails app. Why? Because you want a beefy machine to build your images faster and reduce deployment time, or simply because your development machine can’t build for the server’s target architecture.
Setting up a remote machine to build your Docker images is fairly simple. Just use the same hardening script we used for the Docker host server, and then configure the remote machine in Kamal's `deploy.yml` file, like this:
# config/deploy.yml
builder:
  arch: amd64
  local: false
  remote: ssh://docker@y.y.y.y
  args:
    RUBY_VERSION: 3.3.5
  secrets:
    - AWS_ECR_PASSWORD
    - RAILS_MASTER_KEY
You don't need to configure the remote build server's Docker daemon for remote access. That means no exposing Docker endpoints publicly, whether authenticated or not. That's exactly what I got wrong, and how I got hacked. Just use the exact same hardening script as on your Docker host server, and Kamal will connect to the remote build server over plain ssh – no Docker endpoints needed.
Make sure to specify everything correctly in `deploy.yml`:
- `amd64` is the target server architecture.
- `ssh://docker@y.y.y.y` is the SSH access to the builder server, where `docker` is the user Kamal will use to log in via ssh to the machine (if you configured your server differently, it may be `ubuntu`, `root` or something else), and `y.y.y.y` is the IP address of the remote build server. Needless to say, your local development computer has to have the right ssh key configured and loaded to make the SSH connection to the remote build server.
- `AWS_ECR_PASSWORD` and `RAILS_MASTER_KEY` are environment variables defined in the `.kamal/secrets` file, as explained above, that will get passed to the remote build server and the built Docker image (like the AWS ECR token for pushing the Docker image to the container registry).
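Before your first build, it's worth sanity-checking that your local machine can reach the builder over plain ssh and that Docker is installed there (adjust the user and IP to your setup):

# Run from your local development machine
ssh docker@y.y.y.y docker version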
Build and deploy your app and accessories
To deploy our Rails app with Kamal, we first need to deploy any accessories, like the PostgreSQL Docker image we’ve just configured, or else the Rails app will fail to boot because the database will not be found.
To deploy all Kamal accessories, run:
kamal accessory boot all
If you only want to deploy the PostgreSQL accessory, named `postgres` in our case, run:
kamal accessory boot postgres
If for whatever reason an accessory stops working and you need to reboot it, run:
kamal accessory reboot postgres
And if you want to remove the accessory to install it anew or because you no longer need it:
kamal accessory remove postgres
To verify everything is working, you can `ssh` into the server and run `docker ps`, which will list the Docker containers running on the host machine. If you only booted the Postgres accessory, it should show:
- A `postgres:15` image running with the name `servicename-postgres`
You can now deploy the full Rails app with Kamal by running:
kamal deploy
If everything went right, you should now see 3 containers running in the Docker host server:
- A `postgres:15` image running with the name `servicename-postgres` (already deployed above with `kamal accessory boot postgres`)
- An image containing your Rails app, running with a name like `appname-web-0f123`, where `0f123` is a long hash identifying your app version
- A `basecamp/kamal-proxy` image running with the name `kamal-proxy`, which is what routes traffic to your app
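You don't even need to ssh into the server for this check – Kamal can report the same information from your local machine:

# Show the state of the app and proxy containers, plus all accessories
kamal details
kamal accessory details all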
And if you got here, you should already have a fully deployed Rails 8 app with a dockerized PostgreSQL database using Kamal!
How to operate your production Rails app with Kamal
Now that you have your Rails app up and running, you’ll probably need to operate it in production. Some things you’ll want to do are: check data and change data in the production database, ssh into the Rails Docker image to run Linux commands, and see production logs. Kamal 2 provides easy shortcuts for each of these tasks. Let’s go one by one:
How to run a Rails console with Kamal
This is an easy one. Just run:
kamal console
And you should get an interactive Rails console hooked up to your production database.
How to open a psql console in the Kamal PostgreSQL accessory container
Sometimes you want to execute some SQL directly from a psql client. In that case, we need to access the production Postgres accessory container and execute an interactive psql console inside it. For that, run:
kamal dbc
This is a shortcut for `kamal app exec --interactive --reuse "bin/rails dbconsole"`, but you can also launch an interactive `psql` console for your accessory container with this long-form command:
kamal accessory exec postgres -i "psql -h servicename-postgres -p 5432 -U appname appname_production"
This will prompt you to input your production database password. After that, you’ll get a fully interactive psql console where you can run your usual PostgreSQL commands and SQL queries.
How to get an interactive console inside your Rails app Kamal container
To open up an interactive Linux console inside the Docker image running your Rails app, execute:
kamal shell
Which is just an alias for the longer command:
kamal app exec -i bash
And you’ll get an interactive shell where you’ll be able to run any Linux command. Just remember that any change you make to the Linux system executing your Rails app won’t be preserved across deployments – if you want that, you should make changes in the Rails app’s Dockerfile instead so the Docker image gets built with your new instructions.
How to get an interactive console inside your Postgres accessory container
Just run:
kamal accessory exec postgres --interactive --reuse "bash"
And you'll get an interactive console inside the PostgreSQL container.
How to check production logs on your Rails app using Kamal
If you want the equivalent of navigating to the logs folder and running `tail -f production.log`, just run the handy Kamal alias:
kamal logs
And you’ll get a stream of changes in the production log file.
How to get an interactive console to any Docker image inside the server
This may be obvious for those already familiar with Docker, but it was not for me – I’ve been using Capistrano for years and I’m still getting used to Docker containers.
If you're already `ssh`'d into your Docker host server, you can launch a shell console inside any container with:
docker exec -it -u DOCKER_CONTAINER_LINUX_USERNAME DOCKER_CONTAINER_NAME /bin/bash
Where `DOCKER_CONTAINER_NAME` is the name the container has when you run `docker ps`, and `DOCKER_CONTAINER_LINUX_USERNAME` is the Linux user inside the container that will be used to launch the console.
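For example, to get a shell inside the Postgres accessory container from earlier, as its `postgres` user:

docker exec -it -u postgres servicename-postgres /bin/bash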
How to fix the Kamal error: `/cable` not found
If your Rails app is using the Devise gem, chances are you'll get a `cable`-related error when deploying it with Kamal. The error looks something like:
Firefox can’t establish a connection to the server at wss://example.com/cable.
wss: ActionController::RoutingError (No route matches "cable")
This error stumped me for a good while, but the solution is as easy as setting this in your `config/initializers/devise.rb` file:
# config/initializers/devise.rb
config.reload_routes = false
The error is well documented in this GitHub issue.
What’s next
Now that your Rails app is running in production using Kamal, there are a few things you could look into:
- Adding good error reporting and logging (using papertrail or similar) – so you log and/or get notified of errors
- Monitoring your server (memory usage, CPU, disk, etc.) – I use my gem allgood for that; you could add something more advanced
- Set up database backups and real-time replicas: back up your database regularly, ideally to a location separate from your main production server, and ideally as a real-time PostgreSQL replica. At the very least look into the basics, so a failure or catastrophe doesn't mean data loss (see the bare-minimum sketch after this list)
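For backups, the barest possible starting point (and nothing more than that) would be cronning a nightly dump on the host machine, using the Postgres accessory container from earlier:

# Nightly logical dump of the production database, gzipped and dated
docker exec servicename-postgres pg_dump -U appname appname_production | gzip > /backups/appname_$(date +%F).sql.gz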
If you have good ideas on how to do any of the above, please do write a comment below! I don't know the best way of doing many of these things and could use some help.
P.S.: Follow me on Twitter to stay in the loop. I'm writing a book called Bold Hackers on making successful digital products as an indie hacker. Read other stories I've written. Subscribe below to get an alert when I publish a new post: