Zero-Downtime Deployments
Every deploy in Potions is a zero-downtime deploy. Your app keeps serving traffic throughout the entire build and release process. No maintenance windows, no dropped connections, no "please try again in a few minutes."
Blue-Green Slots
Potions achieves zero downtime using a blue-green deployment strategy. Each app has two slots - blue and green - and at any given time, one is active (serving traffic) and the other is idle.
Each slot has its own:
- Port: so both instances can run simultaneously without conflicts
- Environment file: .env.blue or .env.green with the correct PORT and RELEASE_NODE
- Systemd service: myapp-blue.service or myapp-green.service
- Release directory: /opt/potions/myapp/blue/ or /opt/potions/myapp/green/
- BEAM node name: e.g., myapp_blue@127.0.0.1 and myapp_green@127.0.0.1
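You can see the per-slot node name for yourself from a remote console on the running release. A quick check, assuming a release named myapp:

```elixir
# From a remote console on whichever slot is running (e.g. bin/myapp remote):
Node.self()
#=> :"myapp_blue@127.0.0.1"   (on the blue slot)
#=> :"myapp_green@127.0.0.1"  (on the green slot)
```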
When you trigger a deploy, Potions builds your release and starts it on the idle slot. The active slot continues serving requests the entire time. Traffic only switches after the new instance passes health checks.
The Deployment Sequence
Here's what happens step by step when you click Deploy.
1. Build
Potions clones your repository, fetches dependencies, compiles your code, and packages a Mix release. This happens on a dedicated build server - not on your production VPS. Your running app's performance isn't affected.
The compiled release is then uploaded to your server and extracted into the target slot's directory (e.g., /opt/potions/myapp/green/).
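Your project doesn't need anything special for this step - mix release builds with sensible defaults. If you want to see (or customize) what gets packaged, a minimal release definition in mix.exs looks like the sketch below; the myapp name and version are illustrative:

```elixir
# mix.exs — an optional, minimal release definition. `mix release` works
# with defaults; this only makes the packaging step concrete.
defmodule Myapp.MixProject do
  use Mix.Project

  def project do
    [
      app: :myapp,
      version: "0.1.0",
      elixir: "~> 1.16",
      deps: [],
      releases: [
        myapp: [
          include_executables_for: [:unix],
          applications: [runtime_tools: :permanent]
        ]
      ]
    ]
  end
end
```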
2. Configure the Slot
Potions writes an environment file for the target slot with the correct port, BEAM node name, and all of your app's environment variables. Each slot's RELEASE_NODE is unique so the two BEAM instances don't conflict when they run side by side.
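On the app side there's nothing slot-specific to do - the standard Phoenix runtime configuration that reads PORT at boot is enough. A minimal sketch, assuming the usual my_app / MyAppWeb naming:

```elixir
# config/runtime.exs — a minimal sketch of standard Phoenix runtime config.
# Potions writes PORT (and the rest of your env) into .env.blue / .env.green
# before starting the slot's service; the app just reads it at boot.
import Config

if config_env() == :prod do
  port = String.to_integer(System.get_env("PORT") || "4000")

  config :my_app, MyAppWeb.Endpoint,
    http: [port: port],
    # The endpoint must actually run its HTTP server in a release; setting
    # `server: true` here is one way to do that (the phx.gen.release template
    # gates this behind a PHX_SERVER env var instead).
    server: true
end
```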
3. Run Migrations
Database migrations run against your PostgreSQL database while the current instance is still serving traffic. Both the old and new instance share the same migrated database, so it's important to write backwards-compatible migrations whenever possible.
Potions calls MyApp.Release.migrate/0 via bin/<release> eval when your app has a release.ex module (the same one mix phx.gen.release generates). If your app doesn't have a release.ex, Potions falls back to iterating your configured :ecto_repos and running Ecto.Migrator directly. Migrations still work without the generator.
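For reference, this is roughly the module mix phx.gen.release generates, trimmed to the migration path - its migrate/0 is what Potions calls through bin/<release> eval:

```elixir
# lib/my_app/release.ex — roughly what mix phx.gen.release produces.
defmodule MyApp.Release do
  @app :my_app

  # Run all pending migrations for every configured repo.
  def migrate do
    load_app()

    for repo <- repos() do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  defp repos do
    Application.fetch_env!(@app, :ecto_repos)
  end

  defp load_app do
    Application.load(@app)
  end
end
```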
4. Start the New Instance
The target slot's systemd service starts. At this point, both instances are running - the old slot handling traffic through Caddy, and the new slot warming up on its own port.
5. Health Check
Before any traffic switches, Potions verifies the new instance is healthy by making HTTP requests directly to it.
If the health check fails, the new slot is stopped, the active slot is restarted as a safety measure, and the deployment is marked as failed. Your running app is not affected by a failed deploy.
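Any route that responds successfully once your app is ready will satisfy the check. A lightweight dedicated route is a common pattern; the /health path and module names below are illustrative, not something Potions prescribes:

```elixir
# An illustrative health endpoint; path and module names are assumptions.
# In lib/my_app_web/router.ex you would add:
#   get "/health", MyAppWeb.HealthController, :show
defmodule MyAppWeb.HealthController do
  use MyAppWeb, :controller

  # Respond 200 once the app (and anything else you check here) is ready.
  def show(conn, _params) do
    send_resp(conn, 200, "ok")
  end
end
```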
6. Switch Traffic
Once health checks pass, the cutover happens:
- Potions records the new active slot in the database
- Caddy's configuration is rewritten to proxy traffic to the new slot's port
- Caddy reloads - in-flight requests complete on the old connection while new requests route to the updated instance
7. Drain the Old Instance
After the traffic switch, the old slot receives a SIGTERM signal and is given a 15-second grace period to finish any in-progress requests. After the grace period, the service is stopped completely.
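If you want your app to make the most of that window, recent Phoenix versions ship a connection drainer on the endpoint that stops accepting new connections and waits for in-flight ones during shutdown. A tuning sketch, assuming a Phoenix version that supports the endpoint's :drainer option (check your version's Phoenix.Endpoint docs):

```elixir
# config/prod.exs — optional. Keeping the drainer's shutdown below Potions'
# 15-second grace period lets in-flight requests finish before systemd
# stops the old slot. Assumes your Phoenix version supports :drainer.
import Config

config :my_app, MyAppWeb.Endpoint,
  drainer: [shutdown: 10_000]
```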
The old slot's release binary stays on disk. This is what makes rollbacks fast - there's no need to rebuild.
Rollbacks
Because both slots keep their release binaries on disk, rolling back doesn't require a rebuild. When you click Rollback, Potions:
- Starts the previous slot's existing binary
- Runs health checks against it
- Switches Caddy to route traffic to the previous slot
- Drains and stops the current slot
This makes rollbacks significantly faster than a fresh deploy - seconds instead of minutes. See Triggering a Manual Deploy for how to initiate a rollback from the dashboard.
Port Assignment
When you create an app, Potions assigns two ports - one for each slot. Ports start at 4000 and increment by two for each app on the server:
| App | Blue Port | Green Port |
|---|---|---|
| First app | 4000 | 4001 |
| Second app | 4002 | 4003 |
| Third app | 4004 | 4005 |
You don't configure ports manually. Potions handles assignment, and Caddy routes traffic from ports 80/443 to the correct app port based on domain configuration.
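The rule is simple enough to state directly. Purely as an illustration of the arithmetic - this is not Potions' internal code:

```elixir
defmodule PortExample do
  # Illustration of the assignment rule above. The nth app created on a
  # server (0-indexed) gets 4000 + 2n for its blue slot and the next
  # port for its green slot.
  def slot_ports(app_index) when is_integer(app_index) and app_index >= 0 do
    blue = 4000 + app_index * 2
    {blue, blue + 1}
  end
end

PortExample.slot_ports(0) #=> {4000, 4001}
PortExample.slot_ports(2) #=> {4004, 4005}
```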
One Deploy at a Time
Potions enforces a single active build per server. If you trigger a deploy while another is already running on the same server, the new deployment waits in a Queued status until the active build completes. This prevents resource contention and conflicting slot operations.
Things to Know
- Memory peaks briefly during the overlap window. Both the old and new instances run simultaneously between the health check and the drain. On memory-constrained servers, keep this in mind when choosing a droplet size.
- Migrations run before health checks. This means your migration must be compatible with the currently running code. Additive changes (new columns, new tables) are safe. Destructive changes (dropping columns the old code still reads) should be split across two deploys - see the sketch after this list.
- If Caddy fails to update, the old slot stays alive. Potions won't stop your running instance if it can't switch traffic to the new one.
- mix phx.gen.release is recommended but not required. Potions uses MyApp.Release.migrate/0 when it's defined - the same function the generator's bin/migrate script calls internally. We don't invoke bin/server or bin/migrate by name because blue-green slots need per-slot control. The effective behavior is identical to the wrappers.
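Here's what the two-deploy split for a destructive change looks like in practice, sketched with hypothetical table and column names: the first deploy only adds, so it's safe while the old code is still running; the second deploy drops the old column only once no running code reads it.

```elixir
# Deploy 1 — additive and backwards-compatible: the old code ignores the
# new column, the new code starts using it. (Names are hypothetical.)
defmodule MyApp.Repo.Migrations.AddDisplayName do
  use Ecto.Migration

  def change do
    alter table(:users) do
      add :display_name, :string
    end
  end
end

# Deploy 2 — destructive, shipped only after no running code reads the
# old column anymore.
defmodule MyApp.Repo.Migrations.RemoveLegacyName do
  use Ecto.Migration

  def change do
    alter table(:users) do
      remove :name, :string
    end
  end
end
```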