App Deployment (PaaS)
GitBlixt includes a built-in platform-as-a-service that deploys your repositories as running applications with automatic builds, zero-downtime rolling updates, and SSL. This feature is optional — the git forge works normally without it — and requires Caddy as a reverse proxy.
This guide walks through every step: setting up the infrastructure, enabling deployment on a repository, managing environments and environment variables, monitoring logs, opening a shell into a running container, and administering the platform as a whole.
Prerequisites
- Docker Swarm initialized on the host (the default installer handles this, or run docker swarm init)
- Caddy reverse proxy running and connected to the proxy Docker overlay network
If Caddy is not running, the Deployment section in repository settings will show a warning with setup instructions. Everything else in GitBlixt continues to work.
Setting Up Caddy
Caddy is included in the default docker-compose.yml. You need to create a Caddyfile with your domain and the required admin API setting, create two overlay networks, and start the stack.
1. Create the Caddyfile
Create caddy/Caddyfile with this content:
{
# Required — allows GitBlixt to manage PaaS routes.
admin 0.0.0.0:2019
}
yourdomain.com {
reverse_proxy gitblixt:4000
}
Replace yourdomain.com with your actual domain. Caddy will auto-provision HTTPS via Let's Encrypt. The admin directive is mandatory — without it, GitBlixt cannot register routes for deployed apps.
2. Create the overlay networks
docker network create --driver overlay --attachable gitblixt
docker network create --driver overlay --attachable proxy
Both networks are required. The gitblixt network provides Postgres connectivity, and the proxy network lets GitBlixt reach the Caddy admin API. The GitBlixt Swarm service must be attached to both.
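For reference, a compose fragment wiring everything together might look like the sketch below. The service and image names are illustrative assumptions, not the shipped docker-compose.yml; the point is that the GitBlixt service joins both overlay networks:

```yaml
# Illustrative sketch only — adapt names to your actual compose file.
services:
  gitblixt:
    image: gitblixt/gitblixt:latest   # hypothetical image name
    networks:
      - gitblixt   # Postgres connectivity
      - proxy      # reach the Caddy admin API at http://caddy:2019
  caddy:
    image: caddy:2
    networks:
      - proxy
networks:
  gitblixt:
    external: true   # created with: docker network create --driver overlay --attachable gitblixt
  proxy:
    external: true   # created with: docker network create --driver overlay --attachable proxy
```

Marking the networks as external matches the manual `docker network create` step above, so compose attaches to the existing overlay networks instead of creating project-scoped ones.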
3. Start the stack
docker compose -p gitblixt up -d
Caddy will start with the admin API listening on all interfaces. GitBlixt reaches it at http://caddy:2019 via Docker overlay DNS.
Alternative: Deploy Caddy from the Admin Panel
If you prefer not to set up Caddy manually, an admin can deploy it from the Admin → Infrastructure page at /admin/infrastructure. The Caddy card shows the current status. If Caddy is not running, click Deploy Caddy — GitBlixt will create the Docker service, write a bootstrap Caddyfile, attach the required networks, and wait for the admin API to come up. This is equivalent to the manual steps above.
Enabling Deployment on a Repository
Deployment is configured per-repository from the repository settings page.
- Go to your repository's Settings page and scroll to the Deployment section.
- Toggle Deploy this app on. If Caddy is not reachable, a warning will appear explaining why the toggle is unavailable and linking to these docs.
- Click Save deployment settings.
When you enable deployment, GitBlixt automatically creates a production environment for the repository. You can then configure that environment or create additional ones (staging, preview, etc.) from the Environments page.
To disable deployment later, toggle it off and save. GitBlixt will scale all Docker services to zero, remove Caddy routes, stop log streamers, and set all environments to "stopped". No data is deleted — you can re-enable at any time.
Environments
Each deployed repository can have multiple environments. An environment maps a git branch to a running Docker service with its own domain(s), port, replicas, and environment variables.
Access the environments page from your repository's navigation bar under Environments, or from the "Configure environments and deploy" link in the Deployment settings section.
Creating an Environment
The production environment is created automatically when you enable deployment. To create additional environments (e.g. staging):
- Click + New environment on the Environments page.
- Fill in the form:
- Name — lowercase letters, digits, and hyphens. This becomes part of the Docker service name.
- Branch — select from the repository's branches. Pushes to this branch will auto-deploy to this environment (if auto-deploy is enabled). Leave blank to use the repository's default branch.
- Domains — comma-separated list of domains. SSL is provisioned automatically by Caddy for each domain.
- Port — the port your application listens on inside the container. Default: 4000.
- Replicas — number of container replicas for this environment (1–10). Docker Swarm load-balances across replicas.
- Health check path — the HTTP path to probe during deploy. Default: /health.
- Auto-deploy on push — when checked, a successful CI pipeline on the configured branch automatically triggers a deploy. Checked by default.
- Provision a Postgres database — creates a dedicated database on the shared Postgres instance and injects DATABASE_URL automatically.
- Click Create environment.
Environment Statuses
Each environment has one of these statuses:
- Active — the service is running and healthy
- Deploying — a build or deploy is in progress
- Stopped — the service has been stopped or has not been deployed yet
- Expired — a preview environment that exceeded its TTL
Configuring an Environment
Click an environment's name on the Environments page to open its detail page. From here you can:
- Change the branch, domains, port, replicas, and health check path
- Enable or disable auto-deploy
- View the Docker service name
- See the last deployed commit SHA and timestamp
- Deploy now — trigger a manual deploy of the latest commit on the configured branch, without waiting for a push
- Clear build cache — forces the next Docker build to run with --no-cache, discarding cached layers. Useful when stale layers cause build issues.
Deleting an Environment
Permanent environments can be deleted from the Environments index page using the delete button on each environment card. The production environment cannot be deleted from this page — to fully tear down deployment, disable it in the repository settings.
The Deploy Pipeline
When a deploy is triggered (either automatically via push or manually from the environment page), GitBlixt runs the following steps:
- Push to branch — triggers a CI pipeline
- CI passes — a deploy job is automatically enqueued
- Dockerfile detection — uses your Dockerfile, Dockerfile.build, or Dockerfile.prod if present. Otherwise, auto-generates one for Elixir/Phoenix (detected via mix.exs) or Rails (detected via Gemfile) projects.
- Build — builds a Docker image tagged with the commit SHA
- Smoke test — starts the container and probes the health check endpoint. The new version is not promoted until this passes.
- Migrations — runs /app/bin/migrate if a managed database is enabled (Elixir apps only)
- Deploy — creates or updates the Docker Swarm service with a rolling update (start-first order, automatic rollback on failure)
- Caddy route — registers the domain(s) in Caddy for automatic HTTPS
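Under the hood, Caddy's admin API stores routes as JSON config. A route registered for a deployed app plausibly looks like the fragment below; the @id value and the upstream service name are hypothetical, since GitBlixt's exact naming scheme is not documented here:

```json
{
  "@id": "gitblixt-myrepo-production",
  "match": [{ "host": ["yourdomain.com"] }],
  "handle": [
    {
      "handler": "reverse_proxy",
      "upstreams": [{ "dial": "myrepo-production:4000" }]
    }
  ]
}
```

The dial target resolves via Docker overlay DNS to the Swarm service, which is why GitBlixt and Caddy must share the proxy network. This is also what the admin "View routes" modal is listing.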
The environment detail page shows the Latest Deploy section with a live-updating log while the deploy is in progress. You can see the status (pending/running/success/failed), start and finish timestamps, and the full build output.
Dockerfile Mode
Each environment tracks whether it is using a custom Dockerfile (from your repository) or an auto-generated one. The current mode is shown on the environment detail page next to the deploy log.
If a deploy fails with a custom Dockerfile, the environment page offers a Retry with generated Dockerfile button that switches to auto mode and re-triggers the deploy. You can switch back to custom mode at any time.
Environment Variables
Environment variables are encrypted at rest with AES-256-GCM and injected into the container at deploy time. Keys must be uppercase with underscores (e.g. SECRET_KEY_BASE).
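The key format amounts to a pattern like ^[A-Z][A-Z0-9_]*$. A quick way to sanity-check a key before adding it; note the regex is our reading of the stated rule, not GitBlixt's actual validator:

```shell
# Check an env var key against the documented format (uppercase letters,
# digits, underscores). Regex is an assumption based on the docs text.
key="SECRET_KEY_BASE"
if printf '%s' "$key" | grep -Eq '^[A-Z][A-Z0-9_]*$'; then
  echo "ok: $key"
else
  echo "invalid: $key"
fi
```

A key like `secret_key_base` or `DB-URL` would fail this check.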
Variables are managed on the environment detail page and use a three-layer precedence system:
- CI/CD variables (lowest priority) — set in Repository Settings → CI/CD Variables. These are primarily intended for build and test pipelines, but are visible as read-only shared variables on the environment page. They cannot be edited from the environment page.
- Shared variables (medium priority) — set on the environment detail page under "Shared variables". These are shared across all environments for the repository. If a shared variable has the same key as a CI/CD variable, the shared variable takes precedence.
- Environment-specific variables (highest priority) — set on the environment detail page under "Environment variables". These override both shared and CI/CD variables with the same key.
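The three layers behave like shell assignments applied lowest-priority first, so later (higher-priority) layers win on key collisions. A sketch with made-up keys and values:

```shell
# Layer 1: CI/CD variables (lowest priority)
LOG_LEVEL=info
POOL_SIZE=5

# Layer 2: shared variables (medium priority); same key overrides layer 1
LOG_LEVEL=debug

# Layer 3: environment-specific variables (highest priority)
POOL_SIZE=20

echo "LOG_LEVEL=$LOG_LEVEL POOL_SIZE=$POOL_SIZE"
# prints: LOG_LEVEL=debug POOL_SIZE=20
```

LOG_LEVEL ends up as the shared value and POOL_SIZE as the environment-specific value, which is exactly the precedence the list above describes.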
For each variable, you can:
- Reveal — decrypt and show the value (authorized users only)
- Edit — update the value inline
- Delete — remove the variable
Auto-Detected Variables
GitBlixt scans your repository's configuration files for environment variables that your app expects (e.g. from config/runtime.exs or .env.example). Detected variables that are not yet configured appear in a Detected variables section with suggested default values. You can set them with one click or dismiss them permanently if they are not needed.
Managed Database
Check Provision a Postgres database when creating an environment (or enable it later) to have GitBlixt automatically create a dedicated Postgres database for your app. GitBlixt will:
- Create a new database and user on the shared Postgres instance
- Store the DATABASE_URL as an encrypted environment variable
- Inject it automatically at deploy time
If you do not enable managed database and do not have a DATABASE_URL in your environment variables, the environment page will show an informational notice explaining the auto-provisioning option.
To remove a managed database, click Remove database on the environment detail page. This drops the database, removes the user, and deletes the DATABASE_URL environment variable.
Database Performance Monitoring
If your environment has a DATABASE_URL configured (whether managed or externally provided), the environment detail page shows a Database Performance section with:
- Database size, connection count (active/idle/total), and cache hit ratio
- Active queries — currently running queries with PID, user, state, duration, and query text
- Slow queries — mean execution time, call count, total time (requires pg_stat_statements)
Click Refresh to reload the stats. If the database connection fails, a warning is shown with the specific error.
Health Check
The deploy pipeline runs a smoke test against your app's health endpoint before promoting the new version. Configure the path in the environment settings (default: /health). If your app doesn't have a dedicated health endpoint, set it to / — any HTTP 2xx response will pass.
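The passing condition is simply "any 2xx status". In shell terms, the smoke test's acceptance logic amounts to something like this; a sketch of the rule, not GitBlixt's actual probe code:

```shell
# Example status code returned by the health endpoint during the smoke test.
status=204

# Any 2xx counts as healthy; everything else fails the smoke test.
case "$status" in
  2??) echo "healthy ($status)" ;;
  *)   echo "unhealthy ($status)" ;;
esac
```

So a 204 No Content from /health passes, while a 302 redirect or 404 would fail the deploy before the new version is promoted.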
Logs
Each environment has a dedicated logs page accessible from the environment card or detail page via the Logs link.
The log viewer has three modes:
- Live — tails the container's stdout/stderr in real time. New lines appear as they are emitted. GitBlixt starts a log streamer for the environment's Docker service automatically when you switch to live mode.
- Recent — shows a snapshot of the most recent 200 log lines from the database. Scroll up to lazy-load older entries (200 at a time).
- Search — full-text search across stored log lines with filters for HTTP status codes (2xx–5xx) and time range (15 minutes to 7 days, or all time). Returns up to 500 matching lines.
Logs are batched and persisted to the database every 5 seconds. The live view buffers up to 10,000 lines in the browser.
Web Shell
For active environments, you can open an interactive terminal inside the running container. Click the Shell link on the environment detail page or the environments index.
Access Control
Shell access is restricted to:
- Admin users (always have access)
- Repository owners (for personal repositories)
- Organization owners and maintainers (for organization repositories)
If you do not have access, the shell page shows an "Unauthorized" message. If the environment is not active, a warning shows the current status.
Using the Shell
Click Open Shell to connect. The terminal uses xterm.js and supports:
- Full interactive terminal with color, cursor positioning, and tab completion
- Ctrl+Shift+C to copy selected text
- Ctrl+Shift+V to paste from clipboard
- Automatic resize when the browser window changes
- Clickable URLs in terminal output
Sessions have a 30-minute idle timeout. There is a maximum of 5 concurrent shell sessions per environment. You can disconnect and reconnect at any time using the buttons in the terminal header.
For Elixir/Phoenix apps, you can run bin/myapp remote to get an IEx
console connected to the running application. See the troubleshooting section below
if the remote shell hangs.
Supported Frameworks
If your repository has a Dockerfile (or Dockerfile.build or Dockerfile.prod), it will be used as-is. Otherwise, GitBlixt auto-detects the framework and generates one:
- Elixir/Phoenix — detected via mix.exs. Multi-stage build with hex, rebar, esbuild, and release.
- Ruby on Rails — detected via Gemfile. Bundled with asset precompilation.
For other frameworks, add a Dockerfile to your repository root.
Sample Dockerfile: Elixir/Phoenix
FROM elixir:1.19-otp-28-slim AS builder
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      build-essential git ca-certificates curl \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
RUN mkdir config
COPY config/config.exs config/prod.exs config/
RUN mix deps.compile
COPY priv priv
COPY lib lib
RUN mix compile
COPY assets assets
RUN mix assets.setup
RUN mix assets.deploy
COPY config/runtime.exs config/
COPY rel rel
RUN mix release --overwrite

FROM elixir:1.19-otp-28-slim AS final
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      locales ca-certificates curl \
 && sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen \
 && locale-gen \
 && rm -rf /var/lib/apt/lists/*
ENV LANG=en_US.UTF-8 \
    LANGUAGE=en_US:en \
    LC_ALL=en_US.UTF-8 \
    MIX_ENV=prod \
    PHX_SERVER=true \
    PORT=4000
WORKDIR /app
COPY --from=builder /app/_build/prod/rel/myapp ./
HEALTHCHECK --interval=10s --timeout=5s --start-period=45s --retries=3 \
  CMD curl -sf http://localhost:${PORT}/health || exit 1
EXPOSE 4000
CMD ["/app/bin/myapp", "start"]
Replace myapp with your application name (the one in mix.exs). The runner stage uses the full Elixir slim image rather than a bare Debian image so that ERTS and the tty group are properly configured — this is required for bin/myapp remote to work in the web shell.
Sample Dockerfile: Ruby on Rails
FROM ruby:3.3-slim AS builder
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      build-essential git libpq-dev nodejs npm \
 && npm install -g yarn \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
ENV RAILS_ENV=production \
    BUNDLE_WITHOUT=development:test
COPY Gemfile Gemfile.lock ./
RUN bundle install --jobs 4 --retry 3
COPY . .
RUN bundle exec rails assets:precompile

FROM ruby:3.3-slim AS final
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      libpq5 curl \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
ENV RAILS_ENV=production \
    RAILS_SERVE_STATIC_FILES=true \
    PORT=3000
COPY --from=builder /app /app
COPY --from=builder /usr/local/bundle /usr/local/bundle
HEALTHCHECK --interval=10s --timeout=5s --start-period=30s --retries=3 \
  CMD curl -sf http://localhost:${PORT}/up || exit 1
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
Set the container port to 3000 and the health check path to /up (the Rails default) in your environment settings.
Preview Deployments
When you open a merge request, you can deploy a temporary preview environment. Previews are automatically assigned a subdomain based on the production environment's domain (e.g. pr-1.yourdomain.com) and can be configured to auto-expire.
DNS Setup for Preview Subdomains
For preview subdomains to work, you need a wildcard DNS record pointing
to your server. This is a standard A record with * as the subdomain:
*.yourdomain.com A YOUR_SERVER_IP
This single record makes all subdomains (like pr-1.yourdomain.com, pr-2.yourdomain.com, etc.) resolve to your server. Caddy handles HTTPS certificates automatically for each subdomain.
If you use a DNS provider like Cloudflare, add the wildcard record in their dashboard:
- Type: A
- Name: *
- Content: your server's IP address
- Proxy status: DNS only (disable Cloudflare proxy for wildcard certs)
Note: A wildcard record does not override explicit records. If you have mx.yourdomain.com pointing to a mail provider, the wildcard won't affect it — DNS always prefers the more specific record.
Without a wildcard DNS record, preview subdomains won't resolve and the preview will only be accessible via its internal container IP.
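In BIND-style zone file terms, the wildcard coexists with explicit records like this (IP addresses are illustrative documentation addresses):

```
; The wildcard catches pr-1, pr-2, ... while explicit records still win.
*.yourdomain.com.    300  IN  A  203.0.113.10
yourdomain.com.      300  IN  A  203.0.113.10
mx.yourdomain.com.   300  IN  A  198.51.100.20  ; more specific, unaffected by the wildcard
```

If your provider's dashboard only takes name/type/content fields, the first record corresponds to name `*`, type A, content your server IP, as described above.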
Preview Lifecycle
- Previews inherit environment variables and configuration from the source environment
- Each MR gets one preview at a time — re-deploying replaces the existing one
- Previews can be set to auto-expire after a configurable TTL
- Previews are automatically cleaned up when the MR is merged or closed
- You can manually stop a preview at any time from the MR page or the environments page
Managing Previews
Preview environments appear in a separate section on the Environments index page. Each preview shows its name, the associated MR number, domain links, expiration time, and status. You can stop a preview using the Stop button.
Admin: Infrastructure Dashboard
Admins have access to an infrastructure dashboard at Admin → Infrastructure (/admin/infrastructure) that provides visibility and control over the deployment platform.
Caddy Management
The Caddy card on the infrastructure page shows:
- Status — Running (with version and uptime), Admin API unreachable, Not deployed, or Unknown
- Route count — total registered routes
- Orphan count — routes that no longer map to an active environment (highlighted as a warning)
Available actions:
- Deploy Caddy — creates the Caddy Docker service from scratch (only shown when Caddy is not running)
- Restart Caddy — restarts the running Caddy service
- View routes — opens a modal listing all Caddy routes with their domain(s), upstream service and port, and status (Linked, Orphaned, System, or Manual). From this modal you can:
- Delete individual routes
- Remove all orphaned routes in bulk
- Mark routes as "system" to protect them from orphan cleanup (useful for manually added routes that are not managed by GitBlixt)
Other Infrastructure Sections
The infrastructure dashboard also shows system-wide metrics that are relevant to deployed apps:
- GitBlixt metrics — includes counts of active/deploying/stopped/expired environments, active log streamers, and shell sessions
- PostgreSQL — database size, connections, cache hit ratio, long-running queries, slow queries, and biggest tables
- Postgres management — restart the Postgres Docker container if needed (useful after configuration changes like enabling pg_stat_statements)
The dashboard auto-refreshes every 3 seconds for fast-moving metrics (BEAM, host, GitBlixt) and supports manual refresh for heavier queries (Postgres, Caddy). You can pause auto-refresh with the toggle in the header.
Admin: Apps Overview
Admins can also view all deployed applications from Admin → Apps (/admin/apps). This provides a cross-repository view of every deployed app with status, logs, and actions (restart, stop, redeploy).
Troubleshooting
The "Deploy this app" toggle is disabled
This means GitBlixt cannot reach the Caddy admin API. Either Caddy is not running or is not on the proxy network. Check:
- Admin → Infrastructure — the Caddy card shows the current status. Use Deploy Caddy if it is not running.
- Verify the overlay networks exist: docker network ls | grep -E "gitblixt|proxy"
- Verify the Caddy container is on the proxy network: docker inspect caddy --format '{{json .NetworkSettings.Networks}}'
Deploy fails at the smoke test
- Check that the health check path is correct. The default is /health, but your app may use /, /up, or something else.
- Check the deploy log on the environment page for the specific error.
- Make sure the container port matches what your app actually listens on.
Deploy fails with a custom Dockerfile
If your Dockerfile has issues, the environment page shows a warning with a Retry with generated Dockerfile button. This switches to auto mode and re-runs the deploy. You can switch back to your custom Dockerfile once the issue is fixed.
Domain not reachable after deploy
- Verify DNS points to your server: dig +short yourdomain.com
- Check Admin → Infrastructure → View routes to confirm the route was registered in Caddy
- Caddy provisions SSL on first request — the first load may take a few seconds
Environment variables not taking effect
Environment variables are injected at deploy time, not at runtime. After adding or changing variables, you must redeploy the environment using the Deploy now button.
Check the variable precedence: environment-specific variables override shared variables, which override CI/CD variables. If you set a key at multiple levels, the highest-priority one wins.
Elixir/OTP: Remote Shell Hanging in Docker
If bin/myapp remote hangs in the web shell, stale ERL_FLAGS in your Docker image are almost certainly the cause.
The Problem: ERL_FLAGS and --noinput
Docker and the Elixir build toolchain can leave ERL_FLAGS set in the container environment. If ERL_FLAGS contains --noinput, the BEAM starts with no terminal input — which is correct for the production server (bin/myapp start) but completely breaks the interactive remote console (bin/myapp remote). The remote console opens, :prim_tty sees no input device, and the process hangs with zero output.
This is especially insidious because ERL_FLAGS can leak from the Docker
build stage into cached layers. Even if you don't set it yourself, a dependency or
build step might, and Docker layer caching preserves it silently.
The Fix
Add a rel/env.sh.eex file to your project that unsets ERL_FLAGS:
#!/bin/sh
unset ERL_FLAGS
This file is sourced by the Elixir release script before starting the BEAM. It ensures that no stale flags from the build environment affect the runtime, regardless of what's cached in the Docker image.
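The mechanism is easy to verify locally: sourcing a script that unsets a variable clears it for the current process, which is what the release boot script does with the generated env.sh. This demo uses a scratch temp file in place of the real rel/env.sh.eex:

```shell
# Simulate a stale flag baked into the Docker image.
export ERL_FLAGS="--noinput"

# Stand-in for the rendered rel/env.sh.eex.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
unset ERL_FLAGS
EOF

# The release boot script sources env.sh the same way before starting the BEAM.
. "$tmp"

if [ -z "${ERL_FLAGS:-}" ]; then
  echo "ERL_FLAGS cleared"
fi
```

Because the script is sourced rather than executed, the unset applies to the launching process itself, so the BEAM never sees the stale flag.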
Other Flags to Watch
If you have a custom rel/vm.args.eex, also avoid +sbwtdio none (disables busy-waiting on dirty I/O schedulers). OTP 28's :prim_tty runs on dirty I/O schedulers, and combined with +sfwi 500, the scheduler can be too slow to process the terminal handshake:
# Safe:
+sbwt none
+sbwtdcpu none
+sfwi 500
# NOT safe — breaks bin/myapp remote:
+sbwtdio none
After making these changes, clear the Docker build cache from the environment page and redeploy. The Clear build cache button ensures the next build runs with --no-cache so stale layers are not reused.
Shell shows "Unauthorized"
Shell access requires admin, repository owner, or organization owner/maintainer role. Regular collaborators (developer, reporter, guest) do not have shell access.
Shell shows environment is not active
The web shell only works when the environment status is active. If the environment is stopped, deploying, or expired, you need to deploy it first.
"Too many sessions"
There is a maximum of 5 concurrent shell sessions per environment. Close unused sessions from other browser tabs before opening a new one.
Orphaned Caddy routes
If you see orphaned routes in Admin → Infrastructure → View routes, these are routes that were registered in Caddy but no longer correspond to an active environment. This can happen if an environment was deleted while Caddy was unreachable. Use the Remove all orphaned button to clean them up, or delete individual routes.
If a route is intentionally managed outside of GitBlixt (e.g. a manually configured service), mark it as a system route to protect it from orphan cleanup.