Rebuilding the Asterisk Docker Matrix

The project sat for a while. The matrix of Asterisk Docker images on andrius/asterisk had not been touched in years, even as new Asterisk releases kept landing. Over the past few months I rebuilt it from the ground up, kept the legacy coverage, and put it on a footing where adding a new release is one workflow run.

Origins

The repository started as the smallest Asterisk Docker image I could build — a single Dockerfile, a single version, optimized for size. It later grew into a matrix of versions used for compatibility testing across the major branches. Then the project went quiet, while Asterisk 21, 22, and 23 each shipped, chan_sip was removed, chan_websocket was added, and the Debian base distributions moved on to Bookworm and Trixie.

What I Rebuilt

Template System

The Dockerfiles are no longer hand-edited. Configuration lives in three template layers: a base layer with the common build and runtime packages, a distribution layer that pins the OS-specific package versions (libicu76 on Trixie, libicu72 on Bookworm, and so on back to Jessie), and a variant layer that captures version-specific differences (legacy-addons for 1.2 to 1.6, asterisk10 for 1.8 to 11, modern for 12 and later). A Jinja2 generator merges these into a YAML config and emits the Dockerfile.
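The merge itself can be sketched as a deep dictionary merge where later layers win. This is a minimal illustration, not the actual code in lib/template_generator.py; the layer contents and key names here are hypothetical, and the real generator then renders the merged config through Jinja2 templates:

```python
from copy import deepcopy

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; later layers win on conflicts."""
    merged = deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Illustrative layer contents, applied in order: base -> distribution -> variant.
base = {"packages": ["build-essential", "curl"], "user": "asterisk"}
distro = {"image": "debian:trixie-slim", "runtime_packages": ["libicu76"]}
variant = {"configure_flags": ["--with-pjproject-bundled"]}

config = deep_merge(deep_merge(base, distro), variant)
# config now holds the combined keys from all three layers,
# ready to be fed into a Jinja2 Dockerfile template.
```

The value of the layering is that a new distribution or variant is one small override file rather than a copied-and-edited Dockerfile.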

Version-specific module requirements are applied automatically during the merge. From Asterisk 21 onward, chan_sip is dropped, since it was deleted upstream and only chan_pjsip remains; from Asterisk 23 onward, chan_websocket is included along with the full WebSocket transport stack. The overrides live in lib/template_generator.py and run on every config regeneration, so the templates stay clean and the version rules stay enforced.
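In spirit, the overrides are a small pure function from version to module set. The function and rule names below are hypothetical stand-ins for whatever lib/template_generator.py actually does; only the two version rules themselves come from the text above:

```python
def apply_version_rules(version: str, modules: set[str]) -> set[str]:
    """Adjust the enabled module set based on the Asterisk major version."""
    major = int(version.split(".")[0])  # "23.2.0" -> 23, "11.6-cert18" -> 11
    modules = set(modules)
    if major >= 21:
        modules.discard("chan_sip")    # removed upstream in Asterisk 21
    if major >= 23:
        modules.add("chan_websocket")  # added upstream in Asterisk 23
    return modules

mods = apply_version_rules("23.2.0", {"chan_sip", "chan_pjsip"})
# → {"chan_pjsip", "chan_websocket"}
```

Keeping the rules in code rather than in the templates means a rule is stated once and applied to every matching build directory on regeneration.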

Version Coverage

There are more than forty build directories under asterisk/, covering every supported version from 1.2.40 through 23.2.0, plus the certified releases (11.6-cert18 through 20.7-cert8) and a rolling git build from upstream master. The legacy versions are kept because there are still production deployments pinned to those branches, and the images are useful for regression checks against integrations that have to talk to old PBX installations.

CI/CD

A discovery workflow runs daily at 8:00 PM UTC, scrapes the official Asterisk release pages, and opens a PR when a new patch release appears. Builds are batched across the week, with Monday through Thursday each taking a slice of the matrix, so that any single run stays small enough to debug. Multi-arch images are assembled by pushing per-arch builds to GHCR first, using those as the digest source for the manifest, and then mirroring the result to Docker Hub. Successful builds announce themselves on Telegram (@asterisk_docker) and Mastodon.
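The discovery step boils down to diffing the versions visible on a release index against the versions the matrix already builds. This is a hedged sketch under the assumption that release tarballs are named `asterisk-X.Y.Z.tar.gz` on the index page; the real workflow's URL handling and PR creation live in the repository's CI config:

```python
import re

# Matches release tarball names such as asterisk-23.2.0.tar.gz
# (assumed naming scheme; certified releases would need a second pattern).
VERSION_RE = re.compile(r"asterisk-(\d+\.\d+\.\d+)\.tar\.gz")

def find_new_versions(index_html: str, known: set[str]) -> set[str]:
    """Return versions present on the release page but absent from the matrix."""
    found = set(VERSION_RE.findall(index_html))
    return found - known

page = '<a href="asterisk-22.2.0.tar.gz">…</a> <a href="asterisk-23.2.0.tar.gz">…</a>'
new = find_new_versions(page, known={"22.2.0"})
# → {"23.2.0"}
```

Anything returned by `find_new_versions` would then drive the automated PR that adds the new build directory.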

Keeping It Supported

The substrate is the point. Daily discovery picks up new Asterisk versions automatically, the batch workflows rebuild the matrix on a rolling weekly schedule, and the announcement channels surface each successful build without me having to remember. I plan to keep this running as a foundation — adding the next major Asterisk release should be a template tweak and a YAML entry, not a rewrite.

The repository is at github.com/andrius/asterisk. Images are on Docker Hub at andrius/asterisk and on GHCR at ghcr.io/andrius/asterisk. Release announcements go to t.me/asterisk_docker.