Remote Building and Caching
Overview
With multiple NixOS machines (workstations, laptops, NAS, VPS), each rebuilds many of the same derivations independently. A kernel update on one machine means every other machine rebuilds the same kernel from source. The goal: build once, share everywhere.
Approaches Considered
Harmonia (current)
Harmonia serves the local Nix store as an HTTP binary cache. Any machine on the network can pull cached store paths from the Harmonia host.
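For reference, pointing a client at a Harmonia host is plain substituter configuration. A minimal sketch, assuming a hypothetical host `nas.lan` serving on Harmonia's default port and a placeholder signing key:

```nix
# Hypothetical Harmonia client config — host name and key are placeholders
nix.settings = {
  substituters = [ "http://nas.lan:5000" ];
  trusted-public-keys = [ "nas.lan:AAAA...=" ];
};
```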
Limitations:
- Only caches what was already built on the Harmonia host — it doesn't build anything itself
- Doesn't solve "who builds first" — if the NAS hasn't built a derivation yet, other machines can't pull it
- Single point of availability — the Harmonia host must be online and reachable
- Cache is ephemeral — tied to the host's local store, lost if the machine is rebuilt or garbage collected
Nix Remote Builders (`nix.buildMachines`)
Nix can offload builds to a remote machine via SSH. The local daemon sends the derivation to the builder, which compiles it and sends back the result.
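For context, a remote builder is declared with the `nix.buildMachines` NixOS option, roughly as below. This is a sketch; the host name, user, and key path are placeholders:

```nix
# Hypothetical remote-builder declaration — names and paths are placeholders
nix.distributedBuilds = true;
nix.buildMachines = [{
  hostName = "builder.lan";
  sshUser = "nixbuilder";
  sshKey = "/root/.ssh/id_builder"; # must be passphrase-free
  system = "x86_64-linux";
  maxJobs = 8;
  supportedFeatures = [ "big-parallel" ];
}];
```

The shortcomings below follow directly from this shape: the key is unencrypted on disk, and the daemon connects over SSH as a trusted user.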
Shortcomings:
- Root SSH access required — the Nix daemon runs as root and connects to the builder over SSH. This means either root-to-root SSH or a dedicated builder user added to `nix.settings.trusted-users`
- Passphrase-free SSH keys — the Nix daemon can't use `ssh-agent`, so builder SSH keys must be unencrypted. This conflicts with the goal of requiring hardware key authentication for root access
- Tight coupling — if the builder is offline, builds fail. The local machine can't fall back to building locally without manual intervention
- Results stay on the builder — built paths land in the builder's store. Other machines still need a separate binary cache to access them, which brings us back to square one
- Complex key management — each builder needs its own signing key pair, and every client must trust every builder's public key
Attic (chosen)
Attic is a multi-tenant Nix binary cache backed by S3-compatible object storage. Builders push to Attic after building locally; all other machines pull from it as a substituter.
Advantages:
- No machine-to-machine SSH — builders push over HTTPS using API tokens, not SSH keys
- Any machine can be a builder — workstation, laptop, CI runner. Anything that builds and runs `attic watch-store` automatically contributes to the cache
- Durable storage — the S3 backend means the cache survives host rebuilds, garbage collection, and hardware failures
- `attic watch-store` — runs as a systemd service, watches the local Nix store for new paths, and pushes them in the background. Non-blocking, no post-build hooks needed
- Managed signing — the Attic server signs NARs on retrieval. Individual builders never need signing keys, simplifying key management to a single server key
- Multi-tenant — supports multiple caches (e.g., `nixos-config`, project-specific) with independent access controls
Architecture
```
            ┌──────────────┐
            │ Attic Server │
            │   (atticd)   │
            │              │
            │  PostgreSQL  │
            │  + S3 Store  │
            └──────┬───────┘
                   │ HTTPS
      ┌────────────┼────────────┐
      │            │            │
┌─────┴─────┐  ┌───┴────┐ ┌─────┴─────┐
│Workstation│  │ Laptop │ │    VPS    │
│           │  │        │ │           │
│  builds   │  │ builds │ │   pulls   │
│ + pushes  │  │+ pushes│ │   only    │
└───────────┘  └────────┘ └───────────┘
```
Builder machines: `attic watch-store` → pushes to Attic
All machines: `nix.settings.substituters` ← pulls from Attic

Attic server placement options:

- NAS (self-hosted) — `atticd` with PostgreSQL, using local storage or S3 (e.g., MinIO)
- Cloud-hosted — `atticd` on a VPS with an S3-compatible backend (Cloudflare R2, Tigris, AWS S3)
- Hybrid — server anywhere, S3 storage elsewhere. The server is stateless aside from PostgreSQL metadata
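As a rough sketch of what the hybrid option involves, nixpkgs ships a `services.atticd` module whose `settings` attrset maps to the server's TOML config. The values below are placeholders, and option names (e.g., `environmentFile`) vary across nixpkgs versions, so verify against your channel:

```nix
# Hypothetical self-hosted atticd — endpoints and credentials are placeholders
services.atticd = {
  enable = true;
  # Contains ATTIC_SERVER_TOKEN_RS256_SECRET_BASE64=<base64-key>
  environmentFile = "/run/secrets/attic-env";
  settings = {
    listen = "[::]:8080";
    database.url = "postgresql://atticd@localhost/atticd";
    storage = {
      type = "s3";
      region = "auto";
      bucket = "nix-cache";
      endpoint = "https://<account-id>.r2.cloudflarestorage.com";
    };
  };
};
```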
Setup Guide
Server (Keystone module)
Enable the Attic server via keystone:
```nix
keystone.server.services.attic.enable = true;

# Token signing key (agenix secret)
age.secrets.attic-server-token-key = {
  file = "${inputs.agenix-secrets}/secrets/attic-server-token-key.age";
};
```

The `attic-server-token-key` secret must contain `ATTIC_SERVER_TOKEN_RS256_SECRET_BASE64=<base64-key>`.
Post-Deployment: Manual Cache Creation
After the first deploy, the Attic server runs but has no cache. You must create one manually:
1. Generate an admin token on the server host:

   ```shell
   sudo atticd-atticadm make-token --sub "admin" --validity "10y" \
     --push "*" --pull "*" --create-cache "*" --delete-cache "*" \
     --configure-cache "*" --configure-cache-retention "*"
   ```

2. Login and create the cache:

   ```shell
   attic login cache https://cache.example.com <admin-token>
   attic cache create cache:main
   ```

3. Retrieve the public signing key:

   ```shell
   curl -s https://cache.example.com/main/nix-cache-info
   ```

   The `StoreDir` and `WantMassQuery` fields confirm the cache works. The signing public key is shown in the output.

4. Set the public key in your NixOS config so all machines trust the cache:

   ```nix
   keystone.binaryCache = {
     enable = true;
     url = "https://cache.example.com";
     publicKey = "main:AAAA...="; # from nix-cache-info
   };
   ```

5. Generate push tokens for builder machines. Store these as agenix secrets (`attic-push-token`).
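Scoping builder tokens to a single cache limits the blast radius of a leaked token. A sketch, with an arbitrary subject name and validity:

```shell
# On the server: mint a token limited to push/pull on the "main" cache
sudo atticd-atticadm make-token --sub "workstation" --validity "1y" \
  --pull "main" --push "main"
```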
Client (all machines)
1. Login to the Attic server:

   ```shell
   attic login server https://attic.example.com <token>
   ```

2. Add the cache as a substituter:

   ```shell
   attic use server:nixos-config
   ```

   This modifies `~/.config/nix/nix.conf` to add the cache URL and public key. For NixOS system-level configuration:

   ```nix
   nix.settings = {
     substituters = [ "https://attic.example.com/nixos-config" ];
     trusted-public-keys = [ "nixos-config:AAAA...=" ];
   };
   ```
Designating a Builder
Any machine that runs `attic watch-store` automatically pushes everything it builds:

```shell
attic watch-store server:nixos-config
```

As a systemd service (recommended):
```nix
systemd.services.attic-watch-store = {
  description = "Attic watch-store";
  after = [ "network-online.target" ];
  wants = [ "network-online.target" ];
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    ExecStart = "${pkgs.attic-client}/bin/attic watch-store server:nixos-config";
    Restart = "on-failure";
    RestartSec = 10;
    # Credentials loaded from environment file or agenix
  };
};
```

Why watch-store over post-build-hook
Nix supports a `post-build-hook` that runs a script after each build. While this can push to a cache, it has drawbacks:
- Blocking — the hook runs synchronously, delaying the next build
- Fragile — if the push fails (network issue), the build is still marked as succeeded but the cache is incomplete
- Per-build overhead — each derivation triggers a separate push, no batching
`attic watch-store` monitors the store via inotify and pushes asynchronously in the background. Builds are never blocked, and transient failures are retried automatically.
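For comparison, the `post-build-hook` approach looks roughly like this (a sketch; the script path is arbitrary). Every build blocks on the push before Nix reports success:

```shell
#!/bin/sh
# /etc/nix/upload-to-cache.sh — referenced by the post-build-hook setting
# in nix.conf. Runs synchronously after every build; the Nix daemon sets
# $OUT_PATHS to the space-separated list of freshly built store paths.
set -eu
exec attic push server:nixos-config $OUT_PATHS
```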
Migration from Harmonia
Attic and Harmonia can coexist during transition:
1. Add Attic as a substituter on all machines alongside the existing Harmonia URL
2. Start `attic watch-store` on builder machines — new builds start populating Attic
3. Verify that machines pull from Attic successfully (`nix path-info --store https://attic.example.com/nixos-config /nix/store/...`)
4. Remove Harmonia once the Attic cache is warm and all machines are configured:
   - Disable `keystone.server.binaryCache` on the NAS
   - Remove `keystone.binaryCacheClient` from all hosts
   - Remove the Harmonia signing key from agenix secrets
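A quick way to spot-check that a machine can substitute from Attic is to query the cache for a path it should already hold, such as the current system closure (URL and cache name as configured above):

```shell
# Exits non-zero if the cache does not hold the path
nix path-info --store https://attic.example.com/nixos-config \
  "$(readlink -f /run/current-system)"
```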
The existing `keystone.binaryCacheClient` module pattern (URL + public key) maps directly to how Attic clients are configured, so the migration is straightforward.
Future Work
- `keystone.server.attic` — NixOS module for the Attic server (`atticd` + PostgreSQL + S3 config + token management), following the pattern of `keystone.server.binaryCache`
- `keystone.attic` — client module for cache substituter config + `attic watch-store` systemd service, replacing `keystone.binaryCacheClient`
- CI integration — GitHub Actions runners push to Attic, so CI-built paths are cached for all machines
- Cache garbage collection — Attic supports retention policies to manage S3 storage costs
- Per-project caches — separate caches for different flakes (e.g., `nixos-config`, `keystone`, project-specific)
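Retention is configured server-side. Assuming `atticd`'s garbage-collection settings (names per the Attic documentation; verify against your version), a sketch:

```nix
# Hypothetical retention config via the atticd settings attrset
services.atticd.settings.garbage-collection = {
  interval = "12 hours";
  default-retention-period = "3 months";
};
```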