Keystone Systems

Architecture

Keystone is a modular NixOS infrastructure platform that provides declarative, reproducible system configurations with hardware-backed security. This page describes the module system, the device role paradigm that guides configuration decisions, and the deployment patterns that compose modules for different use cases.

Module system

Keystone is organized around a set of NixOS and home-manager modules, each responsible for a distinct layer of the system. Modules are imported individually and composed together in a flake to produce a complete system configuration. The operating-system module is the foundation; all other modules are optional and build on top of it.

Operating system module

The operating-system module (keystone.nixosModules.operating-system) is the core of every Keystone deployment. It provides the keystone.os.* options namespace and handles everything below the user session layer:

| Option namespace | Responsibility |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------------------------- |
| keystone.os.storage | Disk partitioning via disko; ZFS or ext4 with LUKS encryption; multi-disk modes (mirror, raidz1/2/3); credstore pattern for key management |
| keystone.os.secureBoot | Lanzaboote Secure Boot with custom key enrollment |
| keystone.os.tpm | TPM2-based automatic disk unlock with PCR binding |
| keystone.os.remoteUnlock | SSH server in initrd for remote disk unlocking on headless machines |
| keystone.os.users | User account creation with ZFS home directories, per-user module enablement, and SSH key management |
| keystone.os.services | Avahi/mDNS, firewall, systemd-resolved |
| keystone.os.nix | Flakes enablement, garbage collection settings |
| keystone.os.ssh | OpenSSH server configuration |

This module bundles disko and lanzaboote as dependencies. There is no need to import them separately.
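The security-related namespaces from the table above compose into a short stanza. This is a sketch only: the `keystone.os.secureBoot` and `keystone.os.tpm` namespaces appear in the options table, but the `enable` sub-option names are assumptions, not confirmed by this page.

```nix
{
  keystone.os = {
    enable = true;
    # Sub-option names below are assumed; only the namespaces are documented above.
    secureBoot.enable = true;  # Lanzaboote with custom key enrollment
    tpm.enable = true;         # TPM2 unlock bound to PCR measurements
  };
}
```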

The OS module source is organized as follows:

modules/os/
  default.nix           # Main orchestrator with keystone.os.* option declarations
  storage.nix           # ZFS/ext4 + LUKS credstore disko configuration
  secure-boot.nix       # Lanzaboote integration
  tpm.nix               # TPM enrollment commands and systemd units
  remote-unlock.nix     # Initrd SSH for remote disk unlocking
  users.nix             # User management with ZFS home directories
  ssh.nix               # SSH server configuration
  scripts/              # Enrollment and provisioning shell scripts

Desktop module

The desktop module is split across two entry points:

  • NixOS module (keystone.nixosModules.desktop) -- Configures system-level desktop infrastructure: the Hyprland compositor with UWSM (Universal Wayland Session Manager), PipeWire audio with ALSA/PulseAudio/JACK compatibility, greetd login manager with tuigreet, NetworkManager with Bluetooth support, and screen recording utilities.
  • Home-manager module (keystone.homeModules.desktop) -- Configures per-user Hyprland settings including keybindings, modifier key mapping, monitor management, Waybar, Mako notifications, and night light.

Desktop functionality is enabled per-user via the keystone.os.users.<name>.desktop.enable option, which allows different users on the same machine to have different desktop configurations. For full details on desktop configuration, see the desktop environment documentation.

Terminal module

The terminal module (keystone.homeModules.terminal) is a home-manager module that provides a complete terminal development environment:

  • Helix text editor with language server support
  • Zsh shell with starship prompt
  • Zellij terminal multiplexer with session persistence
  • Ghostty terminal emulator
  • Git configuration with user credentials

Terminal functionality is enabled per-user via keystone.os.users.<name>.terminal.enable. Because it is a home-manager module, it can also be used standalone outside of a full Keystone NixOS deployment. For full details, see the terminal development environment documentation.
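Because the terminal module is a plain home-manager module, it can be pulled into a standalone home-manager flake on any Linux distribution. A sketch, assuming the standard standalone home-manager flake layout; the username and home directory are placeholders:

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
    home-manager.url = "github:nix-community/home-manager/release-25.05";
    keystone.url = "github:ncrmro/keystone";
  };

  outputs = { nixpkgs, home-manager, keystone, ... }: {
    homeConfigurations.alice = home-manager.lib.homeManagerConfiguration {
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
      modules = [
        keystone.homeModules.terminal
        {
          home.username = "alice";
          home.homeDirectory = "/home/alice";
          home.stateVersion = "25.05";
        }
      ];
    };
  };
}
```

Activated with `home-manager switch --flake .#alice`, this installs the Helix/Zsh/Zellij/Ghostty environment without any Keystone NixOS system configuration.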

Server module

The server module (modules/server/) provides optional infrastructure services for always-on machines:

  • VPN -- Headscale/Tailscale VPN server
  • Monitoring -- Prometheus and Grafana stack
  • Mail -- Mail server (placeholder for future implementation)
  • Observability -- Loki and Alloy log aggregation

Server services are enabled selectively through keystone.server.* options. This module is not required for workstation or laptop deployments.

ISO installer module

The isoInstaller module (keystone.nixosModules.isoInstaller) produces a bootable NixOS installer image preconfigured with SSH access, essential tools (git, parted, ZFS utilities), and DHCP networking. The resulting ISO is used with nixos-anywhere to deploy Keystone configurations to bare-metal or virtual machines. For details on building and using the installer ISO, see the getting started guide.
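One plausible composition wires the isoInstaller module into a `nixosSystem` and builds the image through nixpkgs' standard ISO infrastructure. The flake layout here is an assumption; only the `keystone.nixosModules.isoInstaller` attribute is documented above.

```nix
{
  # Sketch: the installer as a flake output. The isoImage build attribute
  # is nixpkgs' standard ISO mechanism, not a Keystone-specific option.
  nixosConfigurations.installer = nixpkgs.lib.nixosSystem {
    system = "x86_64-linux";
    modules = [ keystone.nixosModules.isoInstaller ];
  };
}
```

Under these assumptions the image is built with `nix build .#nixosConfigurations.installer.config.system.build.isoImage`.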

Device role paradigm

Keystone configurations fall into two broad device roles. The device role determines which modules to import and influences storage, security, and networking decisions.

Servers

Servers are always-on machines that run headless services such as VPN endpoints, DNS, monitoring, storage, and backups. Typical server hardware includes Raspberry Pis, Intel NUCs, rack-mounted systems, and virtual private servers.

A server deployment uses the operating-system module and optionally the server module. It does not include the desktop module. Remote unlock via SSH in initrd is often enabled so that the machine can be rebooted without physical console access. ZFS is the recommended storage backend for servers because it provides checksumming, snapshots, and compression. For more on storage selection, see the storage backends documentation.

Clients

Clients are interactive workstations and laptops with a graphical desktop environment. They use the operating-system module together with the desktop module for Hyprland, and typically enable the terminal module for each interactive user.

Client deployments differ from servers in several respects:

  • Storage backend -- Laptops that require hibernation support may use ext4 with LUKS instead of ZFS, because ZFS does not support hibernation. Workstations that do not need hibernation benefit from ZFS snapshots and compression. See the storage backends documentation for a detailed comparison.
  • Desktop enablement -- Users on client machines set desktop.enable = true and configure modifier keys, monitor layouts, and other Hyprland options per-user.
  • Networking -- Client machines typically use NetworkManager with Bluetooth support, which the desktop module enables automatically.
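For a hibernating laptop, the storage stanza from the deployment patterns changes mainly in its type. A sketch; the ext4 sub-options are assumed to mirror the ZFS ones shown elsewhere on this page:

```nix
keystone.os.storage = {
  type = "ext4";  # ZFS does not support hibernation; ext4 + LUKS does
  devices = [ "/dev/disk/by-id/nvme-disk1" ];
  # Swap must be at least as large as RAM to hold the hibernation image.
  swap.size = "32G";
};
```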

Both device roles share the same security model: TPM2-based automatic disk unlock, LUKS full-disk encryption, and Secure Boot with custom key enrollment. For details on the encryption layer, see the disk encryption documentation.

Deployment patterns

The following patterns illustrate how modules compose to serve different use cases. The first two patterns are shown as complete flake.nix configurations; the later patterns show only the options and modules that differ from them.

Pattern 1: Headless server

A headless server uses only the operating-system module with remote unlock enabled for unattended reboots. No desktop or terminal modules are needed at the NixOS system level, though terminal tools can be enabled per-user for SSH sessions.

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
    keystone.url = "github:ncrmro/keystone";
    home-manager.url = "github:nix-community/home-manager/release-25.05";
  };

  outputs = { nixpkgs, keystone, home-manager, ... }: {
    nixosConfigurations.myserver = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        home-manager.nixosModules.home-manager
        keystone.nixosModules.operating-system
        {
          networking.hostId = "deadbeef";

          keystone.os = {
            enable = true;
            storage = {
              type = "zfs";
              devices = [ "/dev/disk/by-id/nvme-Samsung_SSD_980_PRO_2TB" ];
              swap.size = "16G";
            };
            remoteUnlock = {
              enable = true;
              authorizedKeys = [ "ssh-ed25519 AAAAC3... admin@workstation" ];
            };
            users.admin = {
              fullName = "Server Admin";
              email = "admin@example.com";
              extraGroups = [ "wheel" ];
              authorizedKeys = [ "ssh-ed25519 AAAAC3... admin@workstation" ];
              hashedPassword = "$6$...";
              terminal.enable = true;
            };
          };
        }
      ];
    };
  };
}

Pattern 2: Workstation with desktop

A workstation imports both the operating-system and desktop NixOS modules, plus the terminal and desktop home-manager modules. Each user opts into the desktop and terminal environments individually.

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
    keystone.url = "github:ncrmro/keystone";
    home-manager.url = "github:nix-community/home-manager/release-25.05";
  };

  outputs = { nixpkgs, keystone, home-manager, ... }: {
    nixosConfigurations.workstation = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        home-manager.nixosModules.home-manager
        keystone.nixosModules.operating-system
        keystone.nixosModules.desktop
        {
          home-manager = {
            useGlobalPkgs = true;
            useUserPackages = true;
            sharedModules = [
              keystone.homeModules.terminal
              keystone.homeModules.desktop
            ];
          };

          networking.hostId = "deadbeef";

          keystone.os = {
            enable = true;
            storage = {
              type = "zfs";
              devices = [ "/dev/disk/by-id/nvme-WD_BLACK_SN850X_2TB" ];
              swap.size = "32G";
            };
            users.alice = {
              fullName = "Alice Smith";
              email = "alice@example.com";
              extraGroups = [ "wheel" "networkmanager" ];
              initialPassword = "changeme";
              terminal.enable = true;
              desktop = {
                enable = true;
                hyprland.modifierKey = "SUPER";
              };
              zfs.quota = "500G";
            };
          };
        }
      ];
    };
  };
}

Pattern 3: Multi-service server

A multi-service server extends the headless server pattern with the server module to run infrastructure services such as VPN and monitoring alongside the base operating system.

keystone.os = {
  enable = true;
  storage = {
    type = "zfs";
    devices = [
      "/dev/disk/by-id/nvme-disk1"
      "/dev/disk/by-id/nvme-disk2"
    ];
    mode = "mirror";
  };
  users.admin = {
    fullName = "Server Admin";
    email = "admin@example.com";
    extraGroups = [ "wheel" ];
    hashedPassword = "$6$...";
    authorizedKeys = [ "ssh-ed25519 AAAAC3... admin@workstation" ];
    terminal.enable = true;
  };
};

keystone.server = {
  enable = true;
  vpn.enable = true;
  monitoring.enable = true;
};

Pattern 4: Thin client for remote development

A laptop can serve as a thin client that connects to a remote workstation over a mesh VPN. The laptop runs the terminal module locally and uses Mosh, SSH port forwarding, and Zellij session resumption to develop on the remote machine. This pattern decouples the development environment from the local hardware. For details, see the remote development documentation.
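The client side of this pattern can be sketched as an ~/.ssh/config entry; the host name, user, and forwarded port are all placeholders:

```
Host workstation
  # Resolved over the mesh VPN (hypothetical tailnet name)
  HostName workstation.tailnet.example
  User alice
  # Forward a local port to a dev server running on the workstation
  LocalForward 3000 localhost:3000
```

With an entry like this in place, a command such as `mosh workstation -- zellij attach --create main` reconnects to a persistent Zellij session that survives network roaming and laptop suspend.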

Security model

All Keystone deployment patterns share a layered security model:

  1. LUKS full-disk encryption -- Every storage device is encrypted with LUKS2. On ZFS configurations, a credstore partition holds ZFS encryption keys inside a LUKS volume, creating a chain where LUKS protects ZFS and ZFS protects user data.
  2. TPM2 automatic unlock -- The TPM chip stores LUKS keys bound to specific PCR measurements, enabling automatic unlock when the boot chain has not been tampered with. If the TPM measurement fails, the system falls back to a passphrase prompt.
  3. Secure Boot -- Lanzaboote signs boot components with custom keys enrolled in the UEFI firmware. This ensures that only trusted code executes before the operating system starts.
  4. systemd initrd orchestration -- The boot sequence follows a strict dependency chain: pool import, credstore unlock, key loading, and filesystem mounting. TPM2 PCR measurements verify boot state at each step.

For detailed configuration of the encryption and boot security layers, see the disk encryption and Secure Boot documentation.

See also