Storage backends
Keystone supports two storage backends for root filesystems: ZFS and ext4. The choice between them determines the available features, the encryption architecture, and whether the system can support hibernation. This page explains the trade-offs, the device role paradigm that guides the decision, and the configuration options for each backend.
Device role paradigm
Keystone distinguishes between two primary device roles, and the storage backend follows from the role.
Workstations and servers benefit from ZFS. These machines typically remain powered on or are rebooted cleanly. ZFS provides checksummed data integrity, transparent compression, automatic snapshots, send/receive replication, and multi-disk redundancy (mirror, RAIDZ1/2/3). ZFS is the default storage backend in Keystone.
Laptops may prefer ext4 when hibernation is required. ZFS does not safely support hibernation because pool state cached in memory becomes stale if the pool is modified between hibernate and resume. Since laptops often need to hibernate to conserve battery, the ext4 backend provides a simpler path with full hibernation support.
ZFS overview
ZFS is a copy-on-write filesystem that integrates volume management and filesystem operations into a single stack. Keystone uses it as the default backend because of its data integrity guarantees.
Copy-on-write and data integrity
Traditional filesystems modify data in place. If power fails during a write, data can be corrupted. ZFS takes a different approach: new data is written to a fresh location, and the pointer is updated atomically. The write either completes fully or not at all.
Every block in a ZFS pool is checksummed. On every read, ZFS validates the checksum and detects silent data corruption ("bit rot") that traditional filesystems would miss. With redundant configurations (mirror or RAIDZ), ZFS can automatically repair corrupted blocks from good copies.
Pools and datasets
ZFS organizes storage into pools (zpools) and datasets. A pool aggregates one or more physical devices, and datasets are filesystems within that pool. Keystone always names its pool rpool.
The Keystone ZFS layout creates the following dataset hierarchy:
| Dataset | Purpose |
| ------------------------ | -------------------------------------------------------- |
| rpool/credstore | LUKS-encrypted ZFS volume (zvol) storing encryption keys |
| rpool/crypt | Encrypted parent dataset (AES-256-GCM) |
| rpool/crypt/system | Root filesystem (/) |
| rpool/crypt/system/nix | Nix store (/nix), auto-snapshot disabled |
| rpool/crypt/system/var | Variable data (/var) |
When users are configured with ZFS home directories, additional rpool/crypt/home/<username> datasets are created with per-user quotas and compression settings.
Multi-disk configurations
ZFS supports several redundancy modes through the keystone.os.storage.mode option:
| Mode | Minimum disks | Redundancy | Equivalent |
| -------- | ------------- | --------------------------- | -------------------- |
| single | 1 | None | Single disk |
| mirror | 2 | All disks mirror each other | RAID1 |
| stripe | 2 | None (data striped) | RAID0 |
| raidz1 | 3 | Single parity | RAID5 |
| raidz2 | 4 | Double parity | RAID6 |
| raidz3 | 5 | Triple parity | No common equivalent |
Example configuration for a mirrored workstation:
keystone.os.storage = {
type = "zfs";
devices = [
"/dev/disk/by-id/nvme-disk1"
"/dev/disk/by-id/nvme-disk2"
];
mode = "mirror";
};
Compression
ZFS provides transparent compression at the dataset level. Keystone defaults to zstd, which offers a strong balance of compression ratio and speed. The keystone.os.storage.zfs.compression option accepts off, lz4, zstd, gzip, gzip-1, and gzip-9.
LZ4 is the fastest option and often improves overall performance by reducing the amount of data written to disk. Zstd provides better compression ratios with slightly higher CPU usage.
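For a write-heavy machine where throughput matters more than ratio, the default can be overridden with the compression option documented above; a minimal sketch:

```nix
# Prefer speed over compression ratio on a write-heavy system.
keystone.os.storage.zfs.compression = "lz4";
```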
Automatic snapshots and scrubs
When enabled (the default), Keystone configures automatic ZFS snapshots with the following retention policy:
- 8 frequent snapshots (every 15 minutes)
- 24 hourly snapshots
- 7 daily snapshots
- 4 weekly snapshots
- 12 monthly snapshots
Weekly scrubs verify the integrity of all data in the pool. Scrubs read every block and validate its checksum, detecting and reporting any corruption. With redundant configurations, ZFS repairs corruption automatically during scrubs.
These features are controlled by keystone.os.storage.zfs.autoSnapshot and keystone.os.storage.zfs.autoScrub, both defaulting to true.
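As one example, a machine whose datasets are already snapshotted by an external replication job might disable local snapshots while keeping integrity scrubs; a sketch using the options above:

```nix
keystone.os.storage.zfs = {
  autoSnapshot = false; # snapshots handled by external replication
  autoScrub = true;     # keep weekly integrity scrubs (the default)
};
```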
ARC memory usage
ZFS uses an Adaptive Replacement Cache (ARC) in RAM for read caching. The keystone.os.storage.zfs.arcMax option limits the maximum ARC size. When set to null (the default), Keystone uses 4 GB. Systems with large pools or heavy read workloads may benefit from a larger ARC.
keystone.os.storage.zfs = {
arcMax = "8G"; # Allow up to 8 GB for ARC cache
};
As a general guideline, plan for 1 GB of ARC base overhead plus 1 GB per TB of storage.
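Applying that guideline to a hypothetical 6 TB pool gives 1 GB base plus 6 GB, or roughly 7 GB of ARC (the pool size here is an assumption for illustration):

```nix
keystone.os.storage.zfs = {
  # 1 GB base + 1 GB per TB: a 6 TB pool suggests roughly 7 GB of ARC.
  arcMax = "7G";
};
```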
Hibernation incompatibility
ZFS does not support hibernation. When a system hibernates, pool state is preserved in the RAM image written to swap. If the pool were modified between hibernation and resume (for example, by importing the pool on another system), the in-memory state would be inconsistent with the on-disk state, risking data corruption. ZFS explicitly refuses to import pools that may be in an inconsistent state.
Keystone enforces this constraint at the configuration level:
Hibernation requires ext4 storage backend. ZFS cannot support hibernation
because dirty writes after freeze corrupt pools.
The keystone.os.storage.hibernate.enable option is available only with the ext4 backend. For ZFS systems, boot.zfs.allowHibernation is set to false.
ext4 overview
The ext4 backend provides a simpler storage configuration for systems that do not require ZFS features. It uses LUKS2 encryption directly on the root partition without the credstore intermediary.
When to use ext4
The ext4 backend is appropriate in the following scenarios:
- Laptops requiring hibernation. ext4 with a persistent LUKS-encrypted swap partition supports full suspend-to-disk.
- Simple single-disk systems. When snapshots, checksums, and compression are not needed, ext4 reduces complexity.
- Resource-constrained systems. ext4 has lower memory overhead than ZFS, which requires RAM for its ARC cache.
Partition layout
The ext4 backend creates a simpler partition layout:
| Partition | Type | Purpose |
| --------- | --------------------- | ------------------------------ |
| ESP | FAT32 | EFI System Partition (/boot) |
| Root | LUKS2 + ext4 | Root filesystem (/) |
| Swap | Swap (random or LUKS) | Swap space |
When hibernation is disabled, swap uses random encryption (a new key each boot, so swap contents do not persist across reboots). When hibernation is enabled, swap uses a persistent LUKS volume so that the hibernation image survives reboot.
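A non-hibernating ext4 system therefore needs no extra swap configuration: with hibernate.enable left at its default of false, swap is re-keyed on every boot. A minimal sketch (the device path is a placeholder):

```nix
keystone.os.storage = {
  type = "ext4";
  devices = [ "/dev/disk/by-id/nvme-example-ssd" ];
  swap.size = "8G"; # random key each boot; swap contents do not persist
};
```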
Hibernation support
To enable hibernation with the ext4 backend:
keystone.os.storage = {
type = "ext4";
devices = [ "/dev/disk/by-id/nvme-laptop-ssd" ];
swap.size = "32G"; # Must be at least the size of RAM
hibernate.enable = true;
};
This configuration creates a persistent LUKS-encrypted swap partition and sets boot.resumeDevice to /dev/mapper/cryptswap. The resume kernel module is loaded in the initrd to support restoring from the hibernation image.
Limitations
The ext4 backend does not support:
- Multi-disk configurations (only single mode is allowed)
- Filesystem-level snapshots or rollback
- Data checksumming or self-healing
- Transparent compression
- Per-user home directory quotas via filesystem-level controls
Configuration reference
Common options
These options apply to both storage backends:
keystone.os.storage = {
enable = true; # Enable storage management (default: true)
type = "zfs"; # "zfs" or "ext4" (default: "zfs")
devices = [ "..." ]; # Disk devices (by-id paths recommended)
esp.size = "1G"; # EFI System Partition size (default: "1G")
swap.size = "8G"; # Swap size, "0" to disable (default: "8G")
};
ZFS-specific options
keystone.os.storage = {
type = "zfs";
mode = "single"; # single, mirror, stripe, raidz1/2/3
credstore.size = "100M"; # Credstore volume size (default: "100M")
zfs = {
compression = "zstd"; # off, lz4, zstd, gzip, gzip-1, gzip-9
atime = "off"; # "on" or "off" (default: "off")
arcMax = null; # Maximum ARC cache size, e.g. "4G"
autoSnapshot = true; # Enable automatic snapshots
autoScrub = true; # Enable weekly integrity scrubs
};
};
ext4-specific options
keystone.os.storage = {
type = "ext4";
hibernate.enable = false; # Enable hibernation (default: false)
};
ZFS kernel compatibility
ZFS is an out-of-tree kernel module that must be compiled for each Linux kernel version. Because the Linux kernel provides no stable internal ABI, ZFS developers must adapt their code whenever kernel internals change. This creates a lag between new kernel releases and ZFS support for those kernels.
NixOS handles this by marking ZFS kernel modules as "broken" when the kernel version falls outside the supported range. If a configuration references an incompatible kernel, the build fails at evaluation time rather than at boot.
For systems that require newer kernels (for example, for recent GPU support), verify that the target kernel version is within the supported range of the OpenZFS version in nixpkgs. The linuxPackages_latest kernel package is usually ZFS-compatible, but this should be confirmed before upgrading.
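On NixOS, one way to make this check automatic is to derive the kernel from the ZFS package instead of pinning linuxPackages_latest. An attribute along the following lines has existed in nixpkgs, though it has been deprecated in some releases, so verify it is present in your nixpkgs revision before relying on it:

```nix
# Select the newest kernel the packaged OpenZFS build supports.
# latestCompatibleLinuxPackages is deprecated/removed in some nixpkgs
# releases; confirm availability for your pinned revision.
boot.kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages;
```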
See also
- Disk encryption for details on how encryption differs between the two backends
- Architecture for the device role paradigm and module system overview
- Getting started for selecting a storage backend during initial deployment