Dec 17, 2025 · Julian K. Arni

Hardware-Attested Nix Builds

More confidence in the integrity of your Nix artifacts.

We have recently run what might be the first hardware-attested Nix build. Hardware attestation provides cryptographic, independently verifiable evidence that a Nix build was executed as specified. It dramatically reduces the attack surface for integrity attacks on Nix builds, to the point that even full root access to, e.g., garnix or cloud-provider infrastructure is not by itself enough to forge attestations.

In other words, this is a big deal!

What this all means and how we did it is the subject of this blog post. In a following post, we'll also talk about how attestation enables us to have confidential Nix builds.

What is attestation

An attestation is a claim about the state of a software system (sometimes called the target) in the form of evidence for that claim. The process of collecting evidence through observation is called a measurement, and usually this takes the form of hashes of some system state. The process of verifying or assessing the evidence is called verification.

A couple of examples will make this more concrete. GitHub (as well as other CIs) allows workflows access to an OIDC token that identifies the bearer as a particular CI run, for a particular commit in a particular repo. The measurement is the hashing of workflow files, looking up of commits, etc. An external service (e.g. Sigstore) can then generate a signing certificate for any bearer of such a token, and the signing certificate will contain the identity (run/commit/repo) signed by the service. The CI run can now sign a statement saying "this build artifact was produced by such-and-such run/commit/repo", which is the attestation. If you trust that GitHub and the external service did their job correctly and haven't been hacked, and that the CI runner wasn't hacked, you have very good reason to believe that statement. The signature is evidence that the services agree with that statement.
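As a very rough sketch of this flow, the toy code below signs a statement binding an artifact hash to a CI identity and then verifies it. An HMAC key stands in for the signing service's certificate infrastructure; real systems like Sigstore use short-lived certificates and asymmetric keys, and all names and values here are made up.

```python
import hashlib
import hmac
import json

# Stand-in for the signing service's key; real systems (e.g. Sigstore)
# use asymmetric keys plus short-lived certificates.
SERVICE_KEY = b"demo-signing-key"

def sign_statement(identity: dict, artifact: bytes) -> dict:
    """The claim: 'this artifact was produced by this run/commit/repo'."""
    statement = {
        "identity": identity,  # run/commit/repo, as asserted by the CI's OIDC token
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
    }
    payload = json.dumps(statement, sort_keys=True).encode()
    signature = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"statement": statement, "signature": signature}

def verify(attestation: dict) -> bool:
    """Check that the signature matches the statement as signed."""
    payload = json.dumps(attestation["statement"], sort_keys=True).encode()
    expected = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(attestation["signature"], expected)

att = sign_statement(
    {"repo": "example/repo", "commit": "abc123", "run": 42},  # hypothetical identity
    b"artifact bytes",
)
```

Anyone holding the service's verification key can now check the statement without re-running the build; what they cannot check, from this alone, is everything the statement leaves out.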

There's a lot this attestation doesn't specify: what the hypervisor is, what other software is running on the host, what version of the kernel is running in the guest, etc. And other things, such as exactly which versions of dependencies were downloaded during the build, may not be captured in, e.g., lockfiles. All of this is relevant. It would be good to have that information attested as well. But even better is if that information were not relevant at all, because it cannot influence the artifact.

Another even more pertinent example is the signatures associated with Nix store paths. The assertion in this case is that a particular output was the result of building a particular input (derivation), and the measurement is the hash of the contents of the output. Whether you believe that assertion may well depend on who is making it (i.e. who signed it). Nix takes care of locking all resolved dependencies, but all the other facts about the build are still relevant, and still missing.
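Concretely, what a Nix binary cache signs for a store path is a "fingerprint" string combining the path, the output hash, the NAR size, and the references. The sketch below constructs that string; the format follows my reading of Nix's fingerprintPath and may differ across versions, and the paths and hashes are made up.

```python
def fingerprint(store_path: str, nar_hash: str, nar_size: int, refs: list[str]) -> str:
    """The string a Nix cache key signs for a store path (hedged: check
    your Nix version's fingerprintPath for the authoritative format)."""
    return ";".join(["1", store_path, nar_hash, str(nar_size), ",".join(refs)])

fp = fingerprint(
    "/nix/store/aaaa-hello-2.12",    # hypothetical output path
    "sha256:bbbb",                   # hypothetical NAR hash (the measurement)
    12045,                           # hypothetical NAR size in bytes
    ["/nix/store/cccc-glibc-2.38"],  # hypothetical runtime references
)
```

The signature over this string is the entire evidence: it says nothing about the machine, kernel, or sandbox that produced the output.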

The conclusion we want to be able to draw from the evidence we have in attestations is that our built artifact is correct. What "correct" means comes down to the semantics of your programming language. If your interpreter or compiler implements those semantics correctly, and the OS implements its semantics correctly, and nothing else gets in the way, the artifact will be correct. But a lot can get in the way — that's how the attacks get in.

Hardware attestation

Hardware that supports attestation helps limit those interfering factors, and makes it possible to make dependable claims about everything else. In particular, the hardware supports creating isolated virtualized environments that aren't as easily influenced by their host (these are called Trusted Execution Environments, or TEEs). If you create a tiny microVM with barely anything besides a kernel and your program, and it can't be influenced by the host, and you can attest that reliably, without the machinery backing the attestation introducing even more things to be attested, attestation becomes very convincing evidence of the correctness and integrity of your artifacts.

The way hardware manufacturers achieve this is by enabling software to initialize hardware-backed isolation of memory (and of CPUs and registers), combined with hardware-backed measurements of the contents of that memory, which are then signed by a key with hardware-backed assurances that it will not be leaked or used inappropriately. Moreover, even tampering with the system physically is made very difficult. The host process can then load a program into memory, and request that the hardware initialize this isolation and measurement process and begin executing the instructions loaded into memory.

When the program executes, it can then request that data about the memory measurements be signed together with arbitrary data. A verifier can check that the memory measurements correspond to the correct program, either by compiling the program again (if the build is reproducible) or by having been the one to compile it in the first place. This report is thus an attestation that that program created that data. The semantics of the data is clear from the program itself; for example, if a program builds an artifact and then requests that the hash of that artifact be attested, the attestation shows that the artifact was built by that program.
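The report structure can be simulated in miniature, with an HMAC key standing in for the hardware's sealed signing key; real TEEs use asymmetric keys with vendor-issued certificate chains, so everything here is purely illustrative.

```python
import hashlib
import hmac

# Stand-in for a key fused into the device that is never revealed to software.
HARDWARE_KEY = b"stand-in-device-key"

def attestation_report(program: bytes, report_data: bytes) -> dict:
    """Simulates the hardware signing (launch measurement, user data) together."""
    measurement = hashlib.sha384(program).hexdigest()  # measurement of loaded memory
    body = measurement.encode() + report_data
    return {
        "measurement": measurement,
        "report_data": report_data,  # arbitrary caller-supplied bytes, e.g. a hash
        "signature": hmac.new(HARDWARE_KEY, body, hashlib.sha384).hexdigest(),
    }

# The verifier recomputes the expected measurement from a program it trusts
# (e.g. by rebuilding it reproducibly) and compares.
program = b"tiny build program"
report = attestation_report(program, hashlib.sha256(b"built artifact").digest())
```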

Attestations don't need to be about artifacts (unless by "artifact" we mean something very general, like "anything a program produces", including HTTP requests, stdout lines, etc.). In an important class of attestations, the goal is instead to prove that a system you are communicating with is running some specific software. A big driver for developing hardware attestation was so cloud providers could convincingly prove to their clients that they were running the software they had been asked to run. This type of attestation is sometimes called remote attestation, and is distinguished from artifact-based software attestation. (The terms are terrible.)

Nix

Nix already does an excellent job at capturing the explicit build inputs of an artifact, and limiting network access. Hardware attestation can make this much more dependable by limiting the influence of everything else.

One way we could do hardware-attested Nix builds is to simply start a remote builder in a hardware-attested VM, and then use the hash of the tuple (input hash, output hash) as the extra data given to the hardware attestation. This is better than existing remote builders, and is quite easy to implement (it's the first thing we did). But there is still a lot that can go wrong: one build might escape its sandbox and influence another, or the Nix daemon itself can be exploited.
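The extra data could be derived along these lines; the hash choice and encoding are illustrative, with length-prefixing to keep the binding between input and output unambiguous.

```python
import hashlib

def report_data(input_hash: str, output_hash: str) -> bytes:
    """Digest of the (input hash, output hash) tuple, suitable for the
    report's user-data field. Length-prefixing each component prevents
    ambiguity, e.g. ('a', 'b') hashing the same as ('ab', '')."""
    payload = b"".join(
        len(h).to_bytes(4, "big") + h
        for h in (input_hash.encode(), output_hash.encode())
    )
    return hashlib.sha256(payload).digest()
```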

Much better is to replace the sandbox that Nix builds run in with a hardware-attested, very minimal VM. In the sandbox, hardly anything is needed besides the builder script of the derivation itself and its dependencies. In fact, you don't need Nix, and, for the most part, you don't even need network access (the exception is fixed-output derivations). We can then attest the tuple (input derivation, Merkle-tree hash of dependencies, output hash) together with the VM boot measurements. (Note that we don't verify the attestation reports of the dependencies inside the VM; we leave that to the end user, since that allows for varying trust policies, which is important if, for example, some system is compromised.)
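A sketch of how such a tuple might be computed, with a toy Merkle tree over the dependency hashes; the actual tree layout and encoding used in our system are not specified here, so treat this as illustrative only.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Merkle root over dependency hashes (toy layout; real trees often
    domain-separate leaf and interior nodes)."""
    if not leaves:
        return h(b"")
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def attested_tuple(drv_hash: bytes, dep_hashes: list[bytes], out_hash: bytes) -> bytes:
    """Digest of (input derivation, Merkle root of dependencies, output hash)."""
    return h(drv_hash + merkle_root(dep_hashes) + out_hash)
```

The Merkle structure lets a verifier later check or reveal individual dependencies without needing the whole set at once.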

This is an incredibly powerful thing. If hardware attestation lived up to its full promise, the attestation report would be completely trustworthy proof that the right thing was built, given the VM (e.g., the Linux kernel, bootloader, and firmware versions) and trust in the hardware manufacturer's manufacturing process and key management. Anyone could accept this proof and be safe. (We also have a simple patch to Nix to make signature verification of substitutions more configurable, so that Nix itself could start accepting such evidence, making it trivial to require hardware-attested builds on your server or machine. Upvote the issue!)

Of course, that's too good to be true. There have been attacks on such hardware, and some are still exploitable. In fact, it is generally a bad idea to think of hardware protections as absolute; it is better to see them as deterrents, much in the same way that you would think of a bank vault, and, like a bank vault, to consider whether the cost of the attack is lower than the benefit. Significantly, the extant attacks rely on having physical access to the machine. This is a dramatically different threat model from software attacks: it's much harder in most cases to gain physical access to a machine, and much more likely to result in being caught. If you are using a cloud provider and your worry is nation-state threat actors, or the cloud provider itself, however, you should certainly be worried about this. You can, of course, combine hardware attestation with on-prem servers (and a couple of cameras to watch over them). Although remote attestation is usually described as securing yourself against your cloud provider, I think it makes a lot of sense with servers you manage yourself.

Because hardware attacks are possible, one should not trust every build that has the correct measurements and is signed by hardware keys from Intel or AMD. Having physical access to a computer can, with certain attacks, allow one to sign spurious measurements, so an attacker would only need physical access to one such computer. Instead, you should only accept signatures from machines under the physical control of entities you trust.
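Such a verifier policy might look like this in outline; all field names and key IDs are hypothetical.

```python
# Machines whose physical custody we trust, identified by their attestation
# key IDs (hypothetical values).
TRUSTED_MACHINES = {"machine-key-1", "machine-key-2"}

def accept(report: dict, expected_measurement: str) -> bool:
    """A correct measurement and a valid vendor signature are necessary
    but not sufficient: the signing machine must also be one we trust."""
    return (
        report["measurement"] == expected_measurement
        and report["vendor_signature_valid"]
        and report["machine_key_id"] in TRUSTED_MACHINES
    )
```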

Because attestation reports contain information about so much that can influence a build, they also make it easy to assess the impacts of compromises and to effectively recover from them. If a particular version of Linux or the firmware is the problem, or if a particular machine was physically accessed, you know exactly what builds they were responsible for.

Conclusion

There's of course a lot about attestation that we didn't cover or simplified substantially. For hardware, details vary considerably between manufacturers, and between generations; it's useful to pick one manufacturer and learn more about it.

Hardware attestation can be combined with independent rebuilds to increase confidence even more in the result. In particular, rebuilding every derivation twice, in different cloud providers (or once on-prem), and on AMD and Intel, is probably a very good idea, reducing the impact of both manufacturer-specific attacks and of cloud provider compromises.
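A policy along these lines can be checked mechanically, for instance by requiring agreement between two builds that differ in both cloud provider and CPU vendor. This is a sketch with made-up field names.

```python
def independent_agreement(reports: list[dict]) -> bool:
    """True if some pair of builds agrees on the output hash while
    differing in both provider and CPU vendor, so a compromise of any
    single provider or vendor cannot account for the agreement."""
    for i, a in enumerate(reports):
        for b in reports[i + 1:]:
            if (a["output"] == b["output"]
                    and a["provider"] != b["provider"]
                    and a["vendor"] != b["vendor"]):
                return True
    return False
```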

One thing I didn't talk about is how evidence can be chained to support broader conclusions. For example, one could run an attested Nix evaluator that gives you as output only the drv file corresponding to a particular Nix file (or output of the flake file). This can be combined with attestations about builds to attest file-to-build-output mappings. These combinations can become complex; frameworks such as in-toto can help.
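At its simplest, chaining is just composing the two attested mappings; real frameworks like in-toto wrap this composition in signatures and layout policies, so this is only a sketch.

```python
def chain(eval_attestations: dict, build_attestations: dict, flake_hash: str):
    """Compose an attested flake->drv mapping (from the evaluator) with an
    attested drv->output mapping (from the builder) into flake->output."""
    drv = eval_attestations.get(flake_hash)
    if drv is None:
        return None
    return build_attestations.get(drv)
```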

In the next blog post, we'll talk about how we can have a similarly high level of assurance that no program besides the intended one can read the inputs or outputs of a build — how we can have, in short, confidential builds.

There are some issues with per-build sandboxes that we still have to work out (for instance, that they take a long time to start up), and setting this system up still requires a lot of involvement on our part and consideration of the specific use case. Additionally, it's more expensive and slower than non-attested builds. So we don't expect to enable these for all customers in the near future. But if you think your company might benefit from this, do reach out. A big benefit of this approach is that, once the build pipeline is set up, using the attestations is remarkably simple.
