Hardware-Attested Nix Builds
34 points by jkarni
So cool! I love to see remote attestation used to make 3rd parties more trustworthy instead of the usual use of making my computer less trustworthy.
Can you really do that? Honest question because I feel I really must be missing something.
For sure, I can use attestation to make a device I manufacture harder to impersonate/tamper with by others.
Also, GitHub attestation sounds realistic and useful; they can guarantee that I cannot lie when saying a specific binary was produced by a specific set of inputs. (And reproducible builds help with that.)
But how would I verify that an attested build is legit without trusting some third party or doing reproducible-build verification myself?
I see many useful forms of attestation, but I'm not really seeing how what the article describes provides any benefit for most end users.
You do have to trust a party - the hardware manufacturer. You also have to trust the kernel, and one or two other bits of software.
These things you had to trust in any scenario. But you largely get rid of having to also trust GitHub, and Azure, and the implementors of a ton of software and entities (Nix, the hypervisor, ssh, sudo, everything running on the host, likely systemd, in the Nix case also whoever is storing the signing keys, probably also your coworkers and a lot of things running on your development machine, and who knows what else). The difference is huge.
EDIT: because of the attacks mentioned, you do still have to trust Azure or whoever owns the servers running your build not to physically tamper with them, and not to allow another party to tamper with them. Also, all these claims are just about the integrity of the build itself - obviously a compromise in most of these could affect you in a variety of other ways.
Don't I have to trust you too?
And how would this attestation help against a compromised sudo, for example?
No, you don’t have to trust us (unless new attacks surface that change things). The hardware runs a VM and, in essence, signs with its own protected key the hash of the VM plus whatever extra data the VM chooses. So I tell you “hey, this is the source code of the VM I ran”, you can check and review it, and then build the VM yourself. You should get the same hash that was signed, which is evidence that it really was that program/VM that requested that data be signed (the data in our Nix case is basically the statement “Building derivation X produces an artifact with hash Y”).
Importantly, the hardware protects the VM from interference even from the host. So we can’t mess with the program or forge the signature, even though we are the ones starting the VM (this is quite a different model than traditional Linux where root can more or less always do everything).
This also explains why vulnerabilities in sudo etc are not an issue. Programs on the host can’t influence the guest without also changing the measurement/evidence.
(I had meant sudo on the host. On the guest, vulnerabilities in software you explicitly request aren’t artifact-integrity vulnerabilities - if my build script installs a malicious program that in turn adds a backdoor to the final artifact, the only correct thing to do from an integrity perspective is to build exactly that. That’s what the user requested.)
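To make the verification flow concrete, the check on the user's side is roughly the following (a minimal sketch in Python - the field names, the key object, and the claim format are made up for illustration; real attestation reports carry many more fields):

    # Illustrative sketch of the verifier side; field names and the claim
    # format are hypothetical, not an actual wire format.
    import hashlib
    import json

    def verify_build_claim(report, expected_vm_image, chip_pubkey):
        # 1. The report must be signed by the hardware-protected key.
        #    chip_pubkey is obtained from the AMD/Intel certificate chain
        #    and is assumed to expose verify(signature, data) that raises
        #    on a bad signature.
        signed_body = report["measurement"] + report["report_data"]
        chip_pubkey.verify(report["signature"], signed_body)

        # 2. The measurement must equal the hash of the VM image the user
        #    rebuilt themselves from the published sources.
        if report["measurement"] != hashlib.sha384(expected_vm_image).digest():
            raise ValueError("attested VM does not match the audited sources")

        # 3. Only then do we believe the claim the VM asked to have signed,
        #    e.g. "building derivation X produced an output with hash Y".
        claim = json.loads(report["report_data"])
        return claim["drv_path"], claim["output_hash"]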
But why do you need the "protected key" if verification entails rebuilding? And how do you know you're using hardware attestation?
The rebuilding is only of the VM. You only need that once - every build of every derivation can then use that VM. It’s essentially to verify the runner environment, not the individual builds. It is a bit annoying that you need that, but it’s much easier to verify a single thing once. (It is also possible, I suppose, to bootstrap from a very minimal program short enough to have its hex manually checked; the Bootstrappable Builds project has done this, I think. This path avails itself of the fact that I simplified when I said hardware attestations are about a VM - they’re actually about any sandboxed process.)
You know you are using hardware attestation because you get a signed report, and it’s signed by a key the hardware manufacturer themselves attest has been put into a tamper-proof chip which only signs memory fingerprints it actually sees in memory address spaces that are themselves hardware protected. Basically AMD or Intel sign a statement saying “this is one of our hardware-protected keys”.
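For the curious, checking that chain could look roughly like this (a simplified sketch using Python's cryptography package, assuming every link in the chain is ECDSA-signed; the real AMD/Intel chains mix signature algorithms and need extra field and revocation checks omitted here):

    # Simplified chain-of-trust check. The vendor root is the only thing
    # you pin out of band; everything below it is proven by signatures.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def chip_key_from_chain(root_pem, intermediate_pem, chip_pem):
        root = x509.load_pem_x509_certificate(root_pem)   # pinned vendor root
        inter = x509.load_pem_x509_certificate(intermediate_pem)
        chip = x509.load_pem_x509_certificate(chip_pem)

        # Each certificate must be signed by the key one level up.
        root.public_key().verify(inter.signature, inter.tbs_certificate_bytes,
                                 ec.ECDSA(inter.signature_hash_algorithm))
        inter.public_key().verify(chip.signature, chip.tbs_certificate_bytes,
                                  ec.ECDSA(chip.signature_hash_algorithm))

        # This is the hardware-protected key that signs attestation reports.
        return chip.public_key()

    def verify_report_signature(chip_pubkey, report_body, signature):
        # Raises InvalidSignature unless the report was produced by the chip.
        chip_pubkey.verify(signature, report_body, ec.ECDSA(hashes.SHA384()))

The vendor root certificate is the one thing you have to obtain and pin separately; everything below it, down to the per-chip key that signs the reports, is established by signatures.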
Ahhh! It's the latter part that I was missing!
Maybe you could edit the article to mention that Intel/AMD publish public keys that allow verification. In hindsight it's relatively obvious to me, but I think it's an important detail.
If I understand correctly, GitHub can prove to you that, as long as you trust they aren’t physically messing with the hardware, they haven’t done the much easier job of swapping out the software. Even if GitHub were hacked and the swap happened without their intent, it would still be caught, so you don’t need as much trust in their security either.
This is awesome. Any chance e.g. specific implementation details will be published? Would love to experiment with this on my own.
Yeah - we're still trying different ways of making this more efficient, but as soon as things stabilize I'm happy to write another blog post with details. If you use cloud providers' attestation support it's relatively easy, but that doesn't allow for the much better "VM for only the builder and nothing else" approach unless you either make images and spin up servers only after you know what you want to build (very slow) or copy the closure into machines you preprovision in a pool (still somewhat slow, and you need to enable at least some networking). Having full control of the host on dedicated hardware should be much better on most counts, but so far we've found that when things work and when they don't seems to be a function of a really fragile mix of hardware, kernel, BIOS settings, firmware, and specifics of your image.