Cryptographic Issues in Matrix’s Rust Library Vodozemac
32 points by dzwdz
Interesting finds, but it would have been an easier read if he wasn't going out of his way to trash talk the Matrix team every other sentence (deserved or not).
edit: the vulnerability also seems much less severe than he's making it out to be.
This seems to be a pattern with this person - both being deliberately inflammatory and overstating the impact of his findings.
Minor part of the article, but:
- #[cfg(fuzzing)] Bypasses MAC and Signature Verification
If you ever accidentally compile vodozemac with the fuzzing Cargo feature flag enabled, you’ve just disabled all security in your client.
is wrong. #[cfg(fuzzing)] is not conditional on enabling a feature (which would be #[cfg(feature = "fuzzing")]); it's conditional on running the code with cargo fuzz. Which is fine.
The Matrix team could have [...] used a Cargo feature flag in Cargo.toml instead of what they did here, but alas.
using a Cargo feature flag would introduce the vulnerability the article claims already exists.
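The distinction can be sketched in a few lines. The function names below are hypothetical, not vodozemac's actual API; this just shows how the two cfg forms differ:

```rust
// `cfg(fuzzing)` is only set when the fuzzer driver (e.g. `cargo fuzz`)
// passes `--cfg fuzzing` via RUSTFLAGS. A downstream crate cannot turn
// this on through Cargo.toml.
#[cfg(fuzzing)]
fn verify_mac(_mac: &[u8]) -> bool {
    true // verification bypassed, but only inside fuzzing builds
}

#[cfg(not(fuzzing))]
fn verify_mac(mac: &[u8]) -> bool {
    // stand-in for real MAC verification
    !mac.is_empty()
}

// By contrast, `#[cfg(feature = "fuzzing")]` would be a Cargo *feature*,
// which any dependent crate could enable in its own Cargo.toml with:
//   vodozemac = { version = "...", features = ["fuzzing"] }
// That is the footgun the article describes -- and the one the actual
// `cfg(fuzzing)` construction avoids.

fn main() {
    assert!(verify_mac(b"\x01"));
    println!("normal build: MAC verification active");
}
```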
While I am quite interested in this type of post, the constant shittalking is rather tiresome. Their blog, they can do as they wish of course, but as a casual observer this seems just excessive.
Having been a target of this kind of shittalking many times is probably why I feel this way.
The disclosure timeline in this post seems incredibly irresponsible. A seven-day timeline is incredibly short - especially for a distributed ecosystem - but then deliberately publishing before the end of the original timeline while giving additional information that was not made available prior is jaw-dropping. Combined with the tone throughout the post ("Matrix permanently lost the privilege of having a 90 day courtesy window. I gave them a week."), it's dropped my already-low opinion of Soatok through the floor - regardless of the actual contents of the vulnerabilities.
but then deliberately publishing before the end of the original timeline while giving additional information that was not made available prior is jaw-dropping.
What additional information? He furnished the Matrix.org team a week prior with the same vulnerabilities he published tonight. The additional things he complained about are, very clearly, labeled as "not vulnerabilities".
If you think he's mistaken and any of these are real vulnerabilities, I'm curious which.
He also moved the timeline up because Matrix insisted they have no security impact for Matrix. If there's no impact, why delay disclosure?
Quoting from his disclosure timeline:
2026-02-17: I respond to Matrix.org with additional PoCs, a patch, and express disagreement with their reasoning
2026-02-17: Public disclosure
PoCs are usually considered important parts of a vulnerability report.
But the PoCs were related to the bug that was already disclosed.
Like, if you're not sure, ask for more information. But definitively stating "there's no practical security impact" waives any complaint about immediate publication.
The PoCs provided in the post were explicitly for the first and second vulnerabilities, which had not been disclosed prior - so I'm not sure what you're on about. In any case, I don't think this conversation (and you editing your message after I responded) is leading in a good direction.
I don't think this conversation (and you editing your message after I responded) is leading in a good direction
Sorry, I didn't see your reply before I edited my reply. We're experiencing race conditions here, not subterfuge.
EDIT: Soatok tells me he submitted PoCs with his initial email. You can see the revision history on the gist, with timestamps: https://gist.github.com/soatok/024f80b8377de4bf9d0cb2d7e57b1eed/revisions
That edit makes it mildly better, but setting an arbitrarily short deadline, unilaterally disclosing, and the inflammatory tone all still combine to produce an incredibly bad look.
from soatok's article:
The last time I reported an issue to Matrix, they insisted on the full 90 days and then did jack shit with that time. They didn’t even notify the developers of other Matrix clients that it was coming.
the last time soatok did security research and responsible disclosure for matrix, they... did nothing. Why should soatok wait 90 days before publishing their findings publicly this time? That's not how that social contract is supposed to work: the other end responding accordingly is necessary if you want independent security researchers to continue to engage with you in good faith and peacefully.
(disclaimer, soatok invited me to lobste.rs, I'm obviously biased in my opinion here)
As someone that has been on both sides of security disclosure, and done active incident response, I don't agree. While I'm not a fan of how the Matrix Foundation has handled these reports, that doesn't change that I don't believe how Soatok has acted is acceptable. I don't really have the energy to argue, though.
To be very clear: When I discovered a vulnerability that affected Synapse, I reported it to them via backchannels. When it had not been fixed in a thirty-day timeline (for a talk I was giving), I tracked down the Synapse lead and checked that continuing with disclosure was OK and got the go-ahead. When I gave the talk I did not throw jabs at Element or Synapse.
Nah, it's fine. The reason to coordinate disclosures is to avoid any kind of ill will on part of the service provider; one doesn't want to be issued legal threats or denied a bug bounty because they failed to follow some arcane preregistered process. The reason to disclose at all is because, in the good old days where we'd just quietly email a security address with a PoC and never hear back, service providers didn't actually fix their issues; disclosure forces the provider to do something about it.
Also, impact doesn't seem that big, frankly. As the blog post explains in the section about impact, the buggy library isn't even the dominant one in the ecosystem; the other library, which is also buggy, is still quite popular. There are multiple threads here on Lobsters pointing out the non-exploitability and harmlessness of some of the reported issues.
Meanwhile, I'm not sure whether you're aware of this, but Lucky 10000: there are hundreds of cryptographers with Soatok's skill and thoughtfulness who are selling this sort of detailed explanation and PoC to criminal conspiracies. I don't know where your floor is, but I'd hope that you can tell the difference between a disclosure timeline which you personally aren't comfortable with and a disclosure timeline which involves profitable criminality.
What happens if you set one of the inputs to zero?
If you can imagine the consequences of this, congratulations, you understand the vodozemac vulnerability.
I don't. I wish Soatok explained the identity element vulnerability more to people unfamiliar with the Matrix protocol. Why exactly do we care about the result of the key exchange not being the identity element?
RFC9160 is cited, but it's just brought in as an example - does Matrix even implement it? Or is the claim that all higher level primitives built on top of X25519 must do this? I don't really know how to interpret that section.
I'm not a cryptographer, maybe I'm missing something obvious.
edit: To explain what I meant with RFC9180 (which I typoed, oops): when he said that "KEMs require contributory material", I assumed the protocol described by the RFC might have some weird attack, where controlling the result of the X25519 key exchange could leak the KEM keys... or something like that.
Earlier in the article, he mentioned that he
provided the Matrix team with a patch [...] which demonstrates 3 different ways to trigger this condition (including one accessible to remote attackers).
which made it sound like there's, well, something for said remote attacker to attack, besides their own communications.
I don't. I wish Soatok explained the identity element vulnerability more to people unfamiliar with the Matrix protocol. Why exactly do we care about the result of the key exchange not being the identity element?
Did you keep reading?
If you multiply anything by zero, you get zero.
If your "shared secret" value is simply zero, your KDF has only public inputs. This means an attacker can read all of your messages even though your app tells you you're using encryption.
EDIT: For full disclosure, Soatok shared the draft with me over Signal earlier this week and I gave him some minor editorial feedback.
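The "multiply anything by zero" point can be seen directly by running the math. Below is a minimal transcription of the X25519 Montgomery ladder from RFC 7748 (Section 5) - for illustration only, this is not vodozemac's code. Feeding it an all-zero public key produces an all-zero shared secret regardless of the other party's secret:

```python
# Minimal X25519 scalar multiplication, transcribed from RFC 7748, Section 5.
# Illustration only -- this is NOT vodozemac's implementation.
P = 2**255 - 19
A24 = 121665

def decode_scalar(k: bytes) -> int:
    b = bytearray(k)
    b[0] &= 248       # clamping: clear the low 3 bits,
    b[31] &= 127      # clear bit 255,
    b[31] |= 64       # and set bit 254
    return int.from_bytes(b, "little")

def x25519(k: bytes, u: bytes) -> bytes:
    n = decode_scalar(k)
    x1 = (int.from_bytes(u, "little") % 2**255) % P   # mask the high bit of u
    x2, z2, x3, z3, swap = 1, 0, x1, 1, 0
    for t in reversed(range(255)):                    # Montgomery ladder
        kt = (n >> t) & 1
        swap ^= kt
        if swap:
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = kt
        a, b = (x2 + z2) % P, (x2 - z2) % P
        aa, bb = a * a % P, b * b % P
        e = (aa - bb) % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        da, cb = d * a % P, c * b % P
        t1, t2 = (da + cb) % P, (da - cb) % P
        x3, z3 = t1 * t1 % P, x1 * t2 * t2 % P
        x2, z2 = aa * bb % P, e * (aa + A24 * e) % P
    if swap:
        x2, z2 = x3, z3
    # With u = 0, z2 ends up 0, and pow(0, P-2, P) == 0: output is all zeros.
    return (x2 * pow(z2, P - 2, P) % P).to_bytes(32, "little")

alice_secret = bytes([0x42]) * 32    # arbitrary secret; decode_scalar clamps it
zero_pk = bytes(32)                  # an all-zero "public key"
shared = x25519(alice_secret, zero_pk)
print(shared == bytes(32))           # True: the "shared secret" is all zeros
```

Since the KDF is then fed a constant, fully public value, anyone observing the handshake can derive the same message keys - which is the scenario the quote above describes.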
Did you keep reading?
<...>
Above, the notion of "anything times zero equals zero" eluded you and now you want to delve into cryptography protocol attacks in depth?
Is this propensity for wanton verbal attacks a common trait of cryptographers or is it just you and soatok?
I can see how you might parse it as a personal attack (the Internet is prone to sarcasm), but I was genuinely confused and trying to communicate authentically.
Who is the attacker here?
If I'm understanding correctly - Alice and Bob perform a key exchange. Bob's public key is zero, so the derived key is zero, and anyone can decrypt the messages between Alice and Bob... and?
How do you get Bob's public key to become zero? I assume there's some authentication in place, so Malcolm (in the Middle) can't swap out Bob's public key to be zero - otherwise they could just as well swap it out for their own public key. Presumably, Bob must have then intentionally sent zero as their public key. But why would Bob do that?
Sure, that would result in the messages between Alice and Bob not being protected - but there's a billion other stupid things Bob could do to leak these messages. This doesn't sound like a vulnerability in itself.
There are protocols that require contributory material, such as the cited RFC9160. I'm not going to pretend to know why they do that, although I was planning to read it after reading olm.md.
So my question here is - how does Bob harm anyone except himself in this scenario?
How do you get Bob's public key to become zero?
You can't; the public keys are signed. So you can't swap the keys out and force non-contributory behavior, which is why Matrix (and Signal, from memory) doesn't do this check.
Signed by who? The homeserver? Matrix.org?
What's stopping the homeserver from discarding the key you uploaded and signing a publication of all zero?
Signed by the user. The server could replace the keys on the server with keys signed by a malicious identity, but that's a general impersonation attack, which you mitigate by verifying user identity out of band (by QR scan or by an out-of-band authentication string comparison, e.g. emoji comparison).
How do you get Bob's public key to become zero?
There's typically an assumption that an attacker can temporarily compromise the public key distribution (e.g. through a government subpoena) and get the server to lie to Alice about what Bob's public keys are supposed to be.
Alternately, you could register with a malicious home server that always resets your published public key to zero.
If the server can publish your public key as zero, surely it could also just publish your public key as a key it generated and therefore has the private key for?
the notion of "anything times zero equals zero" eluded you
what
(editing because I accidentally pressed enter)
In your example, Troy is invited to the group, and is willing to compromise the secrecy of the group. Why wouldn't they just... release the messages themselves? Or publish the result of a legitimate key exchange?
Does this actually benefit them in any way? Does this hamper deniability or something?
Why wouldn't they just... release the messages themselves? Or publish the result of a legitimate key exchange?
I want you to read my post again very carefully.
Software bugs, cosmic rays, etc. happen. Technically (though practically impossibly), their OS RNG could generate an all-zero secret key.
All Eve needs to do is notice it.
Software bugs, cosmic rays, etc. happen. Technically (though practically impossibly), their OS RNG could generate an all-zero secret key.
Making another reply to note that this seems to be what Soatok had in mind, based on his reddit comments.
Yes. I thought I made this clear upstream but: He and I did discuss this write-up before it was public.
The odds of a random X25519 key being zero are roughly 2^-255. The odds of randomly guessing the result of a key exchange are also roughly 2^-255. (Give or take, I'm evidently not good with multiplication.)
The odds of a random X25519 key being zero are roughly 2^-255. The odds of randomly guessing the result of a key exchange are also roughly 2^-255.
Well, it's closer to 2^-252 because of the cofactor but yes. Thus, "Technically (though practically impossibly)".
I was responding more to the cosmic rays part - my assumption was that a random bit flip in the key isn't going to change the distribution of the keys, but I suppose the bit flip could also mess with the code in a serious way. More broadly, I thought that a bug resulting in an all-zero key isn't more likely than a bug resulting in any other "weird" private key, but that obviously isn't correct.
So I guess the impact of this issue is that if you have an utterly broken RNG that gave you an all-zero identity key and an all-zero one-time key, then people can decrypt your messages?
Yeah, and if you join a private group with a nulled key, the instance admin can decrypt even if they're not a member.
Whatever you think of Matrix, you have to respect the purity of Soatok's hate. Clearly someone who just really loves the game. A real poster.
Matrix E2EE is actually really secure. Because every room sooner or later becomes nothing but "can't decrypt message"! Can't leak messages whose keys have been lost to the void.