Cybersecurity Looks Like Proof of Work Now
13 points by zk
After thinking about it for a bit, you could frame cybersecurity before as being about spending more on security engineers than the attackers do. So maybe it's not that fundamental a shift.
Came here to post this very idea :) It's not a shift, it's an additional cost, though. You'll still need security engineers, only now, it seems they will need large token budgets to be successful.
If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
You don’t get points for being clever. You win by paying more. It is a system that echoes cryptocurrency’s proof of work system, where success is tied to raw computational work.
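The claim above boils down to a single comparison. A toy sketch of that break-even logic, in Rust (all budgets and numbers here are invented, purely for illustration):

```rust
// Toy model of the "proof of work" framing: a system stays hardened
// only while the defender's exploit-discovery spend (in tokens)
// exceeds what attackers are willing to spend finding exploits.
// The budget figures below are made up for illustration.

fn defender_wins(defender_tokens: u64, attacker_tokens: u64) -> bool {
    // The article's equation reduces to one comparison: outspend or lose.
    defender_tokens > attacker_tokens
}

fn main() {
    let defender_budget: u64 = 2_000_000; // tokens spent discovering and patching
    let attacker_budget: u64 = 1_500_000; // tokens spent discovering exploits
    println!(
        "system hardened: {}",
        defender_wins(defender_budget, attacker_budget)
    );
}
```

No cleverness term appears anywhere in the function, which is exactly the point being made: the winner is decided by the two budgets alone.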
While the "proof of work" parallel is definitely there (i.e. the raw compute of the models themselves running), I personally think the more obvious takeaway is that it's more blatantly "pay to win" than ever before.
I have a big problem with this kind of claim.
Everyone seems to think that only the LLM can fix the LLM problem. But like. It is not that we do not know how to harden nearly everything, it is just that we never tried, because it meant having to come down from the cybersecurity high horse.
We know from the Rust experiment at this point that the way to get hardened systems is to build tools that make them easy to build, and to support and help devs as they deal with it.
But no, instead we will keep pouring a massive amount of money and power into a random pattern machine, because doing otherwise would mean accepting that yelling at devs to do better is not working. And that actually doing "cybersecurity" means becoming a dev team that builds these tools for the whole industry, in the open source.
I am tired man. So tired.
We know from the Rust experiment at this point that the way to get hardened systems is to build tools that make them easy to build, and to support and help devs as they deal with it.
My impression of Rust is that it recognises there is a floor on how easy doing the correct thing can be made, and it flat-out makes doing the wrong thing harder whenever the right thing cannot be made easier. And it doesn't shy away from saying «you should not do this thing that way» whenever applicable.
And generalising that last part too far would also mean that people who can understand what a «logical contradiction» even is would also need to be allowed to push back on inconsistent and unsecurable requirements. Which is contrary to the entire point of Big Tech as an economic sector, apparently.
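A concrete instance of "doing the wrong thing is made harder": the classic iterator-invalidation bug that C lets you ship is a compile error in Rust. The broken version is shown only as a comment, since rustc refuses to build it; the function below is a sketch of the shape the language steers you toward instead:

```rust
// The commented-out version is the "wrong thing": it keeps a reference
// into a Vec while also mutating the Vec, and rustc rejects it at
// compile time (error E0502) instead of letting it become a dangling read.
//
// fn broken() {
//     let mut v = vec![1, 2, 3];
//     let first = &v[0];
//     v.push(4);           // mutable borrow while `first` is still live
//     println!("{first}"); // rejected by the borrow checker
// }

// The "right thing" the language nudges you toward: copy the value out
// (i32 is Copy) so no borrow outlives the read, then mutate freely.
fn safe_first_then_push(v: &mut Vec<i32>) -> i32 {
    let first = v[0]; // plain copy, no lingering borrow
    v.push(4);
    first
}

fn main() {
    let mut v = vec![1, 2, 3];
    let first = safe_first_then_push(&mut v);
    println!("first = {first}, len = {}", v.len());
}
```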
That is missing half the point.
Which is making it easy to do the right thing, not only hard to do the bad thing.
Like. It has a unit test framework, a build tool, it has error messages that make sense. Pattern matching. Etc.
These are all ways to make it easier to do the right thing, which the vast majority of our infrastructure does not have access to.
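A toy sketch of the ergonomics being listed (the `ParseResult` enum and `parse_digit` function are invented here for illustration): exhaustive pattern matching that the compiler enforces, plus a unit test framework that ships with the toolchain, no extra setup needed.

```rust
// Invented example: the compiler forces every case of the enum and of
// the Option to be handled, and `cargo test` runs the tests below with
// zero external tooling.

enum ParseResult {
    Ok(u32),
    Empty,
    Invalid(char),
}

fn parse_digit(s: &str) -> ParseResult {
    // Exhaustive match: forgetting a case is a compile error, not a bug.
    match s.chars().next() {
        None => ParseResult::Empty,
        Some(c) if c.is_ascii_digit() => ParseResult::Ok(c as u32 - '0' as u32),
        Some(c) => ParseResult::Invalid(c),
    }
}

fn main() {
    if let ParseResult::Ok(n) = parse_digit("7") {
        println!("parsed {n}");
    }
}

// The built-in test framework: `cargo test` discovers and runs this.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn handles_all_cases() {
        assert!(matches!(parse_digit("7"), ParseResult::Ok(7)));
        assert!(matches!(parse_digit(""), ParseResult::Empty));
        assert!(matches!(parse_digit("x"), ParseResult::Invalid('x')));
    }
}
```

None of this is exotic to a JS or Python dev, which is the point made below: the win is that a systems language finally has it out of the box.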
My view is, companies are not gonna care about cybersecurity unless it hurts them in the monies.
From what I heard, "High Assurance Systems/Computing" is not exactly a new thing; there were processors doing impressive things, but Intel still won by being faster and cheaper. It keeps looking like this to me: companies prefer cheap and fast over secure, and they keep getting away with it. Only when it starts being painful enough do they actually do something more than pretty words, security theatre, grandiose yet unfounded claims, and (as you say) yelling at devs to "do secure" while adding "just remember to actually do cheap and fast, forget secure".
I may be jaded though.
I think you miss the point. Rust did not come in because companies wanted it. It became an industry thing because devs wanted to use it.
If you make things far easier to use, and make producing working software far easier, for the hobbyist maintainers, who maintain at least 50% of the code running in production, they will adopt it. And they are the source of the vast majority of breachable software running in prod.
And then they will find a way to sneak it in at work, and at that point, surprise, it works.
The problem is that this is not "High Assurance Systems". This is about making the tools easier and faster for devs to use. Once again, look at Rust not as a "memory safe" language but as one with infinitely better ergonomics and tooling than C.
Hmmm, ok, that's an interesting take and thought - and indeed, I missed it; thank you for patiently re-explaining!
It is a pattern you will find if you look at "old devs" testing out Rust. In their "thoughts after trying" write-ups, they regularly report things like
There is more, and these may sound obvious to someone used to coding in JS. But for the greybeards around, these are actually amazingly big improvements. They do not always highlight them as the important part, but they all mention them in their reports. The fact that it is always mentioned is the surprising thing that makes it matter.
TCB minimisation from the «clever» approaches should still be valuable to improve dollars-per-line given a budget, no?
Also, their OSS-is-important argument means low-churn OSS is even more important.
Also, bespoke implementations don't just have a lower payoff per exploit than widely deployed systems; a bespoke system is typically chaotic enough that, from the outside, distinguishing exploits from tripwires is harder than for a widely available system (not necessarily an open one) that the attacker can run on their own in a fully resettable way.