AI is Breaking Two Vulnerability Cultures
10 points by gnyeki
maybe we should consider a vulnerability exploitable from the moment it's discovered, at the latest? probably we should just try to fix and deploy the patch as fast as possible, regardless of the embargo setup? if i remember correctly, bugtraq led to really fast fixes and precluded large-scale exploitation.
Right now there is a surge of latent bugs being discovered due to improved tooling, but those bugs are finite.
They're finite in the sense that we'll find most of the old bugs. But I'm worried they're infinite in the sense that we'll keep producing them for a long time - especially at both extremes: people vibe coding and people refusing even the AI reviews.
Yes, we will continue to produce bugs, perhaps at a decreased or even an increased rate. But those can be fixed by these non-90-day-disclosure, LLM-utilizing actors before a release is cut and the bug gets pushed to users. The decades-old backlog of latent bugs is what we're struggling with now, but luckily that backlog is finite.
Embargoing works when finding the defect is hard but exploiting (or using a packaged exploit) is cheap - so not revealing it gives an advantage to the defense. Models break that assumption.
Seems like we get back to an equilibrium - models find the newly shallow defects and they get fixed?
I'm curious whether: (a) nation states already know about many of these defects, and models just commoditize the ability to find and exploit them? (b) models are good enough at decompilation that hiding source code becomes performative? (c) models are good enough that not having access to them makes it extremely difficult to ship secure software? (d) model advancements keep finding more defects indefinitely into the future, necessitating that all software (including embedded) be easier to patch?