Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health"
92 points by Helithumper
This sucks, but having followed the blogging for months now, it's clear the curl maintainers are getting overrun by stupid shit. For every one real report (AI assisted or not) there's a stack of ones reporting vulnerabilities in functions that don't even exist.
We have two bounty programs. Mozilla websites/infra on HackerOne and Firefox bounty on bugzilla. Both get slop, but Firefox is doing better and gets fewer slop reports. I think this is because of the higher cost of reporting. Not $$$ cost, but because bugzilla is a bit discouraging and annoying to use.
I suggest people think of their barrier to reporting in the context of slop reports.
As a "way out" for people who need to do scalable reporting, we still have security@. It has an auto-reply for essentially everyone that explains how to file bugs using bugzilla. Most incoming email will only see the auto-reply. But those that deserve an individual response typically get it.
Good on the cURL maintainers for taking the steps they need to take. I do wonder how the security scanning exploit community will survive this, as my experience with a larger API on HackerOne is that most of the reports were garbage before LLMs. They'd find non-dangerous behavior and report it, which we would fix, but the bounty would be low because... the thing they found wasn't dangerous.
The source PR linked in the article: https://github.com/curl/curl/pull/20312
Couldn’t post the PR directly as the article link.
Maybe a related phenomenon: ggml prohibits vibe coded contributions to llama.cpp
via Brian Campbell
I think an important distinction is that this decision by curl isn't a targeted ban: it impacts both LLM spammers as well as good-faith human contributors. I see more parallels to the new tldraw contributions policy, which, due to an influx of low-quality LLM contributions, closed the gates on everyone—ironically in favor of the maintainers using LLMs themselves.
this decision by curl isn't a targeted ban: it impacts both LLM spammers as well as good-faith human contributors.
This is variously called an externality, the tragedy of the commons, or why we can’t have nice things.
It's ironic, but it's a consistent position. If you want to use LLMs but think that they're a difficult tool to use right, or you want to be 100% sure when you are and aren't interacting with one, then closing external contributions (or closing off unapproved LLM usage) makes a lot of sense.
I wonder if some sort of 'tiered' system similar in spirit to Wikipedia's protection policy would have helped prevent something like this.
Specifically, in a similar way that pages that have Extended confirmed protection (Bluelocked) on Wikipedia can only be edited by accounts that have existed for 30 days and have made 500 non-reversed edits, important projects like cURL would not accept bug bounties from new accounts but only those with a track record.
...actually never mind. At the end of the day, on Wikipedia even editors who can edit Bluelocked pages can get banned from the site if they start acting in a bad-faith way. GitHub certainly has no interest in reprimanding users who waste people's time with spurious bug bounties (let alone banning them).
HackerOne is also, as is common now, an AI company:
Secure at scale with humans + AI
So I would be very surprised if they do anything related to this problem.
If curl's problem is this bad, I wonder if malicious companies also have this problem. The Zerodiums/NSO Groups/etc. of the world.
I've started getting AI slop PRs on https://github.com/FreeRADIUS/freeradius-server/
It's clear that the people didn't even bother testing the code. The path forward there is to not just close the PR, but to ban the person involved.
AI slop isn't just garbage, it's abuse.