Anubis now supports non-JS challenges
143 points by edwardloveall
I love Anubis.
The fact that it makes corporate squirm with its cute anime girl rendition is fun, it levels the playing field against AI scrapers, and it’s driven by Xe. What’s not to love, really?
Cloudflare’s solution, if we can call it that, is to require only the most common browsers on the most common OSes without many extensions, changes or customizations.
Xe’s solution is to try to make something that marginalizes the largest number of bots possible without marginalizing odd browsers, odd OSes, the lack of Javascript, whatever.
I truly wish we had more people in the world who care like Xe does. The Internet should be a place where diversity is welcomed, not punished. Thanks :D
I’m so glad this is in.
Writing software against malicious actors is difficult. I know it’s extra effort to find more ways of beating bad bots; writing one is tough, adding a second one is even tougher. And maintaining this kind of software is also an uphill struggle, having to react to countermeasures by the malicious actors.
But without things like this, many browsers would essentially be dead. And I think the web is better if there are browsers without JavaScript around. I really think that using JS where it is unnecessary causes a lot of trouble: both excessive resource usage (which has a real-world impact in computers being retired too soon, wasted resources, etc.) and concentrating power in a few powerful corporate entities such as Google. Yes, the browser as an advanced application platform runtime is a difficult thing, but the browser as a content-navigation and simple-app platform is GREAT; nowadays even very simple browsers such as Lynx are still useful for browsing a lot of sites, and I think it would be a great thing if Lynx could work on anything that isn’t a Google-Maps kind of website.
I kinda expected that some protocol such as https://privacypass.github.io/ (not sure if it’s good or not) would become widespread, but this is a thorny problem.
I’m not sure what the future around this looks like. The web becoming a battlefield to keep the bots out is a bleak future, and I certainly wouldn’t want us to lose anything like privacy in the process. I guess we have survived some crises without major permanent losses to our freedoms and rights, but battling malicious actors frequently has undesirable tradeoffs.
If it helps, here’s the way I’m thinking about it: every time one of these solutions is made (Anubis existing at all, Anubis working without JavaScript, etc.), the problem space is bifurcated. In terms of difficulty, the problem of letting genuine users on niche browsers get past Anubis is a lot smaller than the problem of keeping services online in the first place.
I also don’t want to see the web turn into something that makes us have to log into everything with government ID, and I fear that is the way things are trending in general. I’m just a single barely funded person though. It’s a rough balance to strike.
To be absolutely clear here: I think you and Anubis are on the right side.
Unfortunately, I understand that some stuff might regress in the fight for things I appreciate. And having to click a couple more times in NoScript is something I’m OK with; it’s not like when I have to do so because of unnecessary JS usage. For the moment, Anubis’s JS usage is necessary.
European countries like Germany require an imprint/impressum to be present in the footer of a website. This allows users to contact someone on the team behind a website in case they run into issues. The site generally also has to have a separate page where users can view an extended imprint with other information like a privacy policy or a copyright notice.
Once again, this is my pet peeve: this is an over-interpretation of the Impressumspflicht!
Yes, the law is unclear and only says “service providers” (Diensteanbieter), which is undefined. But most lawyers agree that this refers to commercial entities, not personal websites. Folks will always mention those one or two German influencers who got sued because they didn’t have an impressum on their personal Instagram page. But those influencers legally became commercial entities as soon as they started taking money to promote products on their Instagram page!
Don’t get me wrong, this feature is a good thing, especially for commercial entities wanting to use Anubis in Germany. I just always get upset when people put their home addresses on websites because they think they have to in Germany.
However, this feature is useful for France, which requires you to publish at least the hosting provider of your website, regardless of whether it’s a personal or commercial website. (You might have seen that on French websites with an English translation as “legal notice”.)
Also, unrelated, non-JS challenges are a great addition for somebody like me obsessed with accessibility.
If I run ads on my personal website for the purpose of making money (I don’t, but if I did), would that make me a “commercial entity”? What if I was a comic artist running ads solely to defray hosting costs?
Hm…. I feel like I must be missing something. There’s a meta refresh, and the browser refreshes, and that “solves” the challenge? So why even have a challenge in the first place if every browser can solve it without doing any work? What kind of bot would this protect against?
There are many invisible checks in the challenge flow (this isn’t documented very loudly for obvious reasons):
In the near future some more heuristics will be added:
As a way of figuring out how bad this is in practice, it’s off by default and managed by the threshold system so that if another check returns “this request is less suspicious” and removes weight, the user will be sent the meta refresh challenge instead.
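If it helps to picture the shape of it, here’s a rough conceptual sketch in Python. To be clear: this is illustrative only, not the real implementation; the rule names, weights, and thresholds here are all made up.

# Conceptual sketch only -- not Anubis's actual code.
# Rule names, weights, and thresholds are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    matches: Callable[[dict], bool]  # inspects request metadata
    adjust: int                      # weight added (or removed) on a match

RULES = [
    Rule("known-scraper-ua", lambda req: "bytespider" in req["user_agent"].lower(), +20),
    Rule("plausible-headers", lambda req: "Accept-Language" in req["headers"], -5),
]

def pick_action(req: dict) -> str:
    weight = sum(r.adjust for r in RULES if r.matches(req))
    if weight <= 0:
        return "allow"          # low suspicion: no challenge at all
    if weight < 10:
        return "meta-refresh"   # mild suspicion: the non-JS challenge is enough
    return "proof-of-work"      # high suspicion: make the client do real work

# A scraper-ish UA with no Accept-Language header lands on the heavy challenge.
print(pick_action({"user_agent": "Bytespider/1.0", "headers": {}}))  # -> proof-of-work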
There’s a lot going on in ways that would make for excellent technical writing, but you know how it is.
What kind of bot would this protect against?
To directly answer your question, bots that aren’t browsers.
If I understand correctly, I could have NoScript enabled and pass the challenge, and maybe even a text-mode browser like links would still pass the challenge. But those are all browsers. In contrast, a single curl command wouldn’t get through, nor would a naive requests.get(...) from Python.
According to https://anubis.techaro.lol/docs/design/how-anubis-works, curl and Python’s requests would get through easily, provided they don’t pretend to be Mozilla in the user agent.
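To make that concrete, here’s roughly what the two cases look like from the client side. This is a sketch only: example.com is a placeholder, and whether either request actually gets challenged depends entirely on the policy the site ships.

# Illustrative only; the URL is a placeholder and the outcome depends on the
# site's Anubis policy, not on anything baked into Anubis itself.
import requests

url = "https://example.com/"

# Honest non-browser client: declares itself plainly in the User-Agent.
honest = requests.get(url, headers={"User-Agent": "my-research-crawler/0.1"})

# Browser impersonator: claims to be Mozilla but can't follow the challenge
# flow. This is the kind of client the challenges are aimed at.
spoofed = requests.get(url, headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"})

print(honest.status_code, spoofed.status_code)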
For what it’s worth, if you do actually want to block curl, requests, etc you can add a default weight rule like this:
bots:
  - name: base-weight
    action: WEIGH
    expression: "true"
    weight:
      adjust: 5
  # other rules go here
As for why it doesn’t block curl and requests by default, welcome to the land of tradeoffs. If you add such a base weight rule to the mix, then you need to account for the user agents of all the non-browser software that should still be allowed to use the service, such as the git client, curl, wget, etc. This is a pain, and I have yet to complete a set of rules / establish guidance on how to do this. Most of the time administrators don’t have a complete list of every non-browser tool that should be allowed to talk to a web service. This is especially true with git forges and open source communities.
And, as bitshift said, they’re not pretending to be a browser. If you don’t pretend to be a browser and respect the social contract, you should be fine. Of course it’s all up to the administrator to configure this the way they want.
If you feel especially vindictive, you can also block non-Mozilla requests from getting HTML responses at the application level. There’s plenty of ways to skin this particular cat.
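A minimal sketch of that application-level route, assuming a Flask app (Flask is just an arbitrary example here; the same check fits any middleware layer):

# Sketch: refuse HTML to clients that don't claim to be a browser.
# Flask is used only as an example framework.
from flask import Flask, abort, request

app = Flask(__name__)

@app.before_request
def html_only_for_browsers():
    ua = request.headers.get("User-Agent", "")
    wants_html = "text/html" in request.headers.get("Accept", "")
    if wants_html and not ua.startswith("Mozilla/"):
        abort(406)  # or return a plain-text/JSON representation instead

@app.get("/")
def index():
    return "<h1>hello</h1>"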
tbh if something is honest in the user agent header then it’s rather trivial to block it anyway
Honestly, most bots can be defeated by literally anything they don’t expect to need to do. None of the Anubis challenges are too hard for a bot that was built to solve them. What blocks them is that they expect to just get the webpage, and they don’t.
the proof of work challenges also burn CPU and introduce delays. Totally insignificant for a real human, but probably costly in aggregate for the AI scrapers.
Maybe. In general, scrapers have near-infinite CPU on the botnets. But one hopes to slow them down a bit.
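A rough back-of-envelope, with every number assumed rather than measured: if the challenge asks for d leading zero hex digits, solving it takes about 16^d hash attempts on average, and the cost only becomes interesting once you multiply it across millions of scraped pages.

# Back-of-envelope only; difficulty, hash rate, and scrape volume are all assumptions.
difficulty = 4                      # assumed: leading zero hex digits
expected_hashes = 16 ** difficulty  # ~65,536 attempts on average per challenge
hashes_per_second = 1_000_000       # assumed native SHA-256 rate per core
pages_scraped = 10_000_000          # assumed scrape volume

cpu_seconds = expected_hashes / hashes_per_second * pages_scraped
print(f"{cpu_seconds:,.0f} CPU-seconds, about {cpu_seconds / 3600:,.0f} CPU-hours")
# -> 655,360 CPU-seconds, about 182 CPU-hours for this made-up scenario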
Interesting. The FSF sent this out on 2025-07-03 (subject line “Our small team vs millions of bots”), mentioning Anubis:
Some web developers have started integrating a program called Anubis to decrease the amount of requests that automated systems send and therefore help the website avoid being DDoSed. The problem is that Anubis makes the website send out a free JavaScript program that acts like malware. A website using Anubis will respond to a request for a webpage with a free JavaScript program and not the page that was requested. If you run the JavaScript program sent through Anubis, it will do some useless computations on random numbers and keep one CPU entirely busy. It could take less than a second or over a minute. When it is done, it sends the computation results back to the website. The website will verify that the useless computation was done by looking at the results and only then give access to the originally requested page.
At the FSF, we do not support this scheme because it conflicts with the principles of software freedom. The Anubis JavaScript program’s calculations are the same kind of calculations done by crypto-currency mining programs. A program which does calculations that a user does not want done is a form of malware. Proprietary software is often malware, and people often run it not because they want to, but because they have been pressured into it. If we made our website use Anubis, we would be pressuring users into running malware. Even though it is free software, it is part of a scheme that is far too similar to proprietary software to be acceptable. We want users to control their own computing and to have autonomy, independence, and freedom. With your support, we can continue to put these principles into practice.
I wonder, was this perhaps the motivation for non-JS challenges? Or just a coincidence? Given the FSF’s reasoning — proof-of-work is a form of malware — and the following from Anubis:
In v1.20.0, Anubis has a challenge registry to hold different client challenge implementations. This allows us to implement anything we want as long as it can render a page to show a challenge and then check if the result is correct. This is going to be used to implement a WebAssembly-based proof of work option (one that will be way more efficient than the existing browser JS version), but as a proof of concept I implemented a simple challenge using an HTML meta refresh.
I suspect that the FSF would still not find this acceptable, since the intent is just to change how the proof-of-work is implemented (JS vs WASM).
(edit: off-topic comment: props to lobste.rs for being the first markdown formatter I’ve used to properly format triple hyphens (---) as emdashes (—))
I wish they opted for a user-side client option. Just show a value and the number of leading zeros expected, and expect users to have a program like this: https://github.com/oriansj-som/proof_of_work
(it is on my work State of Michigan account for a reason)
which produces the answer, which they can just paste into an entry form and submit.
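For the curious, the general idea (given a seed, find a nonce whose SHA-256 starts with n zero hex digits) fits in a few lines. This is a generic sketch of that idea, not the linked tool and not Anubis’s actual wire format.

# Generic sketch of a "find a nonce with n leading zero hex digits" solver.
# Not the linked tool and not Anubis's exact challenge format.
import hashlib
import sys

def solve(seed: str, zeros: int) -> int:
    target = "0" * zeros
    nonce = 0
    while True:
        if hashlib.sha256(f"{seed}{nonce}".encode()).hexdigest().startswith(target):
            return nonce
        nonce += 1

if __name__ == "__main__":
    seed, zeros = sys.argv[1], int(sys.argv[2])
    print(solve(seed, zeros))  # paste this nonce into the entry form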
This makes Anubis look like malware. You don’t want vendors to think Anubis is malware.
So it would look like malware to ask people use software provided by the State of Michigan’s Department of Technology?
at this point in history? yes, absolutely. (also there’s no way to tell that’s your work account. also it looks kinda empty, in a suss “i wouldn’t trust it” way.)
Fair enough, the US government really burned a great deal of trust.
The account has a State of Michigan badge.
And the reason it looks kinda empty is due to how hard it is to get approval to publish any code.
I believe badges that show an account’s affiliation with a GitHub Enterprise org are only visible to logged-in members of that org.
That’s asking people to be extremely tech-literate, in a way that’s very difficult. Even assuming you hosted the download on a https://www.michigan.gov/dtmb address, that’s asking people to download and run a random program…to view a webpage.
That would work for trained employees, but the average citizen? Sure, a lot would be able to do it. But a lot wouldn’t. And some would take this as “it’s ok to download things if a webpage tells me to; the state of Michigan said so”.
Plus, what if someone is at a computer they can’t install or run programs on? Or a phone? What if they don’t know how to copy and paste?
It’s asking people who have disabled JavaScript to be tech-literate, which doesn’t seem as big an ask. Have the challenge solved in JS if enabled, show instructions for how to solve on the CLI if disabled.
exactly, it is an optional fallback for the people tech-literate enough to opt out of JavaScript, and it is just a government standard being optionally supported (not even that complex of one; given a seed, provide a nonce that produces at least n leading zeros)
You have all great points, and I don’t disagree.
I want to point out this bit:
“the average citizen? Sure, a lot would be able to do it.”
I think you’re massively overestimating the number of people who can download, compile, and run some code. Relevant xkcd: https://xkcd.com/2501/