I traced $2 billion in nonprofit grants and 45 states of lobbying records to figure out who's behind the age verification bills
119 points by rjzak
For me it says
Post is awaiting moderator approval.
Flagged
It displays for me. Are you signed in to Reddit?
It displayed for me when this post first came up on lobsters but now it shows the same as GP.
Reddit probably flagging it because it’s uncomfortable for them.
An update at the bottom speculates that actors associated with Meta have been mass-reporting the post.
Here's what concerns me most from a privacy perspective. These bills don't just verify age once. They create a persistent identity layer inside the operating system that applications can query at will.
The commercial age verification vendors who would provide this infrastructure (Yoti, Veriff, Jumio) charge $0.10 to $2.00 per check, require proprietary SDKs, demand API keys tied to commercial accounts, and operate cloud-only with no self-hosted option.
Your age verification data goes to a third-party cloud service. Every time.
I may have missed something just now while reading this, but I was under the impression that part of the reason for doing age verification at the OS level was to avoid repeated cloud-only checks with commercial vendors. If the OS verifies a user's age, either by self-report or with a commercial vendor (only required in NY so far), then the OS only needs to provide the derived age bracket to applications in real time.
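The flow described above can be sketched roughly as follows. This is a hypothetical illustration, not any real OS API: the names `OsAgeService`, `verify_once`, and `query_bracket` are invented for the example. The point is that verification happens once, only a derived bracket is retained, and apps never see a birthdate or trigger a network call.

```python
# Hypothetical sketch (no real OS exposes this API): verify once,
# keep only the derived bracket, answer app queries locally.
from enum import Enum

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN_13_17 = "13_17"
    ADULT_18_PLUS = "18_plus"

class OsAgeService:
    def __init__(self):
        self._bracket = None  # set once at account setup

    def verify_once(self, birth_year: int, current_year: int = 2025) -> None:
        """One-time check (self-report or vendor); only the bracket is kept."""
        age = current_year - birth_year
        if age < 13:
            self._bracket = AgeBracket.UNDER_13
        elif age < 18:
            self._bracket = AgeBracket.TEEN_13_17
        else:
            self._bracket = AgeBracket.ADULT_18_PLUS
        # birth_year is deliberately not stored anywhere

    def query_bracket(self) -> AgeBracket:
        """What an app sees: the bracket only, with no cloud round-trip."""
        if self._bracket is None:
            raise RuntimeError("age not yet verified")
        return self._bracket

os_service = OsAgeService()
os_service.verify_once(birth_year=2010)
print(os_service.query_bracket().value)  # an app receives only "13_17"
```

Under this model the per-check vendor fees and cloud dependency mentioned upthread would apply at most once, at setup, rather than on every application query.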
There's still a slew of issues with any age verification legislation like this. I personally don't ever see myself handing personal information to some random company just so I can use a device I paid for. However, I think there's a world where OS-level age-bracket self-reporting at account creation could give parents an easier time keeping their kids safe and healthy online. I'm afraid that's not the direction these laws are going, and I'd still much rather have parents, not the government, deciding what content is age-appropriate for their children. Still, I want to believe that with enough pushback we could end up with some acceptable middle ground.
The laws in Colorado and California are written in a way that, IMO, makes it clear that someone who cares about privacy was involved in the process (though maybe not someone who understands how computers work outside of the very large OSes). It's not perfect, but those bills in particular seem like a very reasonable solution.
If you want to protect your kids, NextDNS is a wonderful solution for $20/yr: multiple profiles, reporting on which device made which query, and so on.
This smells LLM-written to me, anybody else?
They have a disclaimer
That's even better than a disclaimer: it's a disclosure!
You know how academic papers disclose funding sources, so readers can weigh dubious financiers, clean funding, or a missing disclosure when judging how much to trust the paper?
I think that we should move to a world where we expect essays and blog posts to disclose AI use, if any, and what process they used around it.
AI-using authors can defend their quality control and forestall their critics.
For non-AI authors, I hope the mild insult of having to state "no AI was used" will be outweighed by the status of not using it.
For the 'decline to say' group, I'd like ‘using AI while refusing to disclose it’ to be as uncomfortable and suspicious-looking as possible. Defend your methods if you believe in them!
Finally, for us forumites, it would spare us a lot of difficult this-smells-like-slop, who-do-you-trust debates if posts could simply be tagged with ‘AI used / AI not used / no AI disclosure’.
I generally have a pretty hardcore anti-LLM stance, especially when it comes to things like this where factual accuracy is critical. However, while a lot of the writing was clearly done by Claude, they openly admit how they used generative AI, and most of the facts and sources seem solid to me after a bit of snooping.
I actually think this sort of synthesis work is a really good use of LLMs, as long as a human keeps their hands on the steering wheel, of course. I do wish they'd done the write-up manually; it would feel a bit more trustworthy if it were obvious they had taken the time to write it themselves.
Man, I wish massive companies would stop pushing things we don't want onto the public. It's kind of exhausting. The author suggests contacting the EFF and FSF, which is the right move, but overall it feels like a long, losing battle against the constant encroachment of this stuff on our digital lives and privacy.
Let's say all these laws pass and Microsoft, Apple, Google, and the various Linux distributions all bow down and implement the age verification. What's to stop it from being trivially bypassed on the device itself, forcing the check to report as valid? This just smells of short-sighted nonsense at every level.
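A toy illustration of the point above: if the "verified adult" flag ultimately lives in state the device owner controls, the owner can simply rewrite it. The file format and function names here are invented for the example and stand in for whatever local store a real OS would use.

```python
# Toy illustration, not any real OS mechanism: an age flag stored in
# user-controlled local state can be edited by whoever owns the device.
import json
import os
import tempfile

def write_settings(path: str, bracket: str) -> None:
    """Write the locally stored age bracket (hypothetical format)."""
    with open(path, "w") as f:
        json.dump({"age_bracket": bracket}, f)

def os_reports_bracket(path: str) -> str:
    """Stand-in for an OS API that trusts local state at face value."""
    with open(path) as f:
        return json.load(f)["age_bracket"]

path = os.path.join(tempfile.mkdtemp(), "age.json")
write_settings(path, "13_17")    # what verification originally recorded
write_settings(path, "18_plus")  # the device owner edits the file
print(os_reports_bracket(path))  # prints "18_plus": the check is defeated
```

Defending against this would require remote attestation or hardware-backed signing, which raises its own locked-down-device problems; a purely software flag on a machine you root is not a real barrier.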
I tried digging into the sources in the linked repo and found almost nothing. At least, nothing backing up the pile of very specific and bombastic financial claims. You know, all of the mentioned "grants and lobbying records." Kinda seems like a confident hallucination.
Somehow the people of the 21st century have decided that the meaning of democracy is that so long as a group of "lawmakers" is popularly elected, anything ratified by a majority of them is binding on the people, no matter how utterly insane it is.
I would like to see a place where every law, of every kind, at every level (national to local) is subject to something like First Amendment "strict scrutiny", and any law that fails any part of that test is null and void.
If someplace would like to launch that experiment, I would go and live there.