What creative technical outlets of yours have been ruined by generative AI?

65 points by addison


I'm watching over my alma mater's capture-the-flag event, and wow, it is dire. This used to be a creative outlet for me and an educational opportunity for students, but now it's almost entirely uncritical use of genAI tools. Challenge developers are now "hardening" their challenges against this, but that makes them totally inaccessible to new players, so the event is losing its educational value and its enjoyability. As a developer, why should I put any creative effort in when I know it's just going to be churned through a machine and not enjoyed by anyone?

Perhaps I just want to commiserate, but I'm also genuinely curious: what outlets of yours have been similarly affected by this new wave?

apropos

I'm somewhat bummed about Lean.

Lean is a programming language / theorem prover that's really beautifully implemented, in both its syntax and its semantics. It has the single best syntax of any language I've ever used, full stop. It's like Haskell, but wayyyy more intuitive, with a bunch of nice changes that all work well together, take full advantage of Unicode, and have fallbacks (at least outside of math libraries) for when Unicode input isn't set up -- it's just so good.
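For anyone who hasn't seen it, a tiny sketch of what I mean (Lean 4 syntax from memory; the Unicode and ASCII spellings parse to exactly the same thing):

```lean
-- Unicode spelling: → for the function arrow, λ for lambdas
def double : Nat → Nat := λ n => n * 2

-- ASCII fallback: -> and `fun` work everywhere Unicode does
def double' : Nat -> Nat := fun n => n * 2

-- The two definitions are definitionally equal
example : double = double' := rfl
```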

The language semantics are excellent, too. Lean has dependent types, so it has first-class support for being super-expressive in a way that Haskell and other languages have had to retrofit GADTs and the like to approximate. Since the whole language was designed around this level of expressiveness, it all slots together so nicely -- e.g. there's a type class hierarchy like Haskell's, but unlike Haskell, you've got both Monad and a LawfulMonad that requires you to prove the monad laws hold for any instance of the class, which can be done through Lean's expressive type system and all its theorem-proving infrastructure (i.e. tactics). And it's also got nice do-notation, and syntax-rewriting macros, and insanely good tooling, and so on.
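To illustrate the Monad / LawfulMonad split, a toy sketch (my own made-up `Box` type, an identity-like monad; Lean 4 from memory):

```lean
-- A trivial wrapper type
structure Box (α : Type) where
  val : α

-- Monad asks only for the operations; Lean will happily
-- accept an unlawful instance here
instance : Monad Box where
  pure a := ⟨a⟩
  bind b f := f b.val

-- The laws are separate, provable statements; LawfulMonad is the
-- class that bundles proofs like these for a given instance
example (a : α) (f : α → Box β) : (pure a >>= f) = f a := rfl
example (b : Box α) : (b >>= pure) = b := rfl
```

For this toy type both laws hold definitionally, so `rfl` closes them; for real monads you'd reach for the tactic infrastructure.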

I like this language.

Six months ago, LLMs got good enough to write serious code. One effect of this is that a lot of people who don't know how to code -- cranks, specifically -- took to generative AI after hearing that Lean is, like, a programming language, but one that also lets you prove things. A bunch of spam and very annoying posts on twitter dot com ensued -- turns out, while writing code that compiles is easy, writing code that is correct is very hard. However, this also got Lean onto the radar of AI folks.

One month ago, LLMs got good enough to write math proofs. This is a seriously compelling use of generative AI -- mathematical formalism and guarantees are useful in a wide variety of applications, from mathematics itself to proving useful properties about programs. And theorem proving can be tedious, boring work. It would be very nice to just write specifications and have the proofs fill themselves in... this is the ultimate goal of tactic metaprogramming and automation, and LLMs can currently (non-deterministically) do it. Which is kinda crazy cool, to be fair. (It's taken a shockingly long time for this to become possible -- I would have expected LLMs to be useful as better-than-BFS proof search right away, but it turns out they needed specific post-training, and enough of it, for this to only happen now.)
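To make the automation point concrete, here's the same fact proved by hand and then by a decision procedure (Lean 4; lemma names from memory):

```lean
-- By hand: explicit induction and rewriting
example (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => rw [Nat.add_zero, Nat.zero_add]
  | succ n ih => rw [Nat.add_succ, Nat.succ_add, ih]

-- Automated: `omega` decides linear arithmetic over Nat/Int in one
-- step; LLM-driven proof search aims at this experience for goals
-- far beyond what any decision procedure covers
example (a b : Nat) : a + b = b + a := by omega
```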

One month ago, LLMs also got good enough to write seriously correct code. The Lean organization, already up to their necks in AI from all the funding flowing in since six months ago, of course immediately got on board... and now Claude Code is the 14th most active contributor in the main Lean repo, and growing fast. And why wouldn't it be. If you don't think too much about Anthropic's Department of Defense ties (despite the revoked contract, they're still deep), don't have much concern about consolidating power in a new tech duopoly, and don't worry about the ethics of how these models are trained, their society-wide effects, or what relying on them entails, then leaning on language models is an obvious choice -- write more code faster, rely on your type system's guarantees for correctness, bingo bongo. Why bother writing tedious code when Claude can write it, and correctly at that?

I hate this.

I do have serious qualms about generative AI. I'm personally most concerned about brainrot. I'm young! I'm still learning everything -- I'm still learning theorem provers, I'm still learning engineering skills, I'm still learning Lean. I can't use generative AI, because it means I will not learn things for myself. But more than that, I have serious ethical issues that prevent me from seriously using or relying on the output of these tools: I do not want to support tech companies that are profiting off of military contracts, that are pushing dependence on their product, that are ignoring consent and destroying the ethos of the internet. These are my qualms; they are personal and I will not be defending them. (But it does make me sick to my stomach to see so many people uncritically adopting generative AI -- uncritically wrt. social issues, and uncritically wrt. whether having a magic answer box is good on a personal / community level.)

Specifically here, though, I'm so bummed, because Lean is so lovely to write! And the people who are just generating it -- possibly the majority of users at this point, but I have no idea -- are missing out on it entirely. I care a lot about languages (of all kinds), and consider programming languages to be beautiful -- works of art, all with different tradeoffs in how the user can interface with computers and describe computation. I really like Lean. I think Lean is beautiful. But Lean is also now the generative AI language. As of this last month, it is written by and for generative AI.

And so now I feel gross writing Lean, and this sucks. Over the summer, learning it, and learning to write proofs in it, and writing tiny little pretty bits of code in it was so lovely. Now, I could just slop out a proof (or a program) if I wanted to. I don't, so I'll keep writing it myself as a way to learn interactive theorem proving, but it's like... the vibes are bad, man!

This summer's gonna feel different.

Anyway, thanks for the discussion question, ~addison. Also, yes, CTFs.

krig

Somehow, generative AI has resulted in a new creative outlet for me: forking projects that start pulling in vibe-coded slop commits.

Other than that, I used to teach a Python course for students at a vocational school, but I've decided not to do it anymore, mainly because the level of AI use among students is too depressing to me. I don't understand how we will have, for example, functional medical care in 10-20 years if today's students can't read, can't think, and can't function at even a basic high-school level without asking an LLM for help.

rileylabrecque

This is slightly different, but: Rocket League has been my sport of choice for the past 10 years. It's effectively car soccer/football, but the flow is closest to hockey.

Rocket League has largely been spared from cheating over the years, but modern AI has finally changed that: 'bots' are now doing things few human players can do, playing at the very highest level, and they have started to run rampant.

Obviously cheating in games has always been a problem, but this one hits a little different, because Rocket League does feel so much like a true team sport to me, and it had been virtually untouched until modern AI practices. The closest analogue: you play in your soccer league every weekend, and then one day the other team shows up with a Boston Dynamics humanoid robot that can hit 10/10 shots into the top corner of your net from their net, yet is disguised as a human.

I think the worst part about it, though, is that there are clearly enough people going "hell yeah, that's awesome, I want to get in on that". I don't think it's ruined yet, and the developers are trying to curb the bots, but it's certainly on its way downhill. Why would one even get on the field if the other team is all humanoid robots that no human can match?

At least when someone was wallhacking or aimbotting in Counter-Strike or whatever, there was still a real human on the other side just being shitty; that's no longer the case, which hits a bit different...

(And yes, we've had bots in online games for like 30 years, but they've always been either very obviously bots, not fully autonomous in such competitive gameplay, or not skilled enough to compete at the highest levels; all of this has changed in the last year or two.)