What creative technical outlets of yours have been ruined by generative AI?
65 points by addison
I'm watching over my alma mater's capture the flag event, and wow, it is dire. This used to be a creative outlet for me and an educational opportunity for students, but now it's almost entirely uncritical use of genAI tools; challenge developers are now "hardening" their challenges against this, but that makes them totally inaccessible to new players, so the event is losing its educational value and enjoyability as well. As a developer, why should I put any creative effort in when I know it's just going to be churned through a machine and not enjoyed by anyone?
Perhaps I just want to commiserate, but I'm also genuinely curious: what outlets for you have been affected similarly by this new wave?
I'm somewhat bummed about Lean.
Lean is a programming language / theorem prover that's really beautifully implemented, in both its syntax and its semantics. It has the single best syntax out of any language I've ever used, full stop. It's like Haskell, but wayyyy more intuitive, with a bunch of nice changes that all work well with each other, and take full advantage of Unicode, but have fallbacks (at least outside of math libraries) for when Unicode input isn't set up -- it's just so good.
The language semantics are excellent, too. Lean has dependent types, so it has first-class support for being super-expressive in a way Haskell and other languages have had to retrofit GADTs and the like to approximate. Since the whole language was designed around this level of expressiveness, it all slots together so nicely -- ex. there's a type class hierarchy like Haskell, but unlike Haskell, you've got both Monad and also a LawfulMonad that requires you to prove the monad laws hold for any instance of the class, which can be done through Lean's expressive type system and all its theorem proving infrastructure (i.e. tactics). And also, it's got nice do-notation, and syntax-rewriting macros, and insanely good tooling, and etc etc etc.
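To make the Monad / LawfulMonad split concrete, here's a minimal sketch in plain Lean 4 (no Mathlib needed): the monad laws that a `LawfulMonad` instance packages up can be stated and discharged directly, here for `Option`.

```lean
-- Left identity: pure a >>= f = f a. Definitional for Option, so rfl closes it.
example {α β : Type} (a : α) (f : α → Option β) :
    (pure a >>= f) = f a := rfl

-- Associativity: case-split on the Option, then both sides compute.
example {α β γ : Type} (x : Option α) (f : α → Option β) (g : β → Option γ) :
    (x >>= f >>= g) = (x >>= fun a => f a >>= g) := by
  cases x <;> rfl
```

A `LawfulMonad Option` instance is essentially these proofs bundled as type class fields, which is what lets downstream code rely on the laws rather than just hope for them.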
I like this language.
Six months ago, LLMs got good enough to write serious code. One of the effects of this is that a lot of people who don't know how to code -- cranks, specifically -- took to generative AI after hearing that Lean is, like, a programming language, but one that also lets you prove things. A bunch of spam and very annoying posts on twitter dot com ensued -- turns out, while writing code that compiles is easy, writing code that is correct is very hard. However, this also got Lean firmly onto the radar of AI folks.
One month ago, LLMs got good enough to write math proofs. This is a seriously compelling use of generative AI -- mathematical formalism and its guarantees are very useful in a wide variety of applications, from mathematics itself to proving useful properties about programs. And theorem proving can be tedious and boring work. It would be very nice to just write specifications and have the proofs fill in themselves... this is the ultimate goal of tactics metaprogramming automation, and LLMs can currently (non-deterministically) do it. Which is kinda crazy cool, to be fair. (It's taken a shockingly long time for this to be possible -- I would have really expected LLMs to be useful as better-than-BFS right away, but it turns out they needed specific posttraining work, and enough of it that this didn't occur until now.)
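For a sense of what tactic automation already does deterministically (the baseline that LLM-driven proving extends non-deterministically), Lean's built-in decision procedures can close whole classes of goals without any hand-written proof. A small sketch, assuming a recent Lean 4 toolchain:

```lean
-- `omega` is a decision procedure for linear arithmetic over Int/Nat.
example (n : Nat) : n + 1 > n := by omega

-- `decide` evaluates any decidable proposition to close the goal.
example : 2 + 2 = 4 := by decide

-- `simp` rewrites with a lemma database; here it closes a list identity.
example (xs : List Nat) : xs ++ [] = xs := by simp
```

The open problem tactics can't solve is picking the right sequence of steps for a genuinely hard goal, which is exactly the search problem LLMs are now being pointed at.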
One month ago, LLMs also got good enough to write seriously correct code. The Lean organization, already up to their necks in AI from all the funding flowing in since six months ago, of course immediately got on board... and now Claude Code is the 14th most active contributor in the main Lean repo, and growing fast. And why wouldn't it be. If you don't think too much about Anthropic's Department of Defense ties (despite the revoked contract, they're still deep), nor have much concern about consolidating power in a new tech duopoly, nor about the ethics of how these models are trained, nor about their society-wide effects and what relying on them entails, leaning on language models is an obvious choice -- write more code faster, rely on your type system guarantees for correctness, bingo bongo. Why bother writing tedious code when Claude can write it, and correctly at that?
I hate this.
I do have serious qualms against generative AI. I'm personally the most concerned about brainrot. I'm young! I'm still learning everything -- I'm still learning theorem provers, I'm still learning engineering skills, I'm still learning Lean. I can't use generative AI because it means I will not learn things for myself. But more than that, I have serious ethical issues that prevent me from seriously using or relying on the output of these tools: I do not want to support tech companies that are profiting off of military contracts, that are pushing dependence on their product, that are ignoring consent and destroying the ethos of the internet. These are my qualms; they are personal and I will not be defending them. (But it does make me sick to my stomach to see so many people uncritically adopting generative AI. Both uncritically wrt. social issues and uncritically wrt. whether having a magic answer box is good on a personal / community level.)
Specifically here though, I'm so bummed, because Lean is so lovely to write! And the people who are just generating it -- possibly the majority of users at this point, but I have no idea -- are missing out on it entirely. I care a lot about languages (of all kinds), and consider programming languages to be beautiful -- works of art, all with different tradeoffs around how the user can interface with computers and describe computation. I really like Lean. I think Lean is beautiful. But Lean is also now the generative AI language. As of this last month, it is written by and for generative AI.
And so now I feel gross writing Lean, and this sucks. Over the summer, learning it, and learning to write proofs in it, and writing tiny little pretty bits of code in it was so lovely. Now, I could just slop out a proof (or a program) if I wanted to. I don't, so I'll keep writing it myself as a way to learn interactive theorem proving, but it's like... the vibes are bad, man!
This summer's gonna feel different.
Anyway, thanks for the discussion question, ~addison. Also, yes, CTFs.
This resonates so much with me. I recently got laid off, and I've been revisiting Rocq. I feel the same way as you. I like writing proofs! But now Claude can spit out a reasonable answer for me. What am I doing? Who am I doing this for?
I love programming language theory, but AI is changing how I feel about it. To me, programming languages are like an intermediate representation that sits between abstract thought and a machine. There's a pretty big gap between those two ends! So much research goes into making languages expressive, concise, and logical in order to close that gap. I used to believe that this research was a necessary good: the smaller the gap, the better the industry. With better programming languages, we can write better, safer, faster software. And like you, I see this effort as a beautiful thing. PL research asks the question: how can humans best express computation?
But if AI is generating the code, it suddenly moves the goalposts. The gap no longer sits between a human and a machine, but between a machine and a machine. All the typical foot-guns in any mainstream language can now be circumvented by a sufficiently capable AI. Who cares how concise or expressive a language is? It's generated in a matter of seconds! Obviously humans still have to read code, but I just don't know what this means for PL research long term. I've been considering going back to school for PL for a long time, but now I don't know. It's not like PL will die, but I find myself standing on shifting sands.
Who cares how concise or expressive a language is?
… the person debugging an issue nobody managed to even describe in a way that LLM can handle?
Optimistically, this could make people notice that the best shape to write code, the best shape to read code, and the best shape to modify code — they do not necessarily coincide. Realistically, well, nothing constructive is going to happen about that distinction, as we had previous opportunities to notice.
I'm with you on this. I had been eyeing Lean for quite a while, it seemed like a very cool language and ecosystem. Now...
As of this last month, it is written by and for generative AI.
Yeah. I feel gross about it too.
Somehow, generative AI has resulted in a new creative outlet for me: forking projects that start pulling in vibe-coded slop commits.
Other than that, I used to teach a Python course for students at a vocational school but have decided not to do it anymore. Mainly because the level of AI use among students is too depressing to me. I don't understand how we will have, for example, functional medical care in 10-20 years if every student today can't read, can't think and can't function on even a basic high school level without asking an LLM for help.
This is slightly different, but Rocket League has been my sport of choice for the past 10 years; it's effectively car soccer/football, though the flow is closest to hockey.
Rocket League has largely been spared from cheating over the years, but modern AI has finally changed that: 'bots' are now doing things few players can do, playing at the very highest level, and starting to run rampant.
Obviously cheating in games has always been a problem, but this one hits a little different because rocket league does feel so much like a true team sport to me, and it has been virtually untouched until modern AI practices. The closest analogue is you play in your soccer league every weekend, and then one day the other team shows up with a boston dynamics humanoid robot that can hit 10/10 shots into the top corner of your net from their net, yet is disguised as human.
I think the worst part about it, though, is that there are clearly enough people going "hell yeah, that's awesome, I want to get in on that". I don't think it's ruined yet, and the developers are trying to reduce the bots, but it's certainly on its way downhill. Why would one even get on the field if the other team is all humanoid robots that no human can match?
At least when someone was wallhacking or aimbotting in Counter-Strike or whatever, there was still a real human on the other side just being shitty. That's no longer the case, which hits a bit different...
(and yes we've had bots in online games for like 30 years, but they've always been very obviously bots, not completely autonomous in such competitive gameplay, or not skilled enough to compete at the highest levels, all of this has changed in the last year or two.)
Thank you for sharing this. I never even thought about this as a potential problem.
For sure, the comparison to your CTFs is so remarkably similar too, you could easily rewrite my post to just replace it with CTF verbiage and it still would mostly work... Just wanted to try to get something a little further afield than most of the stuff we here are already very familiar with.
I do think it's important to note that Rocket League does feel like a "creative technical outlet" for me. There's skill and mastery, depth in both mechanical input and just being able to quickly read the state of the game: where players are going to go, what they are going to do, where you should be when that happens, etc. The "performance" of how well you play is the output.
edit: Thanks for making this post in the first place. Lots of interesting commentary!
I didn’t even think about how LLMs would pretty much ruin the fun of CTFs.
Hint: you’re not usually supposed to beat all of them!
You know, back when I was young (leans back in his old-people armchair), CTFs would only happen 2-3 times a year. So we would archive the challenges and try to go 100% in our own time to prepare for the next one.
I haven't played one in a while, but that's such a bummer; I was wondering what it would be like to write one and just haven't bothered because it's unlikely to be fun.
Years ago I wrote one that used Gopher and RSH atop Inferno; I guess going to obscure protocols would be interesting, but it seems so antithetical to the fun of discovery to have to take such measures.
Lots of computer stuff feels less rewarding if you keep thinking that an LLM could do all of this much better and faster.
Maybe key is to just not to think that? ;)
Or if you focus on why you do it, not how or why others do.
I grew up playing an instrument, I was good, but not great. Could I buy music that sounded better? Easily and I did. But that didn't stop me from enjoying playing.
For most of my life I have written code for fun. Some things are useful for others, others are just me exploring what is possible. Could I buy copies of some of this code and have it work 10 times better in 1/10th of the time? Easily and I did. But that didn't stop me from enjoying programming.
I often find myself digging deep into research on some subject I don't know much about, usually because friends or family disagreed and I had to figure out why I believed I was right. Could I give back the Wikipedia answer? Easily and sometimes I did. But that didn't stop me from reading published papers, building Excel charts, and typing long emails to lay out the argument.
Yes, AI can do these things, but that has no bearing on my enjoyment of doing these activities for myself.
Fully agree, and it confuses me a bit, because I'm finding it hard to reconcile how people can say at the same time that they had fun programming and that, since LLMs can do it, they now find it less fun. To me these feel like contradictory views (in my mind -- I'm not saying it's nonsensical at all, but with how my mind works it feels very strange).
If it was fun before, it should still be fun, because nothing about the act of you doing it has changed (except maybe in work settings where you are forced to use it). I play Go and chess even though a Raspberry Pi can beat me at them, because they're rewarding and interesting in themselves. Learning is the same; programming is the same.
It makes me wonder if:
I don't know. I guess chess masters probably went through a similar internal conflict with Deep Blue, and Lee Sedol had a similar response to AlphaGo. But I guess it's encouraging to see that people still play Go and chess for fun despite humans being worse at them than machines. There is light ahead.
One potential reason would be if, in your mind, you believe that you can be "the best" at what you are doing. In that case, I can see how seeing an LLM spit out something might be frustrating, akin to Deep Blue, as you note. A drum machine can keep time better than I ever will, but I still love laying out some triplets and paradiddles just because I can.
For me, these have always been art forms, forms of expression, things to satisfy my curiosity. And none of that relied on what anyone or anything else could or could not do. There isn't just light ahead, there is light here, now.
This works for things where the objective is personal edification, yes. But CTFs, for me, were more about seeing the "aha!" in others. I don't fear LLMs being able to do challenges better. I dislike that the culture of a community has become more about doing things better than the "thrill of the hunt" so to speak. I'm not going to tell people how to spend their time, and this was already moving this direction before LLMs (the mega-merger teams I disliked for similar reasons), but this really feels like a destructive acceleration.
Hm, not even "doing things better" but "doing things faster". The elegance of some solutions would be beautiful. Now it's almost entirely hacked together slop, and the people who would make the elegant solutions are too discouraged to play out of fear of not being good enough.
This remains my favourite write-up for a challenge of mine: https://github.com/yarml/Writeups/tree/main/TamuCTF2023/Embedded-Courier I am fearful I will not see something like this again. The quote that best captures how this feels:
what makes CTF challenges more fun is when the solution is not obvious, the more dead ends you go through, the more you learn too, and the joy of coming up with the solution at the end pays up for all the frustration.
I take great pride in writing challenges that accomplish this. I fear I won't see the "aha!" again. I know I probably will, but watching people disassemble my puzzles with robots they pay just to avoid spending time on something I crafted for them just sucks the air out.
Advent of Code feels like a victim of this as well.
How? Does someone else using an LLM prevent you from enjoying solving the puzzle yourself?
No of course not, but part of it is that it's a collective event and show of skill. That part is entirely gone now.
Maybe these kind of events should be proctored and live to guarantee that nobody uses an LLM.
Nonsense. See the AoC subreddit for all the creative energy and passion people put in AoC. The visualizations. The discussions. The YouTube channels. I’m sure there are some folks speed running it with Claude but the community last December was still amazing as previous years. “Entirely gone” is just not true.
What I like to call 'fun programming' is no longer enjoyable. I am, however, enjoying vibe coding some utility apps. But that turns into yak-shaving without achieving anything meaningful.
I have recently started asking AI to quiz me -- domain knowledge, syntax, etc. This is keeping me sharp, even though at work I'm working with AI on a lot of things.
Can you elaborate about what "fun programming" means? I've been thinking about maybe the same thing for a while, years maybe. There are people who put a dollar value (not even an economic value) on some effort, and then say their job is to provide value to The Business, and nobody should do a minute's more work on some problem if The Business is happy with the current solution. And that drives me nuts. If you've ever done the Combinatory Logic part of Smullyan's To Mock a Mockingbird you know that's not everything there is to programming, and "providing value to The Business is sufficient" is what someone who peaked in high school says. But maybe I've got a different or worse definition than you do. Please let me know!
I remember @munificent talking about it a bit, but I used to really enjoy working on programming languages that could be extracted to other languages, so that I could write code in one and then share that across ecosystems (with the correct model). I've pivoted away from that (and syntax more generally) into some semantics and retro computing (I've been working in PL/I quite a bit), but it does feel like I'm focused more on things for myself than on things I'd share with others. It's fine, but it does feel more like houseplant programming than my original goal.
I also don't care as much about blogging or even social media at all; inauthentic interactions always felt tiring, but now between slop code PRs and "speed up all things everywhere!" I feel like sorta retreating to smolnet and staying there. I deal with LLMs and what not at work enough to not want to automate my hobbies too.
None. I'm making good progress on creative coding projects thanks to coding llms.
I know for some folks, LLMs have streamlined the creative process. The outlets which have suffered for me are those where the process is the objective, not the completion itself.
I'm glad not every outlet has been negatively affected by this, but I think it's important to reflect on what has. There are some experiences which will likely not be the same again, and I'm curious as to what has changed that I wouldn't expect and maybe will never experience myself.
It's not a problem if someone uses an LLM but can understand and explain every symbol of the generated code. The problem is a schoolboy buying an LLM subscription for $10/month and using code he can't understand at all. And it's a real problem, because some of these kids go on to commit their AI slop to big OSS projects...
I used to quite enjoy writing tiny scripts or programs to solve some problem or another of mine. They were rarely impressive or amazing, and almost none of them have ever been seen by anyone other than me, but they were something I could take a little bit of pride in: this problem isn't a problem any more because I solved it myself.
But now when I sit down to write one of these things, I pretty much spend the entire time feeling like, "What am I doing here? Claude Code could spit this out in half the time it'll take me." These little programs are usually solving problems that are small and well-defined enough that a one-shot prompt would get them about right.
That's not the worst part, though. When I do work up the motivation to spend time tinkering, I find myself having to fight the impulse to go consult an LLM every time I run into some minor roadblock. Sometimes I fail to fight that impulse. Why spend time figuring this out myself when I could just ask the AI and move on to the next thing? And that really sucks, because the little "Aha!" moments when I figure out how to get past minor roadblocks are where a great deal of the fun of programming has always been for me.
My recreational programming was never about efficiency, but now it just feels like a complete waste of time.
This sounds eerily similar to a story I read recently: https://sightlessscribbles.com/posts/the-colonization-of-confidence/
The Revision demoparty is happening in a couple of weeks. The demoscene in general has been getting away with it, likely more due to its niche focus than to LLMs' lack of creative capabilities. I wonder if this year will be different...
Hard agree about CTFs. I didn't have much time to play over the past 1.5 years, but recently came back to help with mentoring our local group, and the difference is massive. Opus 4.6 is able to autonomously solve most challenges (and probably all challenges solvable by junior players), the whole jeopardy format feels dead (especially rev, my favorite category).
In on-site competitions, we're thinking about banning all autonomous LLM tools, but not sure what can be done about online CTFs. I expect the community to go in a similar direction as chess and similar game communities did, using LLMs only for training, declaring their use as cheating in competitions and relying on player's honor to not break the rules in online competitions.
Sadly, I don't think the chess strategy will work. Simply put: we intentionally design challenges to be simpler. It's not going to be possible to distinguish between effective players and effective LLM use scalably.
I'm creating "AI traps" in my projects for students. The idea: find a problem that AI will solve in a way a human never would, and just include it in the project. The "genius" who solves it with AI (and can't explain how it works or why it's done that way) will be permanently banned from all events associated with me or my friends, with no option to dispute.
Literally none of them. For me. YMMV, and I don't write that to invalidate anyone else's experience. However, I want to share my own.
I did wonder if one of my side-projects might be susceptible to being replaced with AI. I've pondered it, but am not investigating further (for now--I might later).