I let Claude Code configure my Arch install

13 points by willmorrison


hyPiRion

This was one of the things I hoped AI would help me with: broken Linux drivers and similar problems. You know, the actual boring stuff I don't really want to learn. So I let Claude Code attempt to fix my broken Intel MIPI camera on a ThinkPad Carbon, and after a million diagnostic commands (some of which didn't even work), Claude confidently told me how to fix it. I tried it and it didn't work (unsurprisingly), and I repeated this cycle one or two more times. By the end, my system was in a state where opening Cheese somehow caused my Bluetooth headset to occasionally disconnect from my machine. The actual solution was to pull in some newer packages from Debian testing, which I found by using my own brain and a bit of research.

The same thing happened when I tried to get ssh-askpass (or something equivalent) working on Wayland with Niri: Claude couldn't help me get it up and running, but deep in some random Reddit thread I found a comment that partially worked, and I changed some configuration to make it work on Debian.
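For context, a minimal sketch of the kind of configuration usually involved in getting a graphical askpass helper working — this is a generic OpenSSH approach, not necessarily what the commenter ended up doing, and the helper binary path is an assumption:

```shell
# Point OpenSSH at a graphical askpass helper. The exact binary varies by
# distro/package (e.g. ssh-askpass-gnome, ksshaskpass); this path is an assumption.
export SSH_ASKPASS=/usr/bin/ssh-askpass

# OpenSSH 8.4+ only: prefer the askpass helper even when a terminal is attached.
export SSH_ASKPASS_REQUIRE=prefer

# ssh-add (and ssh) will now prompt via the helper instead of the terminal.
ssh-add
```

On Wayland the helper itself must also support the compositor in use, which is typically where compositor-specific setups like Niri need extra fiddling.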

It's not that I don't believe the author... but when I read all these stories about how amazing LLMs are at solving problems, it feels like I'm being gaslit. It just doesn't match my experience at all.

threkk

> My overall enjoyment has gone down as agentic tools have gotten better. I can feel myself losing skills. I've started projects where I don't use LLMs at all to slow that down. But at work, the productivity gains are real, and I think using these tools is the right decision.

I agree with this. At work we have a huge range of AI tools to support us, and in many situations they are the fastest way of getting the job done, so it is hard to justify not using them. However, I also feel my research, debugging, and sometimes even coding skills deteriorating, so for my personal projects I don't use them, to keep myself sharp. In the end, life is a marathon, not a sprint.