I let Claude Code configure my Arch install
13 points by willmorrison
This was one of the things I hoped AI would help me with: broken Linux drivers and similar issues. You know, the actual boring stuff I don't really want to learn. So I let Claude Code attempt to fix the broken Intel MIPI camera on my Carbon ThinkPad, and after a million diagnostic commands (some of which didn't even work), Claude confidently told me how to fix it. I tried it and it didn't work (unsurprisingly), and I repeated this cycle one or two more times. By the end, my system was in a state where opening Cheese somehow caused my Bluetooth headset to sometimes disconnect. The actual solution was to pull in some newer packages from Debian testing, which I found using my own brain and a bit of research.
The same thing happened when I tried to get ssh-askpass (or something equivalent) working on Wayland with Niri: Claude couldn't get it up and running, but deep in some random Reddit thread I found a comment that partially worked, and I changed some configuration to make it work on Debian.
It's not that I don't believe the author... but when I read all these stories about how amazing LLMs are at solving problems, it feels like I'm being gaslit. It just doesn't match my experience at all.
My overall enjoyment has gone down as agentic tools have gotten better. I can feel myself losing skills. I've started projects where I don't use LLMs at all to slow that down. But at work, the productivity gains are real, and I think using these tools is the right decision.
I agree with this. At work we have a huge range of AI tools to support us, and in many situations they are the fastest way of getting the job done, so it is hard to justify not using them. However, I also feel my research, debugging, and sometimes even coding skills deteriorating, so for my personal projects I do not use them, to keep myself sharp. In the end, life is not a race but a marathon.
I will worry about this as long as we don't have on-par open-source models we can run on our own machines. The risk of intelligence brownouts is real and worrying.
The Qwen 3.5 models are shockingly good - the qwen3.5-35b-a3b one runs comfortably on my 64GB Mac and really does appear competitive with the best closed model of ~18 months ago.
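For the "runs comfortably on my 64GB Mac" claim, a rough back-of-the-envelope estimate (a sketch; the ~10% overhead factor is an assumption, not a measured number) shows why a quantized model of that size fits:

```python
def quantized_size_gb(params_billion: float, bits_per_weight: float,
                      overhead: float = 1.1) -> float:
    """Rough in-memory size of a quantized model, in GB.

    overhead is an assumed ~10% allowance for embeddings,
    KV cache, and runtime bookkeeping.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8 * overhead
    return bytes_total / 1e9

# A 35B-parameter model at 4 bits per weight: roughly 19 GB,
# which leaves plenty of headroom on a 64 GB machine.
print(f"{quantized_size_gb(35, 4):.0f} GB")
```

The same arithmetic explains why the much larger full-size models discussed below are out of reach for typical consumer hardware even at aggressive quantization.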
Have you ever tried these models for agentic coding? And not just the local/smaller versions like 35b, but the full 397b versions etc. I'm wondering what we'll be left with when the AI vendors inevitably fully enshittify themselves.
No, I need to try having them drive a coding agent properly. That's been a huge issue with previous local models I've tried: driving something like Claude Code requires them to stay useful across dozens or even hundreds of tool calls.
(I compared them favorably to a ~18 month old model, but those couldn't run agent harnesses effectively either.)
If GLM 5 gets open-sourced it will be great, as GLM 5 is smart enough for agentic coding...
it's open-weight, no? you can go download it on hugging face. if you mean releasing the training data... i don't think that's happening. also GLM 5 is huge, i don't think you could realistically run it on consumer hardware, maybe with extreme quantization. (there's some 2-bit quant at ~241gb that you could technically squeeze onto a max-spec mac studio/pro.)
Aha, I hadn't realized it had been released yet. Sure, it can't run on consumer hardware. We need worker collectives to pool money for such hardware and share it.
I see, thanks for the answer. I always feel uncomfortable with any workflow that relies on AI vendors remaining reasonable, which we all know to be a very temporary state of affairs.
In your opinion, what are the minimum hardware requirements for running a local model productively?
(Also, relevant to this site's interests, are all local models runnable on FLOSS operating systems?)
If you need very beefy hardware to get access to GenAI, naturally the centralized solutions will win out. Especially as consumer hardware prices are rising significantly, in part because of increased demand from those same centralized solutions.
fwiw, i've used perplexity pretty extensively whenever i'm troubleshooting some linux issue or configuring something: most recently when i wanted to set up spotify-qt with librespot, and when i was setting up snapper on fedora with btrfs. obviously i try to follow some tutorial if it's a well known thing (e.g. snapper w btrfs) but at some point i'll inevitably deviate.
i have played around with linux since like 2018 but i have a dual-gpu laptop so i didn't make the switch from windows to linux full time until summer 2024.
i use perplexity (just through a webui), so i'm running (and vetting) all the commands myself. even if i don't fully understand all the commands, i can generally tell if i like the direction it's going, and if i don't, i push back and ask for feedback on the solution i think might work.
perplexity's nice because it's built around web search, so it usually doesn't make stuff up. nowadays even claude and chatgpt can browse the web, i'm just used to perplexity.
i think the main advantage for me is that it saves me a lot of frustration. in the past, i would often spend ages browsing through old archived forums, trying solution after solution that didn't work, and when i finally fixed the problem, most of what i felt wasn't satisfaction that i'd solved it, but relief that i could finally stop spending time on it. basically i'm not a huge fan of the act of troubleshooting - i would just rather have the system work as i expect it to :)
before perplexity, i had that one linux friend i would annoy about stuff like this, but ever since he started college and got a job, he doesn't have that much time to help me troubleshoot some linux issue for 4 hours in the middle of the night. (if you wanna be my new linux friend, lmk, i'd love to have more linux friends)
back to the topic of the post: some could argue (correctly) that my approach is not as hands-off as claude code, but i think it's a good thing. I want to learn more about linux, and i want to understand the problems, why they're happening and what the commands that fix them do.
whenever i'm troubleshooting or setting something up, i usually keep a markdown doc of the steps i took as i go, in a tutorial/guide style, for more than one reason.
here's an example: https://share.note.sx/qmtzutp3#OgYatCPYeyrbveRbsC5k4TLbJUYV4OOQo3fzyP69PaE
anyway, i guess my point is that llms (especially with web search) can be pretty useful when troubleshooting/configuring linux, but i don't like having claude code/an agent do everything for me.
For me, after switching to Linux and back for the past 10 years, having Claude always open in my dotfiles repo made all the difference.
I will debug issues with my install and it will fix whatever is broken that day. It turned my setup from an unstable time sink into a rock-solid system I look forward to using every day.
However, I spent maybe 95% less time on this install than any previous one, and the result is better than anything I've built by hand. I think that's worth it.
Man not only wants to be loved, but to be lovely.
A cool follow-up would be to have Claude dump that knowledge into an Ansible playbook. That way you have a deterministic way to (re)set up your computer the way you like.
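A minimal sketch of what such a playbook could look like (the package names and dotfiles paths here are hypothetical examples, not taken from the thread):

```yaml
# Hypothetical example: package names and paths are placeholders.
- hosts: localhost
  connection: local
  tasks:
    - name: Install the packages this setup depends on
      become: true
      ansible.builtin.package:
        name:
          - git
          - pipewire
        state: present

    - name: Symlink dotfiles into ~/.config
      ansible.builtin.file:
        src: "{{ ansible_env.HOME }}/dotfiles/niri"
        dest: "{{ ansible_env.HOME }}/.config/niri"
        state: link
```

Run it with `ansible-playbook playbook.yml`; because Ansible tasks are idempotent, re-running it converges the machine back to the declared state rather than repeating work.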