I'm OK being left behind, thanks
87 points by gerikson
I've had this feeling since Kubernetes came out. I'm over tech. I'm learning a new career and slowly making the transition into art. I know it won't pay the same but the work is so much more fulfilling.
The smartest thing my old roommate ever said (as a non-technical person) was when I griped about how unfulfilling Kubernetes was
"So it's the computer version of Marx's alienation of labor?"
I also hate lots of modern technology coming from what some have described as merchants of complexity. I think there's still hope and I encourage you to reconsider.
Rigorous, well-crafted technology is simple. But simplicity is hard to achieve. Hoare, Dijkstra, Wirth, and others already made this observation long ago.
Simplicity has lots of qualities. One of them is enabling correctness by construction. In the era of vibe coding and AI slop, I want to double down on that.
For some inspiration, read Concrete Semantics, Software Foundations, or some related literature. See what small companies like Galois or Tweag are doing.
The core argument here, that it's okay to wait for a technology to mature, is totally fine.
I think there is quite a bit of appealing to extremes or reductio ad absurdum here, and I couldn't tell whether that's purely frustration slipping through or a joke being made at the expense of people who sound like this but pro-AI. Not important though, just amusing, and I wasn't sure what to make of it.
The one argument I don't fully understand is "Some early investors made money - but an equal and opposite number lost money." (And the other examples, e.g. HTML vs Flash.) So what's the intended takeaway here? That because there's downside, it's better not to participate at all? Opting out is still a choice within the same system, since that system is [for better or worse] most of the western capitalist societies. By waiting you're effectively taking a position on timing and risk, e.g. you come in later when it's more mature and take less risk but yield lower returns because of it. Again, nothing wrong with that at all, people do this every day; I'm just struggling to understand this point within the broader arguments here.
(Fwiw, I skipped cryptocurrencies completely. I dabbled with it in 2012ish like a lot of other tech people. I didn't find any technical value and still don't and regret nothing about avoiding it. I have friends who made astronomical sums of money from it, good and happy for them. Point being, I'm not on every bandwagon. :) But as a disclaimer I do see value in AI! Not trying to defend it here though, because I'm not someone who tries to push it on people who don't want it. I don't care what people choose. Edit: also totally skipped VR/Metaverse stuff. I dabbled and came to the same conclusion as crypto, mostly.)
This quote struck me too.
If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.
It reads as a reaction to being pushed into adopting AI. Which is happening in some environments... but the broader piece doesn’t really engage with that dynamic. It sort of slips between "no one should pressure you" and "there's no cost to waiting," which feel like different arguments.
The other piece I'd add is that opting out isn’t neutral. Fast-following is a well-worn strategy, but it’s still a bet.
Where I diverge with the OP is that if there is an outlier to that strategy, AI seems a strong candidate. There is a compounding element that’s harder to compress later.
Piling on to your FWIW I've skipped plenty of waves (crypto included - I dropped out completely when the ICO/NFT era hit). No regrets there. I just don’t think "I'll pick it up later if it matters" is always as low-cost as it sounds.
By waiting you're effectively taking a position on timing and risk, e.g. you come in later when it's more mature and take less risk but yield lower returns because of it.
That perspective makes sense for simple investment decisions whose success can be evaluated trivially and which can be opted out of at any time.
Life decisions are more like hiking in a hilly landscape. Some choices change your perspective, open up new routes and cut you off to others.
Investing time in using LLMs looks like a tricky proposition, at least from a personal perspective. By its very nature, any skills learned will have a short shelf-life. The software landscape itself is likely to change greatly due to LLMs; even if they fail to deliver on all other promises, they are a UX revolution, and building 2020s-style software with them feels like building faster horses, to quote Henry Ford.
2020s style software is more akin to a lame horse covered in glitter. And also your analogy doesn't even make any sense, it's not like LLMs would make the software product any faster. So far the only half-decent use they have is speeding up the development in well-established fields.
What I meant by that is the practice of using new technology to accomplish old goals, without moving the goalposts. But yes, if you want to take the horse analogy very literally, what we're doing is using LLMs to throw more glitter on the same lame horses. Faster.
In all fairness, more people are employed writing software than ever before, and the goalposts haven't moved much in recent years. Maybe these lame, glittery horses really do represent the ultimate incarnation of the software industry.
That said, I do think LLMs at least have the potential to move the goalposts. Coding tools, though, are not going to do that.
My experience is that my productivity is currently being measured against coworkers who are embracing AI. My company is hyper-focused on output. So I'm using AI. I suspect the quality of work suffers a little bit versus being more hands on, but I need the job.
The author is retired, and I think hasn't been employed as a software developer for years, which probably changes his perspective a great deal compared to those of us who now have to deal with the expectation that our productivity will increase, regardless of whether this is actually true.
When someone's perspective strikes us as a little too odd, it's helpful to understand the author's background (same goes for comments here!). We might end up agreeing with the statement after applying the Golden Rule, or we can dismiss it with prejudice, e.g. "this person's yelling from the ivory tower window."
Forget being left behind, I want to be left alone
Not a day goes by I'm not forced to think about this awful technology.
for a while i was okay with accepting that my job search post graduation would be more difficult if i stuck with writing software on the web w/ a lot of purpose and intention and care, with minimal use of ai tools. i'm about a year in, and have come to the conclusion that it might not just be more difficult, but legitimately impossible. i can't be ok being left behind because i'm currently being left behind! i agree with this wholeheartedly but i am seriously at a loss for what to do
I will say that the job market is awful for everyone in tech right now unless you're an AI researcher from one of the big labs. I can't tell you not to use every trick in the book but do trust that things will get better, eventually, if our civilization survives the current crisis.
When I graduated in 1990 it took me three years of crappy jobs to find a job I liked and thrived in. In 1998 I left that job due to family circumstances, and from then until 2022 I was generally looking for work every 12-18 months. Job hunting is a full-time job, and What color is your parachute? provided some amazing insights into the process; parts of it helped me succeed multiple times. I have been really privileged in that I have always been able to walk away from crap employers, but being able to learn and be flexible will always be valuable. I hope for your sake that your conclusion is wrong, and looking at the risks associated with using AI, I think we will see some amazing opportunities to fix the mess that slop creates.
I agree that the "getting left behind" argument is mostly nonsense. It's not like it takes years to figure out how to launch Claude Code and tell it "implement X". If anything, it'll get easier as models become more intelligent, while the patterns we're learning now to make these imperfect tools more effective are probably going to be useless by this time next year.
I could see an argument for learning to "massively multitask", so you can be TREE(3)% more productive with your swarm of agents. But I'm not sure I believe that either, because the human brain's attention capacity is still quite limited, so in my experience multitasking rarely leads anywhere other than many things done poorly.
I don't agree, however, with comparing AI to crypto (as many others have done). It has always seemed quite clear to me that crypto has never been more than a Ponzi scheme dressed up in (admittedly interesting) technical jargon. But AI (well, LLMs) is a completely different story. If you had shown me 5 years ago what we're able to do now, my mind would have been completely blown. I'd have thought it was magic. I don't see how one can dismiss the current capabilities of LLMs as "a bit shit" and "not as amazing".
I don't see how one can dismiss the current capabilities of LLMs as "a bit shit" and "not as amazing".
That LLMs exist at all is indeed pretty cool. Wow, we can model language in an interesting way, and it can generate convincing responses to some questions!
But LLMs for coding, as products, are not "amazing" - they're products that have specific strengths and weaknesses, and in many people's experiences, more weaknesses than strengths.
But LLMs for coding, as products, are not "amazing" - they're products that have specific strengths and weaknesses, and in many people's experiences, more weaknesses than strengths.
I agree that they have many weaknesses, and they're definitely not human-level in terms of intelligence, but I really don't see how "amazing" can fail to apply to their current capabilities.
A bit of a dumb comparison, but ~30 years ago when Harry Potter came out the idea of paintings that talk and respond to you was considered magic. Now we have technology that can do that and even more. How can it not be "amazing"?
Every time someone says something about being "left behind" I just keep coming back to the fact that when I started making software I was 50 years behind. I caught up in a couple years. Learning at that pace was so fun. Now when people suggest I will be left behind I want to say "do you promise?"
I mostly learn tech that I'm interested in, and I've been fortunate enough that that's led to a decent career.
I ignore a bunch of technologies, and if someone wants to ignore AI, that's fair.
That said, I think the author undersells the benefits of early adoption/moving.
I didn't use Git when it first came out. Once it was stable and jobs began demanding it, I picked it up. Might I be 7% more effective if I'd suffered through the early years? Maybe. But so what? I could just as easily have wasted my time learning something which never took off.
When is it worth learning or contributing to a new technology? Only when it's matured?
I'm struggling to think of anyone who has earned anything more than bragging rights by being first. Some early investors made money - but an equal and opposite number lost money.
Is investing zero sum in this way?
Even if a situation is zero sum, you might still prefer to make an informed decision. What is the alternative? Only pick sure bets?
There's a risk/reward factor here too. A person with an established career can afford to lean more into their career and less on exploration, and maybe has more to lose if they miscall a bet. An early career person has room, and more opportunity, from taking risks.
One way to think about it is that "technology is everything that doesn't quite work yet," so being in the technology business means frustration and waste, with occasional insights into the future. If you want or need to avoid the frustration and waste, or you're not in a position to benefit from occasionally predicting and influencing the future, then the only remaining reason to dive into tech is curiosity. It's totally OK to focus on non-technical aspects of life. Those are the great majority of what's happening, and they're important.
While I agree with the broad stroke argument in this article, I am very confused about this example:
Why should I invest in learning the equivalent of WordStar for DOS when Google Docs is coming any-day-now?
My memory of WordStar is pretty hazy these days, but Google Docs is easily the worst document editing tool I can imagine. It has comments and suggest mode, and those are about the only good things I can say about it.
Ha, yes, that comparison rubbed me up the wrong way too -- I hope he was being ironic. I don't remember ever seeing lag when using WordStar for DOS (actually, I used Word Perfect 5.1 for DOS), nor being nagged to use AI tools. Apparently George R. R. Martin wrote with WordStar for DOS as recently as 2020. I use WordStar diamond keyboard navigation on Linux daily.
I used Word Perfect 5.1 for DOS
My kingdom for "reveal codes" :)
Honestly, the Markdown side-effect of LLMs has been appreciated by me. My preferences generally leaned towards AsciiDoc or reStructuredText, but I'm more than happy to settle on Markdown instead of needing to use Word or Google Docs. The fact that non-programmers are learning and accepting Markdown as part of dipping their toes into this has meant I can spend more time in Emacs and less time in WYSIWYG editors, and that's lovely!
Fascinating comparison in the first few paragraphs. In the counter-factual timeline where crypto had turned out to be "the future of money", there would have been a benefit to "getting in early" - if you'd bought a glut of bitcoin when they were available for pennies and sold some for a large profit, your life would be meaningfully better. Whereas the only long-term benefits to being early to AI - when, as the author notes, skills honed now might be useless in a few months on the next iteration - are for companies building tools ("selling pickaxes"), not for people.
Of the smart people I knew who actually got in early (not a large group), most also got out early. One is retired from work; another could be, but is a "secret" multi-millionaire and has cut ties with most people who know. I don't share the same sentiment on crypto: I CPU-mined a block and didn't keep the key.