The LLM Is Not a Junior Engineer
44 points by carlana
First, I need to get something off my chest. It’s fairly common in our industry to anthropomorphize GenAI products and describe them as junior engineers or similar low-level coworkers. Stop it!
I totally agree. It is part of the reason why LLMs are getting out-of-control hyped. They are nowhere near human capabilities, and thinking of them "as human" is a mind-virus that is fueling the fear-mongering of a Skynet-style takeover.
Agreed fully. An additional aspect to think about: AI agents need external power, unlike real humans, who have evolved to be self-powered. No matter how hard you try, AI cannot beat 4 billion years of evolution. It would be foolish to think otherwise.
unlike real humans who have evolved to be self-powered.
I wish. Unfortunately we're still powered by externally produced food. We're not even close to photosynthesis level of self-powering.
In practice we're at about the same level - general society feeds you food and feeds AI electricity based on the understood usefulness.
AI cannot beat 4 billion years of evolution
It really depends on what the goal is. AI can and did beat 4 billion years of evolution at Chess and Go, for example. But AI doesn't have to spend time/energy evolving the arm pattern most desirable by other instances for better reproduction chances. The more we limit the scope, the easier it will be to succeed.
general society feeds you food and feeds AI electricity based on the understood usefulness
Everything you mentioned is the product of human work, not something given from an external source. Rarely individual work, but work nonetheless. And we are especially not given food or electricity based on our individual usefulness; you have to work hard every day of your life to be able to buy it or produce it yourself, or you will die.
In that sense, we are indeed self-powered.
This is just being useful with extra steps. You buy things with money, people give you money for work, they give you work if you're deemed useful right now or in the future. It's also not a product of just human work, but our whole earth system including plants and animals.
I disagree.
It seems to me your way of thinking is heavily biased toward very highly developed, very controlled societies, which it does match to a certain extent.
But most of human history is not that: no one gave us work; there was simply work to be done in the first place in order to live.
It's also not a product of just human work, but our whole earth system including plants and animals.
Yes, of course: everything lives in an environment, not in a bubble. You mentioned plants being self-powered through photosynthesis, but by definition they don't produce the light they feed on; they use the light of the sun. What living thing produces its own energy to live without its environment?
I think we just trip over the definition of "self-powered" here. If self-powered means humans can produce everything they need to live and evolve from within themselves, without relying on external sources (an environment), then no, humans are not self-powered.
But I think that when /u/elobdog says "unlike real humans who have evolved to be self-powered", it means that humans have their own agenda; they are self-powered in the sense that they live, they have needs and desires, and they can work toward realizing those needs and desires. And in that sense, AI agents are not self-powered. AI agents do not have their own agenda; they don't have needs and desires pushing them to work toward realizing them.
Or that a machine would even care about the physical world at all. Even we are trying to escape it into the electro-ephemeral.
That’s not the fault of some obscure techies who misunderstood. Pop figures like Harari talk about it as if it is Skynet.
It's fairly common in our industry to anthropomorphize GenAI products and describe them as junior engineers or similar low-level coworkers. Stop it!
Exactly. I shared /s/hhtuqh some time back where I argued that there are a few principles we should consistently follow when talking about AI in general (which apply just as well to LLMs in particular). To summarise them:
I try to stick to these guidelines when I talk about LLMs myself. For example, I deliberately say 'I queried ChatGPT' rather than 'I asked ChatGPT'. Similarly, I say 'The output from Claude indicates' or 'Claude produced' instead of 'Claude said'.
I would like to see the language around AI become less anthropomorphic and more technical, because I believe that precise language encourages clearer thinking and better judgement.
But alas, ideas like this do not travel very far when I express them on my little website. It would help if more prominent personalities articulated similar principles, so this way of speaking becomes more widely adopted.
I’m currently working with two junior engineers. I like them both, and I’m excited to have them at our company. I am on the critic-but-use-carefully side of GenAI adoption.
What weirded me out about this article, though, was that as it was trying to distance a real-life younger engineer from a GenAI, I found myself feeling critical of its comparison. I can’t tell you how many times a fact that I shared this morning, strongly and with reinforcement, is forgotten as the day goes on. They both are keeping various notebooks (their idea, not mine), compressions of what’s important. As I look over their notes, I often suspect that these compressions lose important details and amplify things that are less important. And their incentive loops ARE different from mine. I’ve been at the company a long time and am responsible to many internal and external customers. They may come to feel invested that way some day, and to be sure I’ve encouraged them in every way I can to feel that way, but the insecurities of youth lead them both to strongly prioritize being seen as getting things done, as knowing the answer when they don’t really.
And yet, they’re very much not the same. This article's attempt to differentiate them actually caused me to see parallels. Ironically, I feel as much as before that they (LLMs vs junior engineers) are not the same.
I can relate to your experience with your coworkers, only in my case, the people who cause me these troubles are not juniors but instead have been in the industry for ~20 years. The only trend I have noticed is that those who are strong adopters of LLMs seem to be in the habit of delegating their thought processes.
I do not want to make assumptions about your juniors, but I do suspect the current generation of juniors who have probably learned to code with LLMs from the jump are not in good shape to actually learn along the way
Have we even had LLMs long enough for anyone in professional coding to have learned with them "from jump"? Maybe in a few years but not yet I expect?
All of which is to say, if an LLM were a person (to be clear, he is not), he would be an absolute nightmare to work with.
And yet I (and I think many other devs -- most?), if given the choice between a very talented junior engineer, for free, and unlimited free use of your favorite top-of-the-line LLM, would choose the LLM. It wouldn't even be close. And you have to start positing extreme scenarios, like a free Fabrice Bellard, before it becomes not close the other way.
I have lots of "bad thoughts"™ about LLMs of my own, and agree with the author's, but I think this basic empirical fact needs to be reckoned with if you are going to make a statement like this.
I too advise folks not to anthropomorphize LLMs.
However when considering how to deploy LLMs it’s incredibly useful to compare them to a contributor. For example, if regular humans don’t get to push directly to main then why should LLMs?
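To make that concrete: the same gatekeeping humans get can be enforced mechanically, with no special-casing for agents. Here is a minimal sketch (the ref names and function are my own illustration, not from the thread) of the check a git pre-receive hook performs, applied identically to a human's push and an LLM agent's push:

```python
# Sketch of the ref check inside a git pre-receive hook.
# The protected ref set is an assumption for illustration.
PROTECTED_REFS = {"refs/heads/main"}

def push_allowed(ref_updates):
    """Each update is an 'old-sha new-sha refname' line, the format git
    feeds to a pre-receive hook on stdin. The check cannot tell (and does
    not care) whether a human or an LLM-driven agent produced the push."""
    for line in ref_updates:
        _old_sha, _new_sha, refname = line.split()
        if refname in PROTECTED_REFS:
            return False  # force the change through a pull request instead
    return True

# A push to a feature branch passes; a direct push to main does not.
print(push_allowed(["abc123 def456 refs/heads/feature-x"]))  # True
print(push_allowed(["abc123 def456 refs/heads/main"]))       # False
```

In hosted setups the same policy is usually expressed as branch protection rules rather than a hand-written hook, but the principle is the one above: the rule is about the ref, not about who (or what) is pushing.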
Without the experience of being a junior engineer, we don't get mid-level or senior engineers. Despite the many shortcomings of LLMs, with clear instructions they can perform simple tasks. As this write-up said, that's great if you want to do some tinkering on the side, but in a corporate environment, those simple tasks are extremely important for the development of younger engineers. Delegating simple tasks to AI just locks an entire generation out of progressing their careers.
(And delegating big tasks to LLMs is a recipe for disaster)
we still find ourselves in a hole in which every company wants to reap the benefits of trained juniors (by hiring the strongest type of junior, one with 2-3 years of professional, non-agentic coding behind them) but nobody's incentivized to actually do the training. this still works for companies with uniquely high retention, but certainly not for most
Unless true AGI arrives with the ability to operate in the real world, things will not change that much; we'll have productivity boosts here and there and the ability to rapidly generate throwaway prototypes.
The need for human ownership and responsibility makes it necessary to understand delivered solutions; and to be able to understand, we must first learn through both practice and theory. We cannot escape that.
As long as humans must be responsible and have agency, nothing fundamentally changes.