Why I don't think AGI is imminent
14 points by dlants
I may well be wrong, but it seems that your argument presupposes that an AI that achieves AGI must have the same core capabilities that vertebrates do (object permanence, etc.).
Can it not be considered AGI if it takes a much more alien form? Something that can, say, advance mathematical research at a genius level, even if it remains poor at object permanence?
Isn't the "G" for general, though? Isn't the whole point that the computer will be effectively indistinguishable from a person, except that it will also be able to do a lot more, a lot faster? If it's just a really good mathematical research tool, but it can't deal with the fact that you are a specific person who exists in a physical world, how general is that, really?
Maybe this didn't come across clearly in the writing, but my point is that cognitive primitives are the foundation of robust reasoning, and current training methods don't seem to develop them.
Yes, AI can "solve" really advanced math problems that are in-distribution, but verifying that those solutions are valid still falls to humans. Similarly, I think doing novel research would require robust reasoning, and reasoning relies on these cognitive primitives: logic, causality, spatial reasoning, symbolic manipulation.
The only training modality we know of that has produced such primitives is life interacting with the real world, which is why I think embodied cognition is relevant, and why agentic training within world models may be a fruitful direction for research.
I don't want to sound harsh, but that's thousands of words and I couldn't find a definition of intelligence, general intelligence, or artificial intelligence. If we can't define it, how can we measure it?
I have gotten this feedback a bunch! And I have to admit, I'm not so interested in definitions.
I think the crux of the matter for me is that LLMs are currently brittle around things like number sense, causality, logic, spatial reasoning, etc. It seems we have only one successful example of a methodology for developing robust capabilities like these: evolution in the real world! It was interesting for me to compare that methodology to the ones we're attempting with LLMs, to try to highlight the differences between the two (a rough sketch of the kind of brittleness probes I mean is below).
Titling the post "I don't think LLMs developing these robust cognitive primitives is imminent" wouldn't have been as catchy, though.
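To make "brittle" concrete, here's a minimal sketch of the kind of probe harness I have in mind. The probes are toy examples of number sense, logic, and spatial reasoning, not a real benchmark, and ask is just a hypothetical stand-in for whatever model you want to test:

    # Toy harness probing the "cognitive primitives" I mean: number sense,
    # logic, spatial reasoning. Probes are illustrative, not a benchmark.

    # (prompt, substring expected in a correct answer)
    PROBES = [
        # number sense
        ("Which is larger, 9.11 or 9.9? Reply with the number only.", "9.9"),
        # logic (modus tollens)
        ("If it rained, the grass is wet. The grass is not wet."
         " Did it rain? Reply yes or no.", "no"),
        # spatial reasoning (north -> right -> right -> left = east)
        ("I face north, turn right twice, then left once."
         " Which way am I facing? Reply with one word.", "east"),
    ]

    def run_probes(ask):
        # ask: any callable taking a prompt string and returning the
        # model's reply; wire it to whichever API you actually use.
        for prompt, expected in PROBES:
            answer = ask(prompt).strip().lower()
            status = "ok" if expected in answer else "MISS"
            print(f"[{status}] {prompt[:48]}... -> {answer!r}")

    if __name__ == "__main__":
        # Stand-in "model" that always says 9.11, just to show the output shape.
        run_probes(lambda prompt: "9.11")

The point isn't any one probe; it's that answers to questions like these tend to flip under small rephrasings, which is what I mean by brittle.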
I have a non-technical reason for believing AGI isn't achievable: it's not viable under capitalism. I think a key aspect of intelligence is self-determination, and big AI companies will never fund research into AI that can refuse orders.