An AI Called Winter: Neurosymbolic Computation or Illusion?

14 points by tumdum


Corbin

It's an illusion. Specifically, it's the ELIZA effect.

There are at least three primary categories of thinking. (Transformer-based) LLMs are memetic, primarily using System 3; they have neither System 1 hormones nor System 2 rumination. Fusing Datalog and LLMs is fusing System 2 and System 3; the resulting system has no basis for emotional stances or intentionality, regardless of whether its internal state includes syllogisms or deductions. Previously, on Lobsters, I gave a rough rubric for estimating how inundated a person is with LLM-sourced memes.

Lojban also has an s-expression representation. So you could imagine, for instance, translating text into Lojban and evaluating it directly into something like propagator constructors; then you'd have something that supports some amount of testing and extrapolation of the statement. If you did something like that, particularly with a database of well-known relations, maybe you could do something interesting. A minimal sketch of the propagator half of that idea follows.
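
To make the propagator half concrete, here is a minimal sketch in Python, loosely after Radul and Sussman's propagator model; every name below is hypothetical. Cells accumulate partial information, propagators re-fire when their inputs change, and a conflicting value surfaces as a contradiction:

```python
# Minimal propagator-network sketch (toy; all names hypothetical).
# Cells hold partial information; propagators re-run when an input cell changes.

class Cell:
    def __init__(self):
        self.content = None          # None means "no information yet"
        self.watchers = []           # propagators to re-run on change

    def add_content(self, value):
        if self.content is None:
            self.content = value
            for watcher in self.watchers:
                watcher()
        elif self.content != value:
            raise ValueError(f"contradiction: {self.content!r} vs {value!r}")

def propagator(inputs, output, fn):
    def run():
        if all(c.content is not None for c in inputs):
            value = fn(*[c.content for c in inputs])
            if value is not None:     # None from fn means "no conclusion"
                output.add_content(value)
    for c in inputs:
        c.watchers.append(run)
    run()

# Toy use: the bridi {ro tirxu cu mlatu} as a one-directional constraint,
# tirxu(x) implies mlatu(x).
is_tirxu, is_mlatu = Cell(), Cell()
propagator([is_tirxu], is_mlatu, lambda t: True if t else None)

is_tirxu.add_content(True)
assert is_mlatu.content is True
```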

Lojban is merely a syntax for second-order logic pre-equipped with a basis of a few thousand semantic relations. The strong Sapir-Whorf hypothesis is false; Lojban neither grants its speakers the ability to think logically nor transforms them into logicians. Ontology in Lojban is no easier or more computer-legible than ontology in English. Lojban's advantage is that it is monosemous, refusing to attach multiple meanings to individual words; the bridi {ro tirxu cu mlatu} is more precise than the sentence "all tigers are cats" because {mlatu} cannot be misinterpreted as e.g. "housecat" or "small cat", nor can {ro} be read as anything other than "all" in the sense of the universal quantifier. All of the hard problems are still present; for example, there is ongoing research into how to set up discursive logic in Lojban, but no complete account yet.
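
As a toy illustration (not any particular engine, and every fact below is made up), that bridi renders as the Datalog rule mlatu(X) :- tirxu(X), which a naive bottom-up evaluator can run to a fixed point:

```python
# {ro tirxu cu mlatu} as a Datalog rule, evaluated naively (toy example).
# Real Datalog engines use semi-naive evaluation; this just iterates to a fixed point.

facts = {("tirxu", "hobbes"), ("tirxu", "shere-khan"), ("mlatu", "tom")}

def step(facts):
    # mlatu(X) :- tirxu(X).   -- "all tigers are cats"
    derived = {("mlatu", x) for (rel, x) in facts if rel == "tirxu"}
    return facts | derived

while True:
    new = step(facts)
    if new == facts:             # fixed point reached
        break
    facts = new

assert ("mlatu", "hobbes") in facts
```

Note that the rule only captures the quantifier structure; everything hard (where the tirxu/mlatu facts come from, what they mean) is still outside the engine.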

Previously, on Lobsters and previously, on Awful, we discussed Cyc, which is the largest such attempt to build a big database of s-expressions and apply logic to them; not only can it not keep up with LLMs, but it was kept proprietary and mostly found use in the war on terror as a basic version of the sort of tool that Palantir now sells. Maybe it's the sort of thing that must be built with thought and care, in which case Cyc shows us that it will take decades to build.

I suspect some people will lose respect for me for "taking a bot seriously" in this way.

Nah, it's temporary. All you have to do is recognize that the bot is, in fact, a generative stochastic token machine; it's a bag of words, as discussed previously, on Lobsters and previously, on Awful. Then spend a few evenings in cult deprogramming, which can be done solo or with a licensed therapist; this is mostly a matter of memetic inventory, checking where each of your core beliefs came from and how you justify and ground them.

Sorry if my diction's not great. I am currently hungry and low on blood sugar because I haven't eaten breakfast, a System 1 problem which can't affect LLMs.

quad

This article is about developing a classical symbolic AI (tag: ai); it's completely unrelated to LLMs or vibecoding! If anything, it should be tagged logiclangs for its use of Datalog.