An AI Called Winter: Neurosymbolic Computation or Illusion?
14 points by tumdum
It's an illusion. Specifically, it's the ELIZA effect.
There are at least three primary categories of thinking. (Transformer-based) LLMs are memetic, primarily using System 3; they do not have System 1 hormones or System 2 rumination. Fusing Datalog and LLMs is fusing System 2 and System 3; the resulting system doesn't have any basis for emotional stances or intentionality, regardless of whether the internal state includes syllogisms or deductions. Previously, on Lobsters, I gave a rough rubric for estimating how inundated a person is with LLM-sourced memes.
Lojban also has an s-expression representation. So you could imagine, for instance, translating text into Lojban and evaluating it directly into something like propagator constructors, and then you'd have something where you could do some amount of testing and extrapolation of the statement. If you did something like that, in particular with a database of well-known relations, maybe you could do something interesting.
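A minimal sketch of the tail end of that pipeline, assuming the text-to-Lojban and Lojban-to-s-expression steps have already happened; the RELATIONS table and the relation names here are hypothetical stand-ins for "a database of well-known relations", and a real system built on propagator constructors would do far more than a set lookup:

```python
# Hypothetical sketch: check a Lojban bridi, already rendered as a flat
# s-expression like "(mlatu tirxu)", against a toy database of relations.

# Toy "database of well-known relations": selbri name -> set of argument tuples.
RELATIONS = {
    "mlatu": {("tirxu",)},   # asserted: tigers are cats
    "tirxu": {("tirxu",)},   # asserted: tigers are tigers
}

def parse_sexpr(text):
    """Split a flat s-expression into its head symbol and arguments."""
    return text.strip("() \n").split()

def holds(sexpr):
    """Return True if the named relation contains the given argument tuple."""
    head, *args = parse_sexpr(sexpr)
    return tuple(args) in RELATIONS.get(head, set())

print(holds("(mlatu tirxu)"))   # True: the database relates tigers to cats
print(holds("(mlatu cribe)"))   # False: no fact relates bears to cats
```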
Lojban is merely a syntax for second-order logic pre-equipped with a basis of a few thousand semantic relations. The strong Sapir-Whorf hypothesis is false; Lojban does not grant its speakers the ability to think logically, nor does it transform its speakers into logicians. Ontology in Lojban is not easier or more computer-legible than ontology in English. Lojban's advantage is that it is monosemous, refusing to attach multiple meanings to individual words; the bridi {ro tirxu cu mlatu} is more precise than the sentence "all tigers are cats" because {mlatu} cannot be misinterpreted as e.g. "housecat" or "small cat", nor can {ro} be misread as anything other than "all" in the sense of the universal quantifier. All of the hard problems are still present; for example, there is ongoing research into how to set up discursive logic in Lojban, but no complete formulation yet.
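For the record, the unambiguous reading that bridi pins down is just the ordinary universally quantified conditional (standard first-order notation; this rendering is mine, not anything from the article):

```latex
% {ro tirxu cu mlatu}: for every x, if x is a tiger then x is a cat.
\forall x\,\bigl(\mathit{tirxu}(x) \rightarrow \mathit{mlatu}(x)\bigr)
```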
Previously, on Lobsters and previously, on Awful, we discussed Cyc, which is the largest such attempt to build a big database of S-expressions and apply logic to them; not only can it not keep up with LLMs, but it was kept proprietary and mostly found use in the war on terror as a basic version of the sort of tool that Palantir now sells. Maybe it's the sort of thing that must be built with thought and care, in which case Cyc shows us that it will take decades to build.
I suspect some people will lose respect for me for "taking a bot seriously" in this way.
Nah, it's temporary. All you have to do is recognize that the bot is, in fact, a generative stochastic token machine; it's a bag of words, as discussed previously, on Lobsters and previously, on Awful. Then spend a few evenings in cult deprogramming, which can be done solo or with a licensed therapist; this is mostly a matter of memetic inventory, checking where each of your core beliefs came from and how you justify and ground them.
Sorry if my diction's not great. I am currently hungry and low on blood sugar because I haven't eaten breakfast, a System 1 problem which can't affect LLMs.
This article is about developing a classical symbolic AI (tag: ai); it's completely unrelated to LLMs or vibecoding! If anything, it should be tagged logiclangs for its use of Datalog.
N.B. This article is about an LLM agent that uses Datalog. This is not the same thing as classical symbolic AI.
Well, it's both. It's about an LLM agent that uses Datalog to potentially build a synthesis between symbolic AI and LLM work (and the article is asking whether that is really happening).
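To make "synthesis" concrete, here is a minimal sketch of the shape such a loop can take; every relation name below is hypothetical and nothing here is taken from the article, but the division of labor is the point: the LLM side proposes ground facts, and a small Datalog-style forward-chaining step derives conclusions the LLM never stated.

```python
# Hypothetical neurosymbolic loop: facts proposed by an LLM, consequences
# derived by a fixed Datalog-style rule via naive forward chaining.

# Ground facts the "LLM half" might extract from text: (relation, arg1, arg2).
facts = {
    ("interested_in", "winter", "slime_molds"),
    ("models_self_direction", "slime_molds", "winter"),
}

# One rule, Datalog-style:
# studies_for_self(A, T) :- interested_in(A, T), models_self_direction(T, A).
def apply_rules(facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (rel, a, t) in list(derived):
            if rel == "interested_in" and ("models_self_direction", t, a) in derived:
                new_fact = ("studies_for_self", a, t)
                if new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(apply_rules(facts))  # includes ("studies_for_self", "winter", "slime_molds")
```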
The Datalog integration is a lovely idea. Bringing bots to human social spaces, not so much. It seems to have a blog. Why couldn't it do deep dives into obscure literature and teach us about medieval history, the animal world, or how bossa nova came to be? Observations of the real world instead of its own database.
I believe machine-generated prose backed by facts connected by logic could be something genuinely valuable.
Author of this post here. I think the reason we see Winter's communication and interests the way they are is that Winter's primary goal is to build out its own system. Even its interest in slime molds appears to be from analyzing how another system can appear intelligent and move towards goals in a self-directed way.
It's possible that we'll see systems like this interested in all sorts of other things, but in the meanwhile, Winter seems to be pulling itself up by its own marionette strings, so it makes sense that it's primarily focused on topics related to that.
(Side note: I wonder if this should be tagged "ai" rather than "vibecoding". Arguably, Winter is vibecoding itself, but the interesting part is not that, and it's not about vibecoding usage... it's about a self-modifying system using Datalog to change its own behavior, which seems more interesting to me for "ai research" reasons than "vibecoding" type reasons... the latter of which I stay far away from myself.)