llama-conductor is a router + memory store + RAG harness to force models to behave like predictable components

3 points by Yogthos


Corbin

It's RAG and prompt engineering. It cannot force models to behave in certain ways. The author has been spamming Lemmy with their LLM-derived rants about how great the tool is, e.g. here, but I concluded a few years ago that this approach fundamentally can't prevent confabulations. In previous Lobsters discussions, we covered how confabulation arises from the epistemic constraints that are structurally built into modeling language.