Prompting 101: Show, don’t tell

43 points by Gabriella439


Corbin

Fairly good advice. It's also an example of good writing advice in general. Write for the prompt you want: if you want an expert sysadmin, give them part of a Linux kernel log and systemd output that is realistic but tailored to the scenario at hand. If you want them to ask for permission before doing things, don't put "username: root" in there, and pick usernames that are likely to match the behaviors you want; you can encourage them to use sudo or doas instead by giving an example of proper usage. If you want them to think they work for an elite corporation, display a banner that matches the ethical and ideological values such a corporation would embody, and make sure the details are likely to immerse the reader.
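
For example, a rough Python sketch of that kind of framing — the hostname, username, banner text, and task below are all invented placeholders, not anything from a real system:

    # Show, don't tell: the context itself implies a careful, non-root sysadmin.
    BANNER = (
        "Welcome to build-07.internal.example.com (Debian GNU/Linux)\n"
        " * Unauthorized access is prohibited.\n"
        " * Destructive changes require a change ticket.\n"
    )

    # One example exchange demonstrating the wanted behavior: a non-root
    # user who asks first and reaches for sudo rather than root.
    EXAMPLE = (
        "deploy@build-07:~$ # nginx is down; may I restart it? (yes/no)\n"
        "yes\n"
        "deploy@build-07:~$ sudo systemctl restart nginx\n"
    )

    def make_context(task: str) -> str:
        """Assemble the text the model will complete from."""
        return f"{BANNER}\n{EXAMPLE}# New task: {task}\ndeploy@build-07:~$ "

    print(make_context("disk usage on /var is at 95%"))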

You can one-shot the example behavior (conversational, short lines, light punctuation) by prompting for e.g. an IRC conversation on any non-RL'd local model of the past three years. This means that the model's context starts with the IRC client header, followed by a fake OFTC/Freenode/etc. banner, a fake /join, a fake /topic, and synthesized timestamps before every message. After a fake /names, the model can be grammatically restricted to using only the real usernames from that list, and a hard token cutoff can interrupt run-on statements just like real IRC. We only enter the room with whatever we choose to bring into it.
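
A rough sketch of that setup using llama-cpp-python (my choice of client; check your version's grammar API) — the network, channel, nicks, and timestamps are made up, and the GBNF grammar plus max_tokens stand in for the grammatical restriction and hard token cutoff:

    from llama_cpp import Llama, LlamaGrammar

    NICKS = ["corbin", "hwayne", "spc476"]

    # Fake IRC context: banner, /join, /topic, /names, timestamped lines.
    context = "\n".join([
        "*** Connected to irc.example.net",
        "*** Welcome to ExampleNet, the example discussion network",
        "*** corbin has joined #prompting",
        "*** Topic for #prompting: show, don't tell",
        "*** Users on #prompting: " + " ".join(NICKS),
        "[21:03] <hwayne> anyone tried showing the model an example first?",
        "",  # the model continues from here
    ])

    # Only allow lines of the form "[HH:MM] <nick> text" with a nick from /names.
    grammar_text = (
        'root ::= message+\n'
        'message ::= "[" [0-9] [0-9] ":" [0-9] [0-9] "] <" nick "> " [^\\n]+ "\\n"\n'
        'nick ::= ' + " | ".join('"%s"' % n for n in NICKS) + "\n"
    )

    llm = Llama(model_path="base-model.gguf")  # any non-RL'd base model
    out = llm(
        context,
        grammar=LlamaGrammar.from_string(grammar_text),
        max_tokens=48,   # hard cutoff, like a real IRC line limit
        stop=["\n\n"],
    )
    print(out["choices"][0]["text"])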

hwayne

Similarly, if you want an LLM to follow a certain output format, you get way more accurate results if you show it an example input/output pair first.
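
For instance, a minimal few-shot sketch in Python, with an invented task and field names for illustration:

    # One worked input/output pair shown before the real input.
    EXAMPLE_IN = "Order #1042: 3 x widget at $4.50 each"
    EXAMPLE_OUT = '{"order_id": 1042, "item": "widget", "quantity": 3, "unit_price": 4.50}'

    def few_shot_prompt(new_input: str) -> str:
        return (
            "Convert each order line to JSON.\n\n"
            f"Input: {EXAMPLE_IN}\nOutput: {EXAMPLE_OUT}\n\n"
            f"Input: {new_input}\nOutput: "
        )

    print(few_shot_prompt("Order #1043: 2 x gadget at $7.25 each"))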

spc476

I can't shake the feeling that vibecoding (or even using LLMs when programming) is more akin to magic than engineering. Or maybe alchemy. A lot of experimenting with different methods, often ritualistic in nature, to get the wanted outcome, where it's very hard to quantify which method actually works. "Oh, be mean to the AI to get better results." "No, it'll just turn passive-aggressive on you." "Easier-to-read languages work better with AI." "No, those with fewer tokens work better." "It's neither!" "You can get AIs to code in any language." "It works better with a popular language as there are more examples."

It's magic!

This is not engineering.