Prompt Engineering Is Not. Engineering, That Is
6 points by ashwinsundar
On one hand, yes, most of the advice worth giving to chatbot users is about grammar, conventions, formatting, framing, and other basics of writing. Indeed, creative writing is the foundation of coercing a language model into simulating the desired fictional universe. On the other hand, if the author had spent a few minutes reading about prompt engineering, they would have learned about soft prompting, retrieval-augmented generation, and other common techniques for generating prompts from code. On the gripping hand, this article doesn't taste like it was written by a human, and the article's own grammar, conventions, formatting, and framing suggest that it was not authored by somebody with a strong grasp of creative writing.
Well, just for fun, I ran it through an AI, which seemed to detect that it was mostly AI, though that could have been hallucinated. When I see articles of this length, I simply think: either a really long-winded author, or someone who's padded 30% of their own writing with 70% AI. It's funny that I have to run these things through AI because they're so damn long! So the thing that tells me not to use it is the thing that makes me use it.
I've never understood asking the LLM if something was written by an LLM because it seems to always say it was likely written by AI (...ignoring the joke about everything actually being written by AI).
I asked an LLM if your comment was written by AI and it gave back this conclusion:
Conclusion: While the comment could plausibly be human, the meta-awareness, stylistic fluency, rhythmic phrasing, and reliance on AI discourse tropes strongly suggest LLM involvement. The tone feels too polished and thematically consistent with AI-generated commentary on AI content.
Thus, 70–80% probability that the comment was written in whole or in part by an LLM.
Well yeah, no detector here, but the prose contains a lot of "LLMisms", and as a result I gave up reading fairly quickly.
Despite the title, this article concludes with a cromulent definition of what "prompt engineering" should mean:
A prompt engineer designs, tests, versions, maintains, and validates specifications that produce reliable, reproducible, auditable outputs across model versions and deployment contexts. The specifications are precise. The testing is automated. The outputs are measured. The lifecycle is managed. The judgments are auditable. That’s engineering.
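That lifecycle can be made concrete with a minimal sketch: a versioned prompt specification plus an automated check that the output is parseable and schema-conformant. Everything here is illustrative — `PROMPT_SPEC`, `call_model`, and the invoice task are made-up names, and `call_model` is a stub standing in for a real LLM API call.

```python
# Minimal sketch of "prompt engineering as engineering": a versioned
# prompt spec with an automated, repeatable validation of model output.
import json

PROMPT_SPEC = {
    "version": "1.2.0",  # the spec is versioned so regressions are traceable
    "template": (
        "Extract the invoice total from the text below.\n"
        'Respond with JSON: {{"total": <number>}}\n\nText: {text}'
    ),
}

def call_model(prompt: str) -> str:
    # Hypothetical placeholder: a real deployment would call an LLM API here.
    return '{"total": 42.50}'

def run_case(text: str) -> float:
    prompt = PROMPT_SPEC["template"].format(text=text)
    raw = call_model(prompt)
    data = json.loads(raw)  # check 1: output must be parseable JSON
    assert isinstance(data["total"], (int, float)), "schema check failed"
    return data["total"]

print(PROMPT_SPEC["version"], run_case("Invoice #7: total due $42.50"))
```

In a real pipeline the cases would live in a test suite run against every model version and prompt revision, which is what makes the outputs "measured" and the lifecycle "managed" in the quoted sense.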
When I write software using AI, prompts don’t seem to matter too much so I don’t worry about it. For example, I’ll ask the AI to write a design doc (based on a template) using rather casual and often vague language about the feature I want, then read it, clarify what I meant, and ask for improvements. I also ask the AI to review it, ask questions, and make suggestions. After a few rounds of this I’ll ask it to start implementing phase 1. It’s entirely non-rigorous, but the artifact matters more than how I got there.
The design doc could be considered a prompt for the code, but I don’t think of it as prompt engineering. I think of it as writing a design doc.
Lots of things are more art than science but are still engineering. Medieval construction, for example: clearly the builders didn't have the kind of mathematical models we have, so they had to over-engineer and solve problems using techniques learned from other projects.
We’re at the stage where knowledge of how to write prompts is not well diffused and has not even been fully accumulated.
For example, how to trick an LLM into something like thinking by checking the consistency of outputs generated in different formats, or how to handle the instability of LLM responses.