You can't design software you don't work on
30 points by facundoolano
So this post just gives generic comments on "generic software design advice" without going into anything concrete? That's the antithesis of this post's own theme.
Also, the overuse of bold text really hurts readability (I don't want to make accusations without stronger evidence, but I have my guesses)
On the bold comment, I used to sparingly use a bold sentence/phrase or two for emphasis (see this 2021 article for an example). I stopped because I don't want others to get LLM "hints" from my writing. I'm sure authors who use emoticons are in the same boat. It's interesting how LLMs copying human writing has affected the way humans write.
For a single data point: I loved to use em dashes before 2024 but stopped using them. It's a shame
As for me, LLMs can pry em-dashes from my cold dead hands
Same. To continue using them without fear of being mistaken for LLM writing, I've begun including an "AI disclosure" with some of my work. It's a short paragraph that explains that I did not use any LLMs, with a link or two to some of my thoughts on them. As a reader, I'd like to know if authors used LLMs for their writing, so as a writer, I might as well notify them when I don't.
FYI: https://notbyai.fyi/
I still am not sure about the least intrusive (and grandstanding) way of disclaiming use of AI or lack thereof, but I support something akin to this. I say "least intrusive" because as a reader I don't know that I want to see these opinions front-and-center, but maybe it's an acceptable trade to avoid me second-guessing the authorship of every other paragraph.
Others in this thread make it seem like "LLM sleuthing" is some sort of conscious choice that people train for, which I disagree with. It's more of a feeling that pops into my head after I've read enough of a post. It's usually not strong enough to be certain, but not weak enough to ignore. So I have to sit with it uncomfortably if I want to read on.
I don't think very many people, seeing the "not by ai" logo, would be pleased to discover that it actually means "up to 10% by genAI", any more than they'd be pleased to discover that a "lead free" notice on a product means "up to 10% lead". notbyai.fyi seems like a transparently misleading grift, and not at all representative of responsible LLM usage disclosure.
I was also surprised to discover a "pricing" page on that site. Maybe I misunderstood, and it's more than just images you can add to your work. Either way, it's hard to imagine anything related to those images that justifies a recurring payment and subscription service
This is a good point. While I'm past the edit window on my comment, I don't think I'll use their badge, nor will I continue to endorse it.
I still support the general principle of disclaiming AI use or absence. I find comparing an AI content of 10% to a lead content of 10% to be hyperbolic, though I won't go as far as to say I think a 10% AI content is innocuous. If AI-assisted content continues to increase in production, I'm not sure that I would set my cutoff at 0% AI-assisted, especially if it means not reading any new articles anymore.
That they have a pricing page (as pointed out in your child comment) is also disappointing to me. Perhaps someone more familiar with the movement could explain.
You can still use en–dashes
I have never liked the way em dashes look. Fortunately, some typographic style guides prefer space/en-dash/space instead of no-space/em-dash/no-space.
They look fine for numerical ranges, where they belong! But I agree that a dash used as punctuation between words is heavier than a comma and thus deserves spaces on at least one side (which leaves me on the space/en-dash/space side of the divide).
(Yes I xmodmap both em- and en-dashes to specific key combinations, even though I also remember how to enter them via Compose)
Allowing people with no critical thinking to affect your writing style is not a great move IMO.
It's not a dichotomy though. No one uses 100% critical thinking all the time, and we all use heuristics for convenience. And maybe our tastes shift too during the process.
Written communication is somewhat famous for involving readers as well as writers.
Refusing to modify your communication style to take the recipient's expectations into account is not a great move.
This is a horrible way to talk about people and it's also just you being mean, because you know it's objectively untrue that everyone opposed to LLM "hints" in writing lacks critical thinking abilities.
I was referring to everyone who assumes a piece written wholly by a human is somehow LLM-generated because they think they've developed some sixth sense for telling when something is generated.
Yeah, you dismissed the idea that there are markers that suggest an article is more likely to be LLM-generated than not by baselessly calling the people aware of those markers stupid.
There are no such markers, and harassing human authors who use punctuation or bolding you don't like is hardly a sign of good faith.
> There are no such markers
You are either being wilfully disingenuous or you have not used generative AI/researched its usage well enough to have an informed opinion on the matter.
Do LLMs write like humans? Variation in grammatical and rhetorical styles
Alternatively, just ask anyone who frequently uses LLMs to generate text.
It seems to me that there are two different things you could mean by "generic software design advice" here.
The author is saying that you cannot give good advice about what the design of a piece of software should be from the outside--how it should be implemented, what its module structure should look like, how it should use queues, concurrency, etc. His argument is that design is path-dependent--what the software looks like today will constrain how it can evolve, and those considerations tend to overwhelm any generic advice about good software design.[0]
Now, that principle is something that applies to all big established software projects. So in that sense, it's generic. But it's not a generic piece of advice that tells people "these are the tools and patterns that you need to design your software."
There is no contradiction, it's just maybe a bit unfortunate that he used the word "generic" which lends itself to this confusion.
[0] Btw: I think this is a little overstated, though I grant the author that it's partially true.
This is what the Twelve-Factor App was getting at. It defines what software should do by its inputs and outputs only - use env vars for config, write logs to stdout, bind to a port, and scale out with processes rather than threads. It specifically does NOT specify any internal details of your project, like what language to use or how to structure your code. It only really applies to enterprise software backends, though.
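For a rough sense of what that looks like in practice, here's a minimal sketch (hypothetical names and defaults, Python standard library only): config comes from the environment, logs go to stdout as an event stream, and the app binds to whatever port it's told about.

    import logging
    import os
    import sys
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Config comes from the environment, not from files baked into the build.
    PORT = int(os.environ.get("PORT", "8080"))          # hypothetical default
    DATABASE_URL = os.environ.get("DATABASE_URL", "")   # backing service as an attached resource

    # Logs are written to stdout; the platform decides where they end up.
    logging.basicConfig(stream=sys.stdout, level=logging.INFO)
    log = logging.getLogger("app")

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            log.info("GET %s", self.path)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok\n")

    if __name__ == "__main__":
        # One stateless process bound to one port; scale out by running more processes.
        log.info("listening on port %d", PORT)
        HTTPServer(("", PORT), Handler).serve_forever()

Everything inside the handler is the project's own business; the twelve factors only constrain the edges.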
I got the impression that 12 Factor was an attempt to make the peculiarities of the Heroku platform seem less weird by elevating them into a universal style guide.
The design of the Midas Package Manager (MPM) at Google follows similar principles. The common thread is deploying container-like abstractions at scale with basic concepts like versioning, logging, and scaling.
To me, the 12 factor app sounds like the kind of generic design advice the author is cautioning against. Sure, it might serve some purpose and be good in a greenfield project. It might also be the direction an existing app should evolve in. But (as I understand his argument), you can't really give that advice without knowing the details of the existing project--perhaps there are issues that make restructuring your app to be a 12-factor app extremely painful, and those reasons outweigh the gains.
I'm not a huge fan of 12 factor, but it is definitely not wrong - so rather than not thinking about it at all, use this.
I guess at some point it just becomes a game of "is this still a best practice, does my tech stack match this, does it have different best practices, or do I have a good reason to not use best practices".
Nothing in my post says I wouldn't think about it (nor anything in the OP's post, as I understand it). Just that whatever value it has must be judged in the context of the constraints of a particular application.
I've worked on an application that abused the session so much that just making it fully stateless would be a multi-year project for multiple developers. In that kind of circumstance, you have to look at any ordinary design rules and say "What's achievable? In what order?"
What are "generic software design advice" and what they're bad?
Any sort of advice about software that is presented without context.
Suppose you build a system and somewhere deep inside it is an important core function. Maybe it's the engine of a crucial state machine, or a carefully optimized critical loop or something, and it's not a huge piece of code, but it's maybe a few dozen lines. And then a software architect who's never worked on the project comes along and says "the rules of good software forbid having any function over ten lines long, you must rewrite this function to follow the rules of good software".
But maybe breaking it up will cause a big performance slowdown. Maybe breaking it up will cause a maintenance issue by splitting logic that all belongs together. Just saying "the rule is ten lines or less" and ruthlessly enforcing it may hurt the project. That is what this article is trying to tell you: there are lots of popular rules and principles for "good" software, but you have to know a particular piece of software well enough to decide when and how to apply those rules and principles. Someone who doesn't work on your project doesn't have the context to do that.
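To make that concrete, here's a hypothetical sketch (invented names, Python just for illustration): a small state-machine step function that runs past ten lines, but where all the transition logic belongs together.

    from enum import Enum

    class State(Enum):
        IDLE = 1
        READING = 2
        DONE = 3
        ERROR = 4

    def step(state, event):
        # Hypothetical core of a parser's state machine: one cohesive function,
        # longer than ten lines, where every transition is visible at a glance.
        if state is State.IDLE:
            return State.READING if event == "start" else State.IDLE
        if state is State.READING:
            if event == "data":
                return State.READING
            if event == "end":
                return State.DONE
            return State.ERROR
        if state is State.DONE:
            return State.DONE
        return State.ERROR

Splitting each branch into its own helper would satisfy a line-count rule, but it would scatter the transition table across the file - a trade-off only someone who works on the code can weigh.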
Yeah, I get the idea. In other words: YMMV and no silver bullet.
Just wanted to ask what the "generic software design advice" that the author mentions actually is, because there's no strict rule about that. Even the 12 factor apps mentioned above are not that generic, imo.