"AI" is bad UX
6 points by kevinc
I have a strong impression that the author paints the rest of the technology somewhat brighter than it deserves.
I do agree with the accusations, just not with their novelty; although I also agree that any kind of societal learning has been successfully prevented.
When personal computers or smartphones or the internet hit the business world, it was mostly quite clear how you might want to use them.
Wasn't the entire pre-dotcom-bust era a long story of how it was very much not obvious how to use the web?
And I wouldn't say smartphones did any better.
For these people, building elaborate tooling and controlling for unexpected behavior are default modes of interaction, and no interface metaphor is natural enough to disabuse them of the notion that this is how computers must be treated: adversarially, with constant suspicion, and armored with plans and backup plans and alarms to be triggered when things go awry.
This is not specific to how computers should be treated. This is how anything we build that has any importance should be treated: with reality checks and backup plans. Half the point of double-entry accounting is auditability! Quality control is a very large part of manufacturing!
For the executives, who are in the business of delegating and assigning work product, this sense is untroubled by complexity in implementation, and so they task those they manage with "figuring out" how to use it most effectively. For those who are forced to try and use it to produce quality work, the vision of infinite capability quickly turns into a mirage, dissolving into endlessly frustrating and inexplicable failures amid massively harmful side effects.
If only AI were the first time we have had this problem… If only we had implemented any safety measures against this exact failure mode after the previous thousands of times…
Yes, completely agree. In addition to being a communication or computation medium, this stuff is made to reply as, or pose as, a human, by saying "I" and so on; the affordances of this technology are therefore made to require interpretation and theory of mind rather than just an instruction manual.
I think Clippy knew how to refer to itself in the first person when offering walkthroughs?
But regardless of whether this abuse of user psychology has merely been underused until now, I think it is pretty typical for attempts at intuitiveness to be somewhat smoother in the 50% case and then to require a bit of reverse engineering in trickier cases, as compared to mindless domain-driven presentations of «whatever, these are the things I track, these are the operations you are allowed to do, put this jigsaw puzzle together however you want», where I have slight friction all the time but it is always clear what my options are and what is going on.
There are nits I could pick about how the article divides the population too neatly into opinion categories, but I liked its framing of the natural language interface problem around affordances and the expectations they set. I learned some new words, too.
As an aside, the design tag's description pinpoints only visual design, but it was the closest match I found for HCI or user interfaces.
AI UX is awesome. I have a mostly offline search engine in the terminal. Coding agents, more than anything else, have increased my terminal usage to the point where I don't always have my browser open now to search for things.
I understand the article is suggesting that the UX is bad because you need to be a programmer to get its full capabilities, but I don't care; this was posted on Lobsters and I'm treating it like any other tech article.
Therefore, I 100% disagree with this article.
My takeaway is that you need to be both interested and qualified (be it in programming or some kind of engineering or natural science; it's a viewpoint issue) to get a better UX, one that is not what gets counted in all the megaDAUs. Like you are doing.
Not a unique situation, of course.