Our interfaces have lost their senses
47 points by toastal
Why are most web UIs silent?
I don’t know, but god help me if websites start making noises all over the place. I should be able to use my computer to play music or communicate with people without random websites being able to spritz noise pollution into my life.
Navigating this page was neither silent nor lacking in tactility for me.
The mechanical keyboard offered touch feedback, both tactile and aural.
I made annoyed noises at the genAI slop.
It did not, however, smell or taste of anything, but I don’t think I want that from a website.
Reduced to text under glass screens.
Maybe that’s part of their problem.
I had a website that played MIDI music in the background and had some other sounds and animated GIFs with flames. It was quite a long time ago.
I prefer web UIs to be silent because I spend a decent amount of time riding public transit and usually don’t wear headphones while doing so. My phone is permanently on “silent mode” specifically for this reason. I just want to read articles like I’d read a newspaper, without bothering anyone else.
The main difference between HTML5 and WebAudio is that the latter respects silent mode, e.g. I’ve added some simple sfx to potato.horse and lines.potato.horse → these work only if your phone isn’t muted. (the sounds themselves are just clicks)
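For anyone curious what “some simple sfx” can amount to, a WebAudio click can be as small as an oscillator with a fast gain envelope. This is only a sketch of the general approach, not how potato.horse actually implements it; the waveform, frequency, and timing values below are made up:

```ts
// Minimal WebAudio "click": a short, high square-wave blip with a fast decay.
// All of the specific numbers here are illustrative guesses.
const ctx = new AudioContext();

function playClick(): void {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.type = "square";
  osc.frequency.value = 2000; // a short, high blip reads as a "click"
  gain.gain.setValueAtTime(0.2, ctx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 0.03);
  osc.connect(gain).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.03);
}

// Browsers only allow audio to start after a user gesture.
document.addEventListener("click", () => {
  if (ctx.state === "suspended") void ctx.resume();
  playClick();
});
```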
I feel like one of the reasons is that the audio APIs have unusable inputs for volume. Most of them let you specify a percentage (which is actually a percentage of a percentage), and that has very little meaning once it’s pushed through the hardware. If the APIs allowed specifying an absolute measurement, and my hardware then tried to replicate it subject to any additional constraints I place on it, there are some websites I would like to have sound on. But even for those websites, if the volume is too high or too low, the need to adjust it after being annoyed takes the joy away.
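To make the “percentage of a percentage” point concrete: the only knob a page gets is a unitless GainNode value, which the OS volume and the hardware then scale however they like, so the page has no way to target an absolute loudness. A rough sketch; the dbToGain helper is just illustrative math, not part of any web API:

```ts
const ctx = new AudioContext();
const gain = ctx.createGain();
gain.connect(ctx.destination);

// "50% volume": but 50% of whatever the OS and hardware decide,
// not any absolute level the page can reason about.
gain.gain.value = 0.5;

// Decibels relative to full scale -> linear gain factor. The actual output
// loudness still depends entirely on the listener's device and settings.
function dbToGain(db: number): number {
  return Math.pow(10, db / 20);
}

gain.gain.value = dbToGain(-12); // roughly "12 dB quieter than full scale"
```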
Desktop applications generally don’t have access to absolute measurements either, do they? I wrote a toy music player a while ago and I was at the mercy of the system volume as well as the per-application override.
99.9% agree with you on this one but there are some valid use cases for that, say with smaller audio effects like on mmm.page (also completely optional).
When reading this page with a screen reader, I noticed that some of the paragraphs, starting with the “Then came terminals” paragraph, are repeated several times. Looking at the HTML, I see that the text is repeated several times in spans with different styling. What effect was the author trying to create here?
I admit that the author’s seeming disregard or ignorance of proper semantic markup and accessibility makes me less receptive to the article’s content, since the article is itself about UI.
It looks like the other versions all have differing levels of blur applied to make a drop-shadow effect; very subtle, presumably to make the text easier to read over the section’s background image, and nothing that couldn’t be done with CSS alone.
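For what it’s worth, a single text-shadow is the kind of CSS-only alternative being alluded to. A minimal sketch, assuming a plain soft shadow is all that’s needed; the .hero-text selector and the shadow values are hypothetical:

```ts
// Replace the stacked, blurred duplicate <span>s with one text-shadow per element.
// Selector and shadow parameters are made up for illustration.
document.querySelectorAll<HTMLElement>(".hero-text").forEach((el) => {
  el.style.textShadow = "0 1px 6px rgba(0, 0, 0, 0.6)";
});
```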
On Firefox it also failed to render as a drop shadow and instead pushed a large chunk of text behind some of the images. Forgive me if I don’t take UI advice from an author that apparently doesn’t test their own UI offerings very thoroughly ;)
Although I did not notice the specific points you and others are raising here, I did object to the AI slop and think “the UI of this article complaining about UI is really bad.”
This bugs me too, but ‘everyone’ acts like HTML & web semantics are both trivial & not worth the effort. Rarely is anyone paid to care; quite the opposite, I have been told to care less since screen readers are a minority. 😔
The author complains about computer interfaces that lack tactility and turn the multisensory, creative, nuanced act of painting into the dull drudgery of typing.
Given a chance to live by this professed ethos, however (demonstrating craft and working with her hands to illustrate her ideas), the author instead decides to encrust the essay in slop. These illustrations, if we are to be extremely generous, are not the result of being deeply interested in or aware of the mediums of yarn or feltwork, of the tactility of these materials, or the way they shape an artist’s expression.
The aesthetic is just a thin coat of paint over some hastily typed words, digested and reconstituted through a compressed database of stolen artwork. It conveys the briefest impression of warmth and depth by wearing the skin of tactile media, but there’s nothing underneath. It’s a very explicit choice not to care about any of the things the author claims have been lost.
The thematic disconnect is nauseating.
I read your comment before I looked at the piece. IDK, yes, there is totally a 6-fingered illustration, but they at least had a consistent theme and feel from image to image. It’s better than just “I made a blog post about Postgres so let’s put ‘database speed elephant’ into a slop generator and put that at the top of the post.” Those really make me upset. This post was probably infeasible as a time commitment for an individual without a financial motivation unless they used AI to do the art (and based on the sloppiness of the HTML, they probably had AI write the HTML too). Making little felt miniatures like that would take someone who already knows how to do it days for each one. The single blog post would probably be a six-month or year-long project to plan, make, photograph, and then produce. I would be happy to see something like that, but I don’t blame people for not being able to make that kind of commitment.
I’ve started making videos lately, and a thought that occurs to me is that professional video has a huge blow-up between the length of the video and the time it takes to make it. A 90-minute movie could take a year of wall-clock time and involve several hundred people working 40 hours a week! But then the time gets captured back on the other end, when millions of people watch those 90 minutes.
AI slop definitely changes the balance of that time equation, but I’m not totally willing to say that a lower-effort post has no value compared to the hypothetical where they spent a year felting. Not everything can be high effort. And sometimes you need to do a low-effort thing first to even know if it’s worth doing a high-effort thing.
If she had made the illustrations for this post by hand, it would look different. Both in the details (because the slop here is not by any means a realistic depiction of what felted miniature dioramas would look like) and in the broad strokes.
If she felt very strongly about this specific presentation of her ideas, perhaps she would’ve spent a very long time crafting the illustrations, and at the same time focusing and refining her writing. Perhaps this time and effort would’ve resulted in a more effective and actionable thesis.
Perhaps she would’ve scaled her aesthetic ambitions down to something that fit in the time and effort she was willing to spend: fewer, simpler illustrations, chosen more carefully, using a medium she found more familiar. This, too, would’ve been an opportunity to reflect on tactile media and the time and effort that must be spent thoughtfully to complete a project.
She did not make any of these choices, but instead chose a rapid-prototyping approach drenched in hollow visual grandiosity that is in complete thematic opposition to her stated values and actively avoids meditation upon the implications of what she claims to want.
It’s not merely about putting in less effort, it’s that the choice of approach undermines the message.
That feels too harsh to me, but I agree that the message and the method mismatch. If you want a tactile world, etc., you probably don’t want a world where everything is made by just typing “make the thing” into a chatbox.
woof. i came here to find this comment, and here it was.
i am wondering if this is the future, because I admit the AI art is beautiful. I looked at all of the art lovingly, and then at none of the words, because: is this really the future?
Scrolling through this was weird, with my browser jumping around randomly. Is that the new UI experience? It’s annoying.
I picture MySpace, where you would be assaulted by sound and imagery that a poor designer came up with. I think there is a reason we don’t do this, and it’s that we can filter textual information fairly well, but other senses are tougher to ignore.
I happen to miss MySpace & GeoCities. It wouldn’t make sense for everything, but there should still be room for the ‘weird’ & for less seriousness on a lot of the web.
(Copy-pasting my HN response to this article)
In The Great Flattening section of the post the author literally argues that the way we interacted with computers back in the 50s-70s was better because it was more of a full-body experience. That’s a silly argument to make. As far as the status quo HCI paradigm goes, we’ve obviously made a lot of progress over the last 50 years.
However, I think the post is striking a chord because it’s pointing to a deeper truth: after 70 years, we are still only scratching the surface of all the ways that humans and computers can potentially interact with each other.
Also, while reading HN comments I discovered this great post, A Brief Rant on the Future of Interaction Design. A must-read critique of touchscreens, IMO. I’m now binging on Victor’s stuff and HCI design more broadly. If you’ve got any must-read/watch/listen resources, please send them my way.
As I assume many other lobsters did, I ctrl/cmd+F’d the post and sighed when I saw no mention of Bret Victor. He’s not the end-all-be-all, but I feel like an acquaintance with his ideas is a fair prerequisite for pontificating about the world of UIs beyond everything converging on being a flat, inert glass rectangle. So, +1 for bringing him up in the comments.
I hesitate to categorize these as musts, but I’ll share a few references that come to mind. Molly Mielke’s Computers and Creativity strikes me as what the post wishes it was. Chris Granger’s Coding is not the new literacy is excellent, and I was happy to see his work show up on lobsters again a little while ago. Seymour Papert’s Mindstorms is a classic. Although I don’t want to endorse it wholesale, Ted Nelson’s Computer Lib/Dream Machines imagines a world where our relationship with computers might be qualitatively different. Some other thinkers and writers in the space would be Alan Kay, Jef Raskin, and Ben Shneiderman.
Thanks for the recommendations
Fun fact: 6 months ago I splurged a little and bought a first edition of Computer Lib / Dream Machines. Also did some research directly inspired by intertwingularity: https://technicalwriting.dev/links/intertwingularity.html
Direct link to Computers And Creativity: https://www.molly.info/cc
Good read so far and it points to lots of other resources that are definitely up my alley
Ah yes, thank you for the corrected link. Also, cool stuff with the intertwingularity, thank you for sharing!
As far as the status quo HCI paradigm goes, we’ve obviously made a lot of progress over the last 50 years.
I realise that default bias plays a role in my thinking here, but I feel like we made a lot of progress over the first 20-25 of those years, and then spent the balance making things worse again. Computer UIs now are a confused mix of simulacra orphaned from their desktop metaphor antecedents, screenshot-optimised flat layouts with zero affordances ever, and functionality hidden behind random icons wherever it’ll fit in the name of decluttering.
As for Bret Victor’s post… I have my doubts. Manipulating things by touch isn’t very abstract. Manipulating symbols isn’t very tactile. It sounds a bit quippy to ask “what does a monad feel like?” but if you think screens are inadequate for interacting with them the question should at least in principle be answerable.
I think this was by far the worst website I’ve interacted with today. It was jumping all around, half the text was white on white or on top of bright images, and some parts of the page produced a lot of lag. The points the post tries to make are all things that modern UI design already considers and uses in plenty of places (Apple devices!).
UIs shouldn’t try to flood the senses, tbh; they should focus on the important senses they need to engage, not the ones they can. Especially as someone on the autism/ADHD spectrum, I don’t find having my senses flooded pleasurable.
This article starts out with a pretty radical tone, but its actual concrete proposals are lukewarm at best. Take the computer communication proposals:
Bar the upsetting sound suggestion, all this stuff is really basic design that everyone does already? Not going to lie: the anodyne content coupled with the flamboyant style makes me suspect that a lot of the article’s content, as well as its illustrations, is AI slop.
The article is supposedly about good design but it messes with scrolling.
Can we use subtle chimes or sonification to highlight patterns?
Can we not
The article is supposedly about good design but it messes with scrolling.
Huh. I am usually the first to flame sites that break scroll, but this one seems fine? The velocity curves of elements are the same as the browser’s native. So for me it feels fine.
Can we not
I like “kinetic” UIs when they’re opt-in. When done right, it feels satisfying to perform whatever task you’re doing. And some game developers know this, and exploit it. A good 4X / turn-based game needs polished sound cues too.
While everyone’s making fun of UIs that use sound, most of us use them daily without noticing them.
Game menus use sound and haptics for every interaction. Every button press gets an ever so slight rumble and a satisfying click sound, turning every interaction into a true multi-sensual experience.
Not like Windows XP, where errors and notifications caused a disconcerting “pling”.
Sound is probably out of place in something like Excel, but why can’t I feel and hear the brush strokes on paper in drawing apps, accurately representing speed and pressure?
Game menus use sound and haptics for every interaction. Every button press gets an ever so slight rumble and a satisfying click sound, turning every interaction into a true multi-sensual experience.
Being a multi-sensory experience isn’t a means to an end for a game; it’s just what they’re trying to be. This has costs; you probably don’t have a dedicated spreadsheet controller, for example. Even if it didn’t, it’s not obvious that I’d want… anything I use a computer for, actually… to be an immersive experience. Mostly because it’s a chore I want to distract myself from, but even for fun stuff I feel like the fun is happening in my head and wouldn’t be enhanced by a sound effect every time I typed a command, or whatever. (I’m not ignoring your specific example, but I don’t draw, so I’m not sure I could say anything intelligent about it. Intuitively I feel like that feature might easily cost more than the entire rest of the app.)
As a side note, I absolutely do notice them. I don’t play a lot of games, but for the few I have played recently I can remember the sound of their menus.
Because games are not like other UIs at all; games and movies are the only UIs I can think of that I want to be immersive.
Followup thoughts:
Some games even extend this to their music. While Rift: Planes of Telara used many different layers of soundtrack in the actual game world to adjust the sound depending on the number and type of enemies, Wii Sports Resort originally invented this technique for its menus.
The same song, in multiple different versions, fading smoothly depending on which menu you were in, to give a unique feeling to each menu while making everything feel coherent nonetheless.
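On the web side, the same layered-crossfade idea can be sketched with one GainNode per version of the track, all started in sync and faded as the active menu changes. The layer names, buffers, and fade time below are assumptions for illustration, not details from either game:

```ts
const ctx = new AudioContext();

// One gain node per version of the song; every version plays in lockstep and
// only the gains change, so switching menus never restarts the music.
const layers = new Map<string, GainNode>();

function addLayer(name: string, buffer: AudioBuffer, startTime: number): void {
  const src = ctx.createBufferSource();
  const gain = ctx.createGain();
  src.buffer = buffer;
  src.loop = true;
  gain.gain.value = 0; // silent until its menu becomes active
  src.connect(gain).connect(ctx.destination);
  src.start(startTime); // identical start times keep the versions aligned
  layers.set(name, gain);
}

function switchMenu(active: string, fadeSeconds = 1.5): void {
  const now = ctx.currentTime;
  for (const [name, gain] of layers) {
    const target = name === active ? 1 : 0;
    gain.gain.cancelScheduledValues(now);
    gain.gain.setValueAtTime(gain.gain.value, now);
    gain.gain.linearRampToValueAtTime(target, now + fadeSeconds);
  }
}
```

Calling something like switchMenu("sports") then just ramps that version up and the others down over the fade window, which is what makes the transition feel coherent rather than like a track change.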
Wow! I really want to know how the images were made …
I do wonder about AI, but at least when I tried the AI generators more than a year ago, they couldn’t produce anything like this. They produced stuff with a very distinct style, more like this - https://www.youtube.com/@protocoltownhall/videos - everything there is clearly AI
Many of the images look photo-realistic, but then I feel like this page would have taken like 2 years to make, which seems implausible (?)
Oh well now I see she is working on AI, so I guess it must be AI … that was not my first thought, it seemed almost like the opposite aesthetic - https://wattenberger.com/
Or maybe it is some advanced Photoshop plugin that does style transfer of the “Yarn”, but many of those plugins work with AI techniques
(Also, the plea for interfaces that use our senses of course reminds me of https://dynamicland.org/)
i was very impressed with this page when i first saw it, but i refrained from sharing it here, as the author admits to using AI to generate these images, on bluesky: https://bsky.app/profile/patrickpietens.dev/post/3lk4qz7gg7k2n (on the first reply there)
it becomes more obvious towards the end (upon observing the closeups of each character’s face) that there is no similarity from one image to the next (the yarn pattern is a different size in each, the backgrounds are blurry and meaningless, etc.)
Well, that’s disappointing, but not surprising: nobody would put that kind of effort into a post like this, especially in the tech field.
At least I can now just ignore the post, since there wasn’t any effort to begin with.
With respect, I would disagree on the effort part. Yes, they used AI to generate a bunch of images. Throughout history we have used tools to make us more efficient. It seems to me that this is not all that different from that. They had an idea and wanted an article on it, but might not be that much of an artist OR didn’t want to spend the tens or hundreds of hours it would take to make this.
That is just my take.
I’ve not managed to get these sorts of results myself (nor have I really tried, in fairness), but there was an AI art Turing test that did the rounds a few months back that showed some pretty stunning examples of AI-based art that doesn’t just look like standard art.
I have no idea what the actual process looks like for generating these images, though, and can claim absolutely no expertise in this matter.
EDIT: And just like you, the main thing that made me realise the images were AI generated was the amount of time it would have taken to create them if they were real, although after that point it became easier to notice things that were a little bit off, like odd features or weird blurring. Although maybe those would have also been present in real photos, and I was just primed to notice them because I knew the images were AI?
Interesting article … I think this page looks a little more surprising than those examples, probably because it’s an experienced designer + AI, not “one shot” image generation.
Although I don’t know exactly how all those were generated … but definitely a few of them toward the end have the giveaway “AI style”
And yeah I am judging it based on style, and I know AI can generate multiple styles … but yeah I think this is a good example of augmentation/acceleration, just like you’d see with coding. A random person is not going to get as good results from the LLMs as an experienced programmer!
I agree that digital lives have become far too monotonous, but I don’t think that’s an issue with how computers work or with what they fundamentally are. I just wish I spent less time on them. I look at a screen all day at work, and when I go home I look at a screen for fun. I should just change that and touch grass, instead of faulting the screens for not having grass for me to touch.
“Instead of interactive controls, we have a text input.”
This takes me back to my DOS days.
I also want a yarn based interface now.