It Is Worth It To Optimize Images For Your Site
43 points by rbr
If you have any sort of CMS where non-technical users can upload photos, the answer is almost always yes. Users will try to upload 12MB images straight from a high-end camera and not blink an eye. Either you add automatic optimisation, or you add a size limit and have to deal with users not understanding how to meet it. If they want to copy and paste an image in, for example, then it's a serious workflow change to have to download/save the image, resize/compress it, and then upload.
This is me, on my own site! I hate resizing stuff, I just want to drop images in a folder and have the SSG take care of it.
Even as someone who knows a good amount, I have no clue how to resize/recompress phone photos using built-in tools.
I ended up knocking up this little image resize tool which shows me the selected image at different levels of JPEG compression. I use it all the time for images on my blog: https://tools.simonwillison.net/image-resize-quality
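The same resize-and-recompress step can also be scripted. A minimal sketch using Python with the Pillow library (the function name, default width, and quality value are my own choices, not from the tool above):

```python
from PIL import Image

def resize_jpeg(src, dest, max_width=1200, quality=80):
    """Shrink an image to at most max_width pixels wide and
    re-encode it as a JPEG at the given quality (1-95)."""
    img = Image.open(src)
    if img.width > max_width:
        scale = max_width / img.width
        img = img.resize((max_width, round(img.height * scale)))
    # JPEG cannot store an alpha channel, so flatten to RGB first
    img = img.convert("RGB")
    img.save(dest, "JPEG", quality=quality, optimize=True)
```

Dropping quality much below ~70 usually produces visible artifacts on photos, which is exactly the trade-off a side-by-side preview tool helps you judge.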
I've tried a few tools, as my fediverse server rejects large photos. I use Signal and send a photo to myself, which does a good job of downsizing the image to a reasonable size when I redownload it. It's a little annoying, but I don't post enough to figure out something more sophisticated.
users will try to upload 12MB images straight from a high-end camera and not blink an eye
Hah. With the stupid resolution race having restarted users will try to upload 50+MB images.
And ironically, it will still be a better experience than using the built-in Windows photo viewer, which chugs on these high-resolution photos, an annoyance to anyone who, like me, owns one of these cameras, haha.
You can tell how much contempt a service has for you, the user, by the amount of compression it applies to your images. And what cheapskates they are. The worst offender is MS Teams: the images get compressed so much that you get gross blocky artifacts, as if we were still living in the '90s. It's like a slap in the face. I usually put my images into a ZIP file to prevent services from destroying them with aggressive compression.
Discord, on the other hand, applies zero compression as far as I can tell. The image data is not altered in any way. Kudos to them.
Why on earth would you want to export JPEG files when the era of WebP is here? That’s an easy one to answer: because my retro hardware knows JPEG. Because I believe we should build websites that last.
You can do both.
<picture>
  <source srcset="../images/boot.webp" type="image/webp">
  <img src="../images/boot.jpg" alt="First hardware boot!">
</picture>
Only if the legacy hardware understands the picture tag ;)
Won't it just ignore the picture tag, the source tag, and then render the img tag with the jpeg fallback?
Indeed, web browsers are supposed to (and I believe mostly do) treat unknown tags like tags with no semantics (like divs). We have had idioms like <canvas>Your web browser does not support canvas</canvas> for years for displaying fallback content.
I have a couple of applications where I've added image optimization.
First as a way to generate small, medium, and large versions in web-optimized formats (AVIF or WebP), which allows for img elements with progressive loading; then as a way to apply transformations to the image (for example, baking in the EXIF rotation tag that phone cameras generate); and also to remove EXIF tags that may contain information you don't want to make public.
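A sketch of that pipeline in Python with Pillow (the size names, widths, and WebP output are my assumptions, not the poster's actual setup):

```python
from PIL import Image, ImageOps

# Illustrative size presets (maximum widths in pixels)
SIZES = {"small": 480, "medium": 960, "large": 1600}

def make_variants(src, stem):
    img = Image.open(src)
    # Bake in the EXIF orientation tag that phone cameras set
    # instead of rotating the actual pixels
    img = ImageOps.exif_transpose(img)
    for name, width in SIZES.items():
        scale = min(1.0, width / img.width)
        variant = img.resize((round(img.width * scale), round(img.height * scale)))
        # Saving a fresh copy without passing exif= drops the original
        # EXIF block (GPS position, camera model, timestamps, ...)
        variant.save(f"{stem}-{name}.webp", "WEBP")
```

The generated variants can then feed `srcset` or `<picture>` markup like the snippet earlier in the thread.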
For me, I found that these added pieces of work like optimizing images, writing opengraph metadata, adding tags/categories, posting to twitter, etc. were adding up to make me write less overall. It's another manifestation of the optimization/perfectionism trap that many people in tech fall into.
For some blog posts that I put a lot of effort into, it's all worth it and I still do it. However, for little one-off posts that just serve to document some workaround I found for a little bug or give a quick overview of some project I did over a weekend, doing all that extra stuff just didn't make sense for the expected audience that will end up reading it.
So, I split my blog into two parts. One for the full-fledged article-style posts with optimized images with multiple formats available, detailed opengraph metadata so it looks great when posted as a link to Discord, etc. and another which is just a bare-bones Hugo with no metadata other than title. I list this separately on my website under "Notes".
If I want to add an image to a note (as I frequently do), I just upload the screenshot directly as a PNG in its native resolution. I still force myself to add detailed alt tags to those images, though.
This has been extremely successful for me. I've created several dozen of these little notes that I would never have created otherwise. Some end up ranking well on search engines organically and have become some of the most-viewed pages on my site.
Wholeheartedly agree. If you have a blog because you like tinkering with document-y websites, go nuts and add that opengraph metadata, or rework those admonitions one more time. Go ahead and try out a new font. However, if you want to document your ideas, or get better at writing, any technical improvement to the blog should be heavily weighed against just working on an article, any article.
Case in point: my blog takes about 12-18 seconds to compile, depending on what my laptop feels like (charging or not, work vs. personal laptop, etc.). I know I could implement some kind of caching system to make small edits almost instant. Instead, today I sat down and used the little time and energy I had to make some progress on a few articles.
Compiling my site is still "slow". It probably will be for a while. That's okay. Every time I open up my site's frontpage I'm impressed with what I've written so far, and nothing beats that feeling!
(Not that I never waste time on fun stuff. It's fun stuff for a reason. I just try to keep it to once every two to three articles.)
A simple trick I learned this week is to add the loading="lazy" attribute to an img element for simple lazy loading.
I really like Eleventy's image plugin. It takes the images cited in your markdown (![]()), creates new sizes and formats, and generates <picture>s with multiple <source>s in the HTML output.
If you build your website on a Mac, an easy way to get started optimizing your images is to download and use the GUI app ImageOptim, which is free and open source. You can drag and drop image files to the app icon and they will be losslessly compressed in place. I use it all the time to compress screenshots before uploading them.
The article explains that you can strip image metadata other than embedded ICC profiles using ImageMagick’s +profile '!icc,*' flag. That is good to learn. Unfortunately, ImageOptim cannot do this: it strips either all metadata or none.
ImageOptim has a page about why, saying stripping ICC profiles can reduce file size by over 100 KB. It recommends that you save your source images as sRGB so that without the embedded color profile, “software that doesn't support color profiles will use [the] monitor's profile, which is most likely to be sRGB as well.” But I think this reasoning is outdated: many website visitors will be viewing images on an iPhone, iPad, or MacBook laptop screen, all of which support the P3 color profile.
(ImageOptim also has a nine-year-old feature request about not stripping ICC profiles, but the developers never responded to it, so there is no extra information there. I should post that link to ImageOptim’s official reasoning there.)
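For anyone who would rather script this than wait on ImageOptim, a rough Pillow equivalent of the keep-only-ICC behavior (a sketch; the function name and quality setting are my own choices):

```python
from PIL import Image

def strip_metadata_keep_icc(src, dest, quality=90):
    """Re-encode a JPEG, dropping EXIF/XMP metadata but carrying
    over the embedded ICC color profile, if any."""
    img = Image.open(src)
    icc = img.info.get("icc_profile")  # None when no profile is embedded
    # A fresh save omits metadata unless it is passed back explicitly
    img.save(dest, "JPEG", quality=quality, icc_profile=icc)
```

Unlike ImageMagick's `+profile '!icc,*'`, this re-encodes the JPEG data, so it is not lossless.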
I can't put my raw photos on my website, so I need to do something, and then that something might as well be generating 10 sizes in 4 formats at optimal settings for this specific context.
I just converted my images to WebP and called it a day.
The author justifies not using webp in the article:
Why on earth would you want to export JPEG files when the era of WebP is here? That’s an easy one to answer: because my retro hardware knows JPEG. Because I believe we should build websites that last.
Does anyone have a recommendation for a tool to do this that isn't imagemagick? I've tried to avoid having it installed on any of my machines given its atrocious security track record. (Of course I'm not concerned about security problems operating on images I've produced myself, but it's safer to just keep it off my system entirely so it doesn't accidentally get pulled in and run on something untrusted)
You could consider using it via nix? Then you could nicely encapsulate it in a single script that invokes it.
I use pngquant and pngcrush to do lossy and lossless compression of PNGs, respectively. exiftool can presumably be used to remove EXIF data, though I haven't tried it.