I don't care that it's X times faster

59 points by zmitchell


cceckman

Regarding "Your benchmark isn't measuring what you think it's measuring":

  • Charitable interpretation: you're accidentally measuring something that's been optimized away entirely

A good paper from The Literature, "Producing Wrong Data Without Doing Anything Obviously Wrong", asks how many CS papers report effects that fall within the variance you can get from, say, changing the aggregate size of your environment variables.
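A minimal sketch of the "optimized away" failure mode in Rust (the function name and workload here are hypothetical, chosen only for illustration): without `std::hint::black_box`, the optimizer can see that the loop's result is never used, delete the computation entirely, and leave you timing an empty loop.

```rust
use std::hint::black_box;
use std::time::Instant;

// Hypothetical workload: sum of squares from 1 to n.
fn sum_squares(n: u64) -> u64 {
    (1..=n).map(|i| i * i).sum()
}

fn main() {
    let start = Instant::now();
    for _ in 0..1_000 {
        // black_box hides the input from the optimizer and marks the
        // result as used, so the work cannot be eliminated as dead code.
        black_box(sum_squares(black_box(10_000)));
    }
    println!("1000 iterations took {:?}", start.elapsed());
}
```

Benchmark harnesses like Criterion do this for you, which is one reason hand-rolled timing loops often disagree with them.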

jrwren

If faster means using less memory and fewer CPU cycles, and the thing is deployed widely at scale, you are literally saving the planet by using less electricity.

I do care that it's X times faster.

tonymet

I produced a lighter alternative distribution of google-cloud-cli https://github.com/tonymet/gcloud-lite using the approach the author laments. It's 85% less resource-intensive to deploy. That means you can use it on micro instances where the official CLI would hang (due to vCPU/IOPS budgets and resource constraints).

Google recognized this and the user complaints, and made some efforts to trim their official packages:

https://issuetracker.google.com/issues/324114897?pli=1

So benchmarking and competition are a good thing. Assuming the tool is correct, it's important that it's also efficient and elegant. Often developers are so focused on feature work that they forget to constrain resource utilization. A little attention can make a load of difference.

Sure, some benchmarks can be biased, and nobody is perfect, but in general we should encourage people to reduce resource usage as much as possible.

jmillikin

The author doesn't link to which post is bothering them, but based on timing I'd guess it's this one?

[r/rust] I rewrote tmignore in Rust — 667 paths in 2.5s instead of 14 min

Maybe it's not the traditional sort of "I optimized an AV1 encoder's inner loop by 5% with clever SIMD" optimization post, but it still seems interesting to see someone investigate and solve a performance problem by disassembling a proprietary tool.