micro-benchmarks don’t tell the whole story
7 points by amirouche
Content is interesting, but the LLM writing is so incredibly grating.
Where is the benchmark code? I’d like to see how each implementation was configured and run.
You want this bench.sh?
My kernel is based on 6.17.0
Summary: GOMAXPROCS=1
To get started, do something like:
git clone https://github.com/amirouche/letloop/
cd letloop
./venv
make chezscheme letloop
cp a.out local/bin/letloop
cd benchmark
make help
make bench
The app is a clicker game (Flask code), and only GET / is hit by wrk; it is a microbenchmark meant to exercise async / await. I was, and still am, looking for a more advanced, microservices-style benchmark where coordination between flows of execution is necessary, but I could not find anything interesting or put something together myself. No online service will accept 100k+ RPS, so that requires a more elaborate backend with caching. It gets even more interesting when multiple flows of execution are started for a single request, for example when one request fans out into several concurrent calls whose results must be combined; see the sketch below.
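To make that concrete, here is a rough sketch of such a scenario, not the benchmark code; the function names, timings, and cache/upstream split are made up for illustration. Each simulated request starts two flows concurrently and joins them before answering, which is the kind of coordination described above:

import asyncio
import random

async def fetch_from_cache(key):
    # Simulated cache lookup; a real setup would hit Redis or similar.
    await asyncio.sleep(0.001)
    return f"cached:{key}" if random.random() < 0.5 else None

async def call_upstream(key):
    # Simulated call to another service.
    await asyncio.sleep(0.005)
    return f"fresh:{key}"

async def handle_request(key):
    # One request, several flows of execution: start both, then coordinate.
    cached, fresh = await asyncio.gather(fetch_from_cache(key), call_upstream(key))
    return cached or fresh

async def main():
    # Stand-in for the load generator (the role wrk plays in the benchmark).
    results = await asyncio.gather(*(handle_request(f"k{i}") for i in range(100)))
    print(len(results), "responses")

if __name__ == "__main__":
    asyncio.run(main())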
I pushed the experiment to this point, maybe even a bit too far, to get an idea of where colorless / transparent async stands and to survey the broader ecosystem; more code would be needed to build the benchmark scenario described above.
I am surprised by the Java Loom results; maybe something is wrong there. Let me know if anything looks misconfigured.