RISC-V is sloooow
51 points by raymii
I'll just go ahead and make it salty because I know the RISC-V boosters will be here any moment. Can you name me, today, a RISC-V chip I can buy and run in a workstation that's at all comparable with current aarch64 or x86_64 offerings? Don't say "it's just around the corner" - I'm tired of hearing it.
Meanwhile, I'm still using my POWER9, so it's not as if I wouldn't spend the money if there were a reasonable alternative. I would like to believe in RISC-V but so far the vast majority look like a bunch of cheap cores trying to compete with RPi.
I genuinely don't think I've seen anyone claim that RISC-V hardware with performance comparable to x86 or AArch64 is right around the corner. That would be a borderline insane claim to make at this stage.
But for something upcoming that seems to actually have acceptable performance for daily use, you probably want to keep an eye on this: https://www.reddit.com/r/RISCV/comments/1oeey5v/tenstorrent_atlantis_silicon_dev_platform/
That's the board I'm eyeing for Zig's RISC-V CI, to replace our 4x Milk-V Jupiters (which have painfully poor single-core performance).
That would be a borderline insane claim to make at this stage.
It was made to me, to my face, by a SiFive representative who had just come off stage at the Ubuntu Summit in Rīga. This is absolutely categorically the sort of thing RISC-V promoters have been claiming for years, yes.
Can you name me, today, a RISC-V chip I can buy and run in a workstation that's at all comparable with current aarch64 or x86_64 offerings?
Of course not. RISC-V is extremely new. It's less than five years since the first $100 RISC-V SBC equivalent to the first 2012 Raspberry Pi came out: specifically the AWOL Nezha in June 2021. So 9 years behind, at that point.
The Milk-V Megrez that has been shipping since January 2025 is equivalent to a mid 2019 Raspberry Pi 4 (other than not having SIMD), so that's 5 1/2 years behind.
The half dozen SpacemiT K3 boards that are expected to ship late next month (and I've already been using a pre-production one for two months — they are real) are equivalent to Pi 5 which shipped in October 2023, 2 1/2 years ago.
Don't say "it's just around the corner" - I'm tired of hearing it.
It's irrelevant whether you're tired of hearing it, it's still true, at least with respect to non-Apple Arm.
The Tenstorrent Ascalon has apparently already taped out, and they're saying a dev board will be available in Q3. Let's say Q4 :-) It's got Apple M1-equivalent µarch — done by the actual Apple M1 architect. The first dev board is only going to clock at half the speed of an M1 but that still puts it at something like Zen 2 performance.
I use both an M1 Mini and a six core Zen 2 laptop every day. Both are just fine for everyday tasks that everyone does: web browsing, playing video etc.
Huh? "It's irrelevant whether you're tired of hearing it, it's still true" ... after you give a whole bunch of currently shipping examples that are indeed behind by years, and then move the goal posts with "at least with respect to non-Apple Arm." Then you give me another example of yet another chip in tapeout that at least right now may or may not ship, and then tell me it's only going to be half the speed of M1.
This is exactly the kind of nonsense I'm referring to. And I'm typing this on an M1 MBA!
RISC-V is extremely new.
Not that new, no. I bought my Acorn Archimedes A310 2nd hand in 1989, with its 8 MHz ARM2 chip -- the second ever ARM chip, released 1986, 1 year after the ARM1.
My £800 desktop ARM computer came with a free PC emulator and it could boot DR-DOS and run full-fledged PC apps at nearly the speed of an IBM PC-XT. I ran QuickBASIC 4 on mine and it was usable for production work.
That was 2nd gen silicon of a brand new architecture, and it was that much quicker.
RISC-V -- and I call it "risk vee", because RISC-5 has precedence -- is not that impressive and it's had a whole decade.
It's irrelevant whether you're tired of hearing it, it's still true, at least with respect to non-Apple Arm.
This seems like too broad a statement to me; I quite doubt that any current or upcoming RISC-V hardware meaningfully competes with any of Ampere's products either, and those are actually well within reach of consumers price-wise and can be put in a workstation.
Can you name me, today, a RISC-V chip I can buy and run in a workstation that's at all comparable with current aarch64 or x86_64 offerings?
Thanks for saying this.
Your Clockwork Pi review remains, 4Y on, one of the best writeups of this I've seen.
Forget RISC-V .. I want a powerful ARM64 laptop, workstation or mini PC. You would think that would be solved in 2026, but it is the same story .. there is barely anything out there. (That is not a Mac)
Current generation of RISC-V chips are slow*
Can someone explain the technical reasons for why they don’t cross compile? It seems like an obvious solution to avoid the long build times, but the author of the post seems to know what they’re talking about so I wonder what the reason is.
Cross compilation of an entire distribution requires the distribution to be prepared for it. That is not a problem when you use OpenEmbedded/Yocto or Buildroot to build it, since those are designed around cross compilation. But it gets complicated with distributions that are built natively.
Fedora does not have a way to cross compile packages. The only cross compiler available in the repositories is a bare-metal one. You can use it to build firmware (EDK2, U-Boot) or the Linux kernel, but nothing more.
Then there is the other problem: testing. What is the point of a successful build if it does not work on target systems? Part of each Fedora build is running the testsuite (if the packaged software has any). You should not run it in QEMU, so each cross-build would need to connect to a target system, upload the build artifacts and run the tests there. Overcomplicated.
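To make that last point concrete, here is a minimal sketch (hostnames, paths and commands are all hypothetical) of the extra per-package machinery a cross-build pipeline would need, compared to a native build where the testsuite simply runs in place:

```python
# Hedged sketch: what each cross-built package would additionally require.
# Upload artifacts to a real target board, then run the testsuite there.
# All names here are made up for illustration.

def remote_test_plan(target_host, artifacts, test_cmd="make check"):
    """Return the commands to push build artifacts to a target and test them."""
    plan = [["scp", *artifacts, f"{target_host}:/tmp/build/"]]       # upload
    plan.append(["ssh", target_host, f"cd /tmp/build && {test_cmd}"])  # run tests
    return plan

for cmd in remote_test_plan("riscv-board.example", ["binutils.rpm"]):
    print(" ".join(cmd))
```

And this still glosses over provisioning the boards, cleaning them up between runs, and handling the ones that hang mid-test.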
Native builds let you test whether the distribution is ready for any kind of use. I have been using an AArch64 desktop daily for almost a year now. It is not a "4 core/16 GB RAM SBC" but rather the "server-as-a-desktop" kind (80 cores, 128 GB RAM, plenty of PCI Express lanes). I build software on it, write blog posts, watch movies etc. And I can emulate other Fedora architectures to do test builds.
A hardware architecture that is slow today can be fast in the future. In 2013, building Qt4 for Fedora/AArch64 took days. Now it takes 18 minutes.
It's a good question, because you're right: cross-compilation would build faster. However,
I updated blog post after reading comments from Matrix/Slack/Phoronix/HN/Lobster/etc. places.
This article has no facts that point to why RISC-V being slow means it can't be properly supported:
Without it, we cannot even plan for the RISC-V 64-bit architecture to become one of the official, primary architectures in Fedora Linux.
What? So they can't admin something if it's not fast enough for them? How does that make sense? Based on this, they're really just saying they suck at administering systems. If there's a real, actual reason, I don't see it here.
Also, what's with all the sentence fragments?
Slow iteration speed can kill a project. If you can't rebuild packages to retest in a reasonable time, it's more beneficial to put people on other tasks that will improve things for architectures used by 99.99% of users instead.
Every entry on this board https://abologna.gitlab.io/fedora-riscv-tracker/ needs to be built and tested, likely multiple times. Either people will be paid to context switch a lot by doing that, or free contributors will burn out. Neither sounds great.
"Kill a project" is, let's be honest, quite hyperbolic.
You know, corporate workflows aren't normal here - they're the exception, or at least they have been the exception for decades. If someone wants to make the argument that "RISC-V is sloooow" or that "slow iteration speed can kill a project", it'd be worlds more honest to qualify that with, "for corporate workflows" and/or "for corporate goals".
You, and many others, seem too willing to accept the shortcomings of corporate priorities being applied generally to all workflows. Fedora is a corporate project. Debian is becoming one. But statements without proper qualifiers are just flatly wrong.
Case in point: a modern OS with thousands of binary packages compiled for it exists for VAX. According to the "RISC-V is sloooow" argument, and according to you, this isn't possible, and the project doing it should have died by now. Clearly it hasn't, so clearly some of the assumptions Marcin Juszkiewicz and you have made can't just be blindly applied to everything.
It's unfortunate that "Linux" no longer evokes a mental picture which includes open source communities, but of corporate priorities.
Anyone is free to build Linux distribution on their own (I maintained OpenZaurus distribution about 20 years ago). With own rules, speed, release cycles etc.
Fedora decided to have one release every 6 months, which requires many package builds. And a package is released to the repository only when it builds on all architectures. So if you add RISC-V, at the current state of the hardware, it will slow down the whole process. Package maintainers will complain (they did that in the past when AArch64 was slow) and will start disabling the RISC-V architecture in their packages.
a modern OS with thousands of binary packages compiled for it exists for VAX.
It could be cross compiled from a modern system too. I do that with my retro-computing: I compile/assemble on a modern system and then copy the results to the hardware/emulator.
How much does an 80 core x86 cost? Looks to me that the CPU chip alone costs $10k+.
You can buy four complete $2500 RISC-V Milk-V Pioneers with 64 core CPU and 128 GB RAM each for that (or could two years ago). Each one of them will run RISC-V code faster than an 80 core x86 running QEMU -- and with a lot less power consumption too.
Or quad core Milk-V Megrez with 32 GB RAM was around $250 (it's been replaced by Titan now). You could buy 40 of those for the price of just that 80 core x86 chip, and have a wicked build farm.
The parity RISC-V core costs Infinity dollars because it doesn't exist yet.
The architecture is still a humongous step forward for all computing that isn't performance bound - anything that just keeps quietly plugging away at a slow clock.
That 80 core aarch64 CPU cost me 300 EUR.
More info about my desktop system: https://marcin.juszkiewicz.com.pl/2025/06/27/bought-myself-an-ampere-altra-system/
Was going to comment that you bought it from a friend... but looking on eBay, you can find it for the same or cheaper used (this is the Ampere Altra Q80-30). You've tempted me to build my own system around it!
When you're doing fixing work like that, you typically want to do it locally first. Full rebuild plus waiting for a build queue in CI is a big time sink when you could try an incremental change locally instead.
Fixing locally first also ensures you're not clogging the build queue for other people with long failing tasks.
When you have tens or hundreds of systems to run a big project like Fedora, you want proper servers. The boring ones, which you rack, cable and use. With BMC, high speed networking etc.
You do not want a box of SBCs requiring manual labour whenever a system needs a reboot or reinstall.
RISC-V is at SBC stage now.
If x86 is anything to go by, it's probably REALLY hard to design an instruction set that's genuinely difficult to make fast implementations of. On the other hand, RISC-V has a fair number of "wait, why the heck did you do that? That's going to make thing X way harder for compilers to handle efficiently!" moments, but there are few enough of them that they can probably be fixed with a relatively unintrusive extension.
I think you mean “instruction set” not “microarch”: a microarchitecture describes an implementation of an instruction set.
The classic way to make an instruction set slow is full CISC: all arguments are generalized to allow complex addressing modes with features like indirection and autoincrement, so the number of memory accesses per instruction can be huge. That’s the main technical disadvantage of 68k and VAX that led to them being displaced by RISC. By contrast x86 typically has one memory operand per instruction and its address is calculated entirely from registers; it’s not a very CISCy CISC.
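A toy counting model (illustrative only, not cycle-accurate) makes the gap visible: a fully general three-operand instruction with indirect addressing can require a pointer fetch plus a data access per operand, while a typical x86 ALU instruction allows at most one memory operand whose address comes straight from registers:

```python
# Toy model: worst-case data-memory accesses for a single ALU instruction.
# (Illustrative only; real VAX/x86 pipelines are far more nuanced.)

def full_cisc_accesses(n_operands=3, indirect=True):
    # Each operand may live in memory; indirection adds a pointer fetch
    # before the actual data access.
    per_operand = 2 if indirect else 1
    return n_operands * per_operand

def x86_style_accesses():
    # At most one memory operand, addressed as base + index*scale + disp,
    # computed entirely from registers: a single data access.
    return 1

print(full_cisc_accesses())   # 6 accesses for something like ADD @(R1), @(R2), @(R3)
print(x86_style_accesses())   # 1 access for something like add eax, [rbx+rcx*4]
```

Six potential memory accesses versus one is exactly the kind of variability that makes a fast pipelined implementation painful.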
Another way to make a slow instruction set is a stack machine, because that serializes all instructions through a single write bottleneck: the top of stack. The transputer t9000 tried to mitigate this by using idiom recognition to decode stack operations into a RISC-style internal form; but the transputer was not a pure stack machine, its stack had a limited depth and it had the notion of a workspace which the t9 turned into a conventional register file. The t9 project failed and I don’t know of any attempts to make a superscalar stack machine since then.
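The serialization is easy to see in a minimal sketch of a pure stack machine (a toy RPN evaluator, not a model of the T9000): every arithmetic operation pops its inputs from and pushes its result to the same place, so back-to-back operations form a dependency chain through the top of stack, whereas a register machine could compute the two sub-sums below in parallel into different registers:

```python
def eval_rpn(program):
    """Evaluate a postfix program on a pure stack machine."""
    stack = []
    for tok in program:
        if isinstance(tok, int):
            stack.append(tok)                # push a literal
        else:
            b, a = stack.pop(), stack.pop()  # both inputs come off the top
            stack.append(a + b if tok == "+" else a * b)  # result goes back on top
    return stack.pop()

# (1 + 2) * (3 + 4): the final "*" must wait for both "+" results to pass
# through the single top-of-stack slot.
print(eval_rpn([1, 2, "+", 3, 4, "+", "*"]))  # 21
```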
Does Fedora have a policy that all packages must be compiled natively? This seems like the perfect use case for cross-compilation.
For context:
So from that we can conclude that going from "clean-slate instruction set" to "workstation-grade CPU ready for the mass market" takes about a decade -- if you have as much money and motivation as Apple and ARM put together.
Less bad than I expected but I was probably pessimistic:
The binutils build is about 4x slower than ARM64 on Fedora's build hardware, and ARM64 is 1.5x slower than AMD64.
The RISC-V builder also has less RAM, which may be another constraining factor, and might itself come down to hardware availability.
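Chaining the two quoted ratios (and treating them as exact, which they surely are not) puts the RISC-V builders at roughly six times the AMD64 build time:

```python
riscv_vs_arm64 = 4.0   # binutils build: RISC-V relative to ARM64 (quoted above)
arm64_vs_amd64 = 1.5   # ARM64 relative to AMD64 (quoted above)
riscv_vs_amd64 = riscv_vs_arm64 * arm64_vs_amd64
print(riscv_vs_amd64)  # 6.0, i.e. roughly 6x the AMD64 build time
```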
That is obviously not great but I am happy for the diversity and openness and might already be enough for many tasks.
I'd be interested in how it feels for browsing the web etc.