Dropping RISC-V support
36 points by fratti
I was not aware of this distro! It looks quite appealing. But, in case anyone else was confused like me, it has nothing to do with the Arch-based https://chimeraos.org
It is frustrating how slow RISC-V hardware has been to attain performance parity with commercial incumbent architectures. Given the economics of chip-making, I don’t really expect the lack of a reasonably performant build server candidate board to change any time soon. Rather, I expect RISC-V to continue nibbling away at the “embedded” low end.
It’s been hit by
COVID
Intel going slowly bankrupt, which led to what would have been a very interesting chip two years ago getting cancelled; the same cores are only now hitting the market in an SoC from a Chinese company instead, quite probably with worse DDR and other IP and on an older process node.
a very interesting chip that was probably going to be out about now, likely leapfrogging the RK3588 and the Pi 5’s SoC (the BCM2712), getting cancelled due to US sanctions.
Android development is gated by the RVA23 spec, which was just published in December, so actual chips will be a year or three away. The V spec and some others were finished two years later than had been hoped for.
most of the Western big money for higher performance seems to be going into the very profitable automotive and aerospace markets, with no one caring too much about the market for a few thousand or even hundreds of thousands of SBCs for hobbyists.
there is no point even going for the many millions of units of consumer phones / tablets / PCs until they can compete with recent-generation x86 and Apple, which for a few years now has consistently looked like happening around 2027.
Is 2027 or 2028 “any time soon” for you? It’s frighteningly close for the people who are actually working on the hardware and software needed!
most of the Western big money for higher performance seems to be going into the very profitable automotive and aerospace markets, with no one caring too much about the market for a few thousand or even hundreds of thousands of SBCs for hobbyists.
That market is never specifically catered to by SoC manufacturers. You are getting chips intended for Android TV boxes or industrial automation. Automotive investment right now is at rock bottom, so it’s not siphoning away any RISC-V money.
there is no point even going for the many millions of units of consumer phones / tablets / PCs until they can complete with recent generation x86 and Apple
It is strange to me that you think RISC-V phones and PCs would ever be interesting to a business even if performance were good. RISC-V as an ecosystem currently has zero advantages. Even if you make your own SoC, you’re going to be licensing a core design from someone if you want anything that can compete with Arm’s portfolio of freebies they just give you at the base tier. Anyone large enough to benefit from a patent-free, license-free ISA to develop their own high-performance cores is so large they can just negotiate better deals with Arm.
There’s simply no business case for Cortex-A-tier RISC-V, not even Qualcomm could be bothered to invest in licensing some SiFive cores or making their own of that performance tier after having a lawsuit with Arm over ISA license fees.
Is 2027 or 2028 “any time soon” for you? It’s frighteningly close for the people who are actually working on the hardware and software needed!
Are you saying Chimera-Linux should simply keep doing RISC-V builds to anticipate hardware that doesn’t exist, just in case any happens to pop up out of nowhere one day? What’s their benefit in doing that?
…that and the licensing costs at the Cortex-A level aren’t as significant as things like software ecosystem. RISC-V’s advantages make a lot of sense at MCU level, where deep customization and royalty free are huge benefits. When you’re buying A53s to run Android or X1s to run Windows, less so.
most of the Western big money for higher performance seems to be going into the very profitable automotive and aerospace markets, with no one caring too much about the market for a few thousand or even hundreds of thousands of SBCs for hobbyists.
That market is never specifically catered to by SoC manufacturers. You are getting chips intended for Android TV boxes or industrial automation
That was certainly the case with the original Raspberry Pi (a warehouse full of unsold set-top box SoCs) and Odroid (Galaxy S5 sold worse than expected, resulting in a lot of spare Exynos 5422 SoCs, which Hardkernel used for Odroid XU3 and XU4) but in recent years Broadcom seem to have been developing SoCs specifically for Raspberry Pi.
RISC-V has not yet been big enough in set-top boxes or mobile phones to produce surplus chips or amortisation of costs there, but you can observe that all the Chinese RISC-V SoC designs (and they all are Chinese, even the ones with SiFive cores) have multiple camera inputs and NPUs, which to me suggests quite strongly that millions and millions of them are being used by the Chinese state.
It is strange to me that you think RISC-V phones and PCs would ever be interesting to a business if performance was good? RISC-V as an ecosystem currently has zero advantages. Even if you make your own SoC you’re going to be licensing a core design from someone if you want anything that’ll be able to compete with Arm’s portfolio of freebies they just give you at the base tier. Anyone large enough to benefit from a patent-free license-free ISA to develop their own high-performance cores from is so large they can just negotiate better deals with Arm.
How can you negotiate a better deal with Arm if you have no realistic alternative if/when they simply say “The price is the price, and it’s going up 20% in five minutes”?
Arm’s prices, the stuff they throw in for free, their newfound willingness to allow custom instructions on certain cores … these have all happened since RISC-V started to appear.
RISC-V existing benefits Arm customers hugely – until Arm decides their remaining customers are so locked-in that they won’t leave no matter what the prices are.
There are in fact customers big enough (one assumes) to negotiate better deals with Arm who are moving to RISC-V for applications processors. Samsung and LG are two obvious pretty large examples, for their TVs.
There’s simply no business case for Cortex-A-tier RISC-V, not even Qualcomm could be bothered to invest in licensing some SiFive cores or making their own of that performance tier after having a lawsuit with Arm over ISA license fees.
Qualcomm have pretty obviously been developing their own high performance RISC-V cores based on Nuvia, or they wouldn’t be so active in RISC-V mailing lists and trying to persuade people to modify RISC-V to use fixed-width 4 byte instructions only.
Is 2027 or 2028 “any time soon” for you? It’s frighteningly close for the people who are actually working on the hardware and software needed!
Are you saying Chimera-Linux should simply keep doing RISC-V builds to anticipate hardware that doesn’t exist, just in case any happens to pop up out of nowhere one day? What’s their benefit in doing that?
I wouldn’t presume to tell them what they “should” do. They can do what they want. However, their stated rationalisations don’t line up with the current facts. In particular, RISC-V native building has had better price-performance (and energy-performance) than running an emulator on x86 ever since the VisionFive 2 came out in February 2023 at 10% of the price of the slightly slower HiFive Unmatched released 21 months earlier.
You still get higher per-core performance with emulation on x86, but 1.5 - 2 GHz dual-issue or mild OoO RISC-V cores cost much less than 4+ GHz x86 cores. The task of building thousands of Linux packages is embarrassingly parallel – you can throw pretty much as many cores at it as you want if they are cheap enough.
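For illustration, a minimal sketch of fanning package builds out over ssh with GNU parallel – the board hostnames and the build-pkg.sh script here are hypothetical stand-ins for whatever a real builder runs:

# packages.txt: one package name per line; run one build per board at a time
parallel --sshlogin vf2-a,vf2-b,vf2-c --jobs 1 './build-pkg.sh {}' :::: packages.txt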
My own tests show a Linux kernel build being only three times quicker on a $1500 24-core i9 machine (with docker/qemu) than on a $100 4-core VisionFive 2. You’re clearly better off getting three VisionFive 2s. That wasn’t the case 18 months earlier (mid 2021), when you would have needed three $650 HiFive Unmatched boards instead.
Price matters, not only performance.
Looking historically, mainframes, minicomputers, and microcomputers all went through a phase where performance of the base model stayed the same for many years, but cost went down.
look, i really don’t care how many subpar SBCs i can buy at a price - if a $2k board of sufficient qualities was available, i would have gone for it and wouldn’t have said a word
spreading the load onto many slow boards is just a general pain in the ass:
for any other arch i am interested in, i can have a machine that doesn’t subject me to any of that; for riscv i can’t, and it does not matter to me how much it costs, so there is that, and only that
C’mon, it’s not all that hard. If you wanted to, then you would. Ubuntu and Fedora build on farms of cheap boards. It really doesn’t matter if the few very large projects tie up a board for a long time, as long as the board has enough RAM, which seems to be 16 GB at the moment.
Fedora uses Koji to manage the build farm, Ubuntu uses Launchpad. Both are open source.
I’m sorry to hear that there are reliability problems with the Pioneer – which I can’t comment on because I don’t have one and haven’t been following the experiences of those who do. I’ve got pretty much every other RISC-V board that exists (or at least one for each SoC … there are often a few different boards with the same SoC), but the Pioneer is out of my impulse buying range. I’ve made a couple of offers on used ones, but so far have been outbid by people who want it more than I do.
If the Pioneer is reliable then it’s ideal, with 64 cores and 128 GB RAM. The price is absolutely fine for what it is. Yes, less performance than an x86 for the same price, but a lot more performance than an x86 at the same price running RISC-V in emulation.
Please try to be less condescending/demanding, it’s not a good look. You are lecturing someone about her hobby project, “not wanting to” deal with something is a completely fine way to decide what to do actually.
Is it a hobby? I have no idea. I’ve never heard of this distro, but then there are dozens I’ve never heard of.
“not wanting to” is a perfectly fine reason by me. Nothing more is needed. I just object when the justifications given are incorrect, or out of date, or artificial, arbitrary constraints such as “it has to all be on one machine”.
Don’t want to. Have other priorities. Waiting for hardware that normal users will want to daily drive. All perfectly good reasons by me.
It’s not an artificial constraint; both Koji and Launchpad are massive projects hosted by either a corporation or a team of people with large backing. It’s unreasonable to expect someone to run big infrastructure and a server farm to build software just because the hardware is terrible and can’t be used as a single host.
They are free open source software that anyone can download and use.
A dozen VisionFive 2s would cost $1200 and fit in the same volume as a standard PC tower.
Or, Sipeed has been selling the Lichee Cluster, with 7 LPi4As plugged into a small motherboard, for around $1250 with 16 GB RAM and 128 GB eMMC per board (so a total of 28 1.85 GHz OoO cores and 112 GB RAM), or in a case with power supply and fan for $1350. That’s more expensive, but pre-made. Just install software and go.
Just install software and go.
Exactly.
It doesn’t even have to be all that hard. In the old days before multi-core computers we all used distcc to run the build scripting stuff on one machine and the compiles on as many others as you could find on the network. Takes minutes to set up.
You could even use your big x86 and Arm machines to run RISC-V cross compiles.
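The basic setup is still only a few lines. Hostnames here are hypothetical, and distcc does require the same compiler version, identically configured, on every host:

# on each helper box: start the distcc daemon, allowing the local network
distccd --daemon --allow 192.168.1.0/24
# on the build machine: list the helpers, then let make fan the compiles out
export DISTCC_HOSTS="localhost vf2-a vf2-b bighost/8"
make -j16 CC="distcc cc"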
If they simply don’t WANT to do RISC-V, that’s fine.
But if they say they WANT to but X, Y, and Z are problems preventing it … I’m gonna offer solutions to X, Y and Z. That’s what an engineer does.
I recommend you read the threads here, as it has been explained why what you are proposing (cross-compilation) is not fine and will not work. What you have offered is not solutions; setting up a compile server farm is not a solution. You seem not to grasp that it’s not “just install software and go”, because someone has to install it, configure it, and maintain it, and it’s not just software. Adding a powerful host or two to a distro build infra is much different from “go buy a dozen RISC-V machines and make a build farm cluster out of them”.
And please do not bring up “the old days”; those were terrible, lots of things were done wrong, and the software/hardware landscape is much different these days.
Don’t worry, I perfectly understand the difficulties of cross-compilation. I do that kind of thing (and avoid it) every day.
There are a lot of things that need to be run in the target environment, including scripts, and also binaries that are built as part of the build process. And in some builds some of those binaries also need to be built for and run in the host environment.
I’ve been doing this stuff for decades.
Running scripts in the target environment (whether actual hardware or emulation), preprocessing source, and sending the preprocessed source to another machine – whether one of the same architecture or a faster machine running a cross-compiler – is a perfectly viable and low-configuration way to do things.
5 years ago, the claim from you and other RISC-V fans was that higher performance (by which I mean, faster than any raspberry pi but still slower than AMD and Intel chips) designs were just a few years away. 5 years later, they’re still a few years away? Was that just that one chip that got cancelled due to sanctions?
most of the Western big money for higher performance seems to be going into the very profitable automotive and aerospace markets, with no one caring too much about the market for a few thousand or even hundreds of thousands of SBCs for hobbyists.
What are these designs? With how expensive many RISC-V SBCs are, if expensive high performance automotive or aerospace chips exist, I’m surprised no one has made an SBC around one of them.
There’s no way I would have said something like that five years ago, when the initial spec was just barely ratified and there were three microcontrollers (FE310, GD32VF103, K210) and nothing Linux-capable at all on the market (the 2018 limited-run $999 HiFive Unleashed had been and gone … and making a usable SBC from it required a custom $2000 FPGA board from Microchip or a standard $3500 FPGA board from Digilent, so it had a very limited potential market even if they’d made more than 500 of them).
When the P670 was announced in November 2022 – two years and four months ago – I would quite likely have predicted it being around three years for someone to license it for an SoC and get that made and on a board you could buy. At that time it would certainly beat a Pi 4, and in fact we now know it would be very competitive and probably better than Pi 5.
There is of course never any guarantee that someone will license any given core, or use it to make an SoC suitable for SBCs. Where, for example, are the SBCs using the Arm A57, A73, and A75?
Even the Arm world struggles to have more than one or maybe two (if any at all) usable SBC SoCs for a given core.
So, yes, the SG2380 was announced using the P670, and it has been caught up in politics, not technical issues. It probably could realistically have been out right about now, but even if it was late 2025 it would not be late by normal core -> SoC -> SBC schedules. A53, A72, and A76 all took around four years from announcement of the core to the Pi 3, Pi 4, Pi 5.
I’d be pleasantly surprised to see a commercially viable RV board with performance comparable to RK3588 or the Pi 5’s BCM2712 (clocked at 2.4 GHz) by then. But I’m in no rush, myself. Think how long it took ARM to challenge the Intel / AMD x86 duopoly in the desktop / server space.
Linux is itself a legacy system, and to me there’s something a little sad about it colonizing a nice greenfield architecture, the first truly open hardware the world has seen outside academia. If RV builds strength in the low-end and non-Western markets, maybe it will drive some OS innovation too.
RISC-V is like quantum…always just a few years away
I think this is completely untrue. RISC-V is real, exists, works, there are hardware products being built, embedded into mass produced devices, etc.
It’s just that in the space most of us are mostly interested in - modern high performance CPUs - the instruction set is maybe 1% of the story. Modern CPUs are among the most elaborate artifacts of human engineering, the result of decades of research and improvement, and part of a huge industrial and economic base built around them. That’s not something that can be significantly altered rapidly.
Just look how long it took Arm to get into high performance desktop CPUs. And there was big and important business behind it, with strategy and everything.
They’re not asking for high-performance desktop CPUs here though. Judging by the following statement on IRC:
<q66> there isn't one that is as fast as rpi5
<q66> it'd be enough if it was
it sounds like anything that even approaches a current day midrange phone SoC would be enough. RPi5 is 4x Cortex-A76, which is about a midrange phone SoC in 2021.
Last I checked, the most commonly recommended RISC-V boards were slower than a Raspberry Pi 3, and the latest and greatest and most expensive boards were somewhere in the middle between RPi 3 and 4. So yeah, pretty slow.
That’s very misleading. That’s what Geekbench says, but GB isn’t relevant to software package build farms. For building software, the most common RISC-V SBCs at the moment are far closer to Pi 4 than to Pi 3, ESPECIALLY given that the Pi 3 never has more than 1 GB RAM (and the Pi 4 has never had more than 8 GB) while 8 GB, 16 GB, and 32 GB RAM RISC-V SBCs are everywhere, and you can get the 128 GB RAM, 64-core Milk-V Pioneer for a lot less money than the Ampere Altra they talk about in that post.
Milk-V Pioneer
You are obviously way more knowledgeable about RISC-V than I, a mere dabbler. But… I went to https://milkv.io/pioneer and clicked through their three “buy” links.
The OP considered it:
Milk-V Pioneer is a board with 64 out-of-order cores; it is the only one of its kind, with the cores being supposedly similar to something like ARM Cortex-A72. This would be enough in theory; however, these boards are hard to get here (especially with Sophgo having some trouble, new US sanctions, and Mouser pulling all the Milk-V products) and, from the information that is available to me, it is rather unstable, receives very little support, and is riddled with various hardware problems.
These things go in and out of stock. In the Arm world too – I want an Orion O6 but missed the first batch and they are currently out of stock on Arace with, supposedly, another batch coming at the end of March.
I can’t comment on reported problems with the Pioneer as I don’t have one. Sophgo have a revised chip coming that fixes the bugs found in T-Head’s mid-2019 C910 core, raises the clock speed to 2.8 GHz (there are SG2044 results on the Geekbench site at that speed), and updates the RVV 0.7.1 draft vector extension (XTheadVector) to RVV 1.0. There are rumours that it might even be a lot cheaper (i.e. mass produced) than the SG2042.
We will see how good a job they’ve done this time.
I know that people who are not involved in RISC-V have the impression that nothing is happening, everything is slow, etc., but from my point of view things are progressing incredibly fast. To be only 2 years behind Arm in cores and 5 years behind in shipping SBCs is incredible progress from a standing start, with the first spec only published in mid 2019 and most serious companies only starting work in 2021 or 2022.
Everything is going to converge with Arm, Apple, Intel, AMD right around 2027/8 or so. It’s been looking that way for several years already, and I see no need to change that prediction.
Achieving that in a decade from first crude shipping hardware will be amazing.
I guess they wouldn’t be going out of stock if people weren’t buying them up! It’s encouraging, really.
While I enjoy tinkering with my early-gen RV SBCs, I also very much look forward to being able to buy a practical RISC-V laptop or desktop replacement, so this is great news to me, and I agree it’s very impressive, with a lot of work (and capital) put in by both hardware makers and software toolchain engineers. As to whether I will be able to run Chimera Linux on it when it shows up, I’m not at all worried about that. If this distro doesn’t support it at first then others will, as long as the user base is there.
Meanwhile I think we software applications and systems people should be getting our own houses in order. Maybe it’s not that the available RISC-V hardware and emulation is too slow, but rather that big packages take too long to build!
We would have had faster than Pi 5 RISC-V machines right around now if US sanctions hadn’t nerfed the SG2380.
Beyond microcontrollers, I really haven’t seen anything remotely usable. I’d love to be wrong though.
I tried to find a Pi Zero replacement, but the few boards available all had terrible performance. The one I bought in the end turned out to have an unusable draft implementation of vector instructions, and it’s significantly slower than any other Linux board I’ve ever used, from IO to raw CPU performance. Not to mention the poor software support (I truly despise device trees at this point).
The Milk-V Duo beats the Pi Zero in every way, and that draft 0.7.1 vector implementation is perfectly usable using asm or C intrinsics in GCC 14+.
Just for a data point, the RP2350 chip in the Raspberry Pi Pico 2 includes a pair of RV32IMACZb* cores as alternates for the main Cortex-M33 cores: you can choose which architecture to boot into. The Pico 2 costs $5 in quantities of 1.
I don’t know why almost everybody descended from Unix is still so adamant that you only build for target X on target X. The only project I can think of going another way is Zig.
You can cross-compile in theory, but in practice most build systems for real world packages can’t handle it well… it’s not practical to make a modern desktop Linux distro build under cross-compilation (dedicated embedded distros do manage, but for a much more limited set of packages).
The best trick I’ve found is to run a slow native or emulated system as the build host, but then offload the actual compilation to a cross-compiler using distcc. As long as the cross-compiler is the same version as the native one and configured identically, it works well. A long time ago I used that trick to run Gentoo on an old ARM SBC. It doesn’t help with the rest of the build process, linking, or preprocessing, but at least C compile times become much faster…
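Roughly how that can be wired up, as a sketch (the paths and GNU triplet here are illustrative): distccd looks up the compiler name the client sends using its own PATH, so on the fast box you put masquerade links to the cross-compiler ahead of the native one.

# on the fast x86 helper
mkdir -p ~/cross-masq
ln -s /usr/bin/riscv64-linux-gnu-gcc ~/cross-masq/gcc
ln -s /usr/bin/riscv64-linux-gnu-gcc ~/cross-masq/cc
PATH=~/cross-masq:$PATH distccd --daemon --allow 192.168.1.0/24
# on the slow native (or emulated) RISC-V build host
export DISTCC_HOSTS="x86-helper/16"
make -j16 CC="distcc gcc"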
Go, Python, Java, C#, Rust etc. all care about either cross compilation or running on multiple platforms.
Because in cross-compilation scenarios you are unable to test the software you’ve built, and that is quite a major point in software distributions, where each package runs its own tests when built. If something is broken and the tests can’t catch it, you’re shipping broken software to users.
The reason for doing it this way was that there wasn’t any hardware we could use for performance reasons; I had obtained a SiFive HiFive Unmatched board in October 2021 and this proved to be useless for builds as the performance of this board is similar to Raspberry Pi 3. Other boards came later, but none of them improved on that front significantly enough.
I don’t really get this. So if these folks were around twenty years ago, would they not have gone into computing?
You can start something running, and so long as the platform is stable, you can come back for the results months later. Having self-built NetBSD on m68k and on VAX, and considering I compile pkgsrc binary packages for those architectures, all I can say is that it just seems like people are rushing for no good reason.
Also:
It burns a ton of power for how slow it is, because it fully loads a beefy x86 machine, and I’m not happy at all about that.
Are they running Intel? I’ve got a twelve-core Ryzen 7900 here which, amongst other things, runs NetBSD/riscv in qemu, and even with all cores at 100%, takes less than 120 watts. This system even has spinning rust.
Also, if qemu is hanging, then there’s something wrong with the emulation or with the OS running in the emulation. NetBSD/riscv had issues for quite a while, but now it can compile and run for weeks at a time, unattended, compiling 24/7.
I wonder if there’s more to this than is written here.
i don’t think you recognize what it takes to build an entire package repository for a linux distro while actually giving a damn about what is shipped (i.e. taking care of quality control so that it’s as good as the other architectures)
the idea is that all architectures are roughly at parity when it comes to what is in their repos and how well the stuff that is in there works in practice; using slow hardware (the hifive unmatched is ~7x slower than the emulation and the emulation is ~5x slower than the second slowest builder) means that the architecture will always lag heavily in the build queue and i really cannot be bothered to let it hold back the rest, especially not in a rolling release system that keeps everything always up to date
you may have built netbsd on a vax or whatever but the expectations for that are far lower (i doubt you have to build things like firefox and kde for it and keep them up to date)
qemu-user randomly hanging is long known and gradually mitigated over time but never truly fixed, it’s not a hardware problem and it can be experienced on many configurations (and btw, no, it’s not intel, it’s ryzen 5950x)
the hifive unmatched is ~7x slower than the emulation
Only if your emulation has far more cores. The U74 is pretty close to parity on per core speed vs qemu on the best x86. For example building a defconfig Kernel on VisionFive 2 (a massively cheaper way to buy U74 cores than the Unmatched) takes 67m35s real time, 250m user, while on docker/qemu on a 24 core i9-13900HX it takes 19m12s real time, 584m user. I haven’t done a -j4 on the i9, but we can already see that the native RISC-V board uses less than half the CPU time of the emulation on x86. And only 3x the wall clock time. An 8 GB VisionFive 2 is $100. Buy three of them and they’ll have package building throughput as high as the $1500 x86.
The price-performance of current RISC-V actual hardware is already well ahead of the price-performance of emulation on x86, if you actually have to buy the x86 machine, not just use an idle one you have lying around. The energy-performance is also better. The VisionFive 2 uses around 5W, while that 3x faster i9 uses 160W.
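Spelling the arithmetic out from those runs (rounded):

# wall clock: 67m35s / 19m12s ≈ 3.5x (one VF2 vs the i9 + qemu)
# CPU time:   250m / 584m     ≈ 0.43x (native needs under half the cycles)
# throughput: 3 × $100 VF2s   ≈ one $1500 i9, at a fifth of the price
# power:      3 × ~5 W ≈ 15 W vs ~160 W for the i9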
The SpacemiT K1 machines are pretty comparable to the VisionFive 2. They’re less powerful per core, and slightly less powerful overall – but only slightly. That kernel build takes 71m on my K1 board with -j8 vs, as above, 67m35s on the VisionFive 2. But the K1 boards have the advantage of being available with 16 GB RAM, which makes some packages easier to build and allows you to run more simultaneous builds of smaller packages to keep the cores busy.
The new Milk-V Megrez is about twice as fast as the VisionFive 2, for twice the price for a 16 GB machine vs two 8 GB VF2s. So no huge advantage, but fewer machines are easier to manage, and running more parallel builds on a bigger machine uses resources more effectively. You can also get them with 32 GB RAM, which is even more flexible.
Overall I think the article reflects a poor knowledge of the current state of the RISC-V SBC scene, especially when it comes to price-performance rather than absolute performance. If $650 was paid for the Unmatched, then that’s a very bad comparison to the actually faster $100 VisionFive 2.
If I was building a RISC-V build farm today, I’d be getting 8 or 10 of the 32 GB Milk-V Megrez at $269 each.
That’s comparable to one decent x86 machine, will be a lot faster than emulation on that x86, and costs less than the Ampere Altra would have.
Milk-V Megrez 32 GB at $269 each
https://milkv.io/megrez#buy has two links:
These things go in and out of stock. So do new Arm boards. I’ve been trying to buy an Orion O6. I missed the first batch. The second batch is apparently arriving at the end of March.
Nothing unique to RISC-V. Watch for when they’re taking orders again, and then don’t wait too long!
The title is “Dropping RISC-V support”, which is pretty extreme. It’s not “Downgrading”, or “Making RISC-V best attempt”. That’s why I wonder about this.
i really cannot be bothered to let it hold back the rest
Why would it, ever? Sure, if aarch64 failed to build commonly used software, you’d probably take some time to look at what’s going on, but why isn’t it just made in to a “Tier 2” or “Tier 3” platform, with builds building what they can?
Forgive me if I’m naive - I know nothing about the Chimera build system - do builds need babysitting? Do failures require human time even if you’re just going to let whatever failed fail?
if a build fails, it needs to be investigated (the infrastructure picks up changes from git in real time and builds them as it goes); then a fix needs to be made, deployed, tested, waited for again (which may be a long time because in the time it’s been building that package very slowly, 1000 other updates may have been pushed, and the restarted batch may prioritize these updates first), or the template can be marked broken (if something in a batch fails, the whole batch fails, because it’s sorted specifically to account for correct dependency ordering, and there may be things depending on the failed package further in the batch), which means if it was previously built, it will remain in the repo out of date, and once revdeps start requiring the newer versions, they will also fail, etc.
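(as an illustration of the ordering constraint – not Chimera’s actual tooling – a dependency-sorted batch is essentially a topological order, which is why one failure poisons everything after it that needs it:)

$ printf 'musl cmake\ncmake llvm\nllvm firefox\n' | tsort
musl
cmake
llvm
firefox
# if cmake fails here, llvm and firefox can never be attempted in this batch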
with an unreliable emulator, many of these failures are spurious; if an emulator hangs, it needs to be manually canceled, or the builder will wait for several hours until it decides to kill it; with slow hardware, it takes longer than i’m willing to wait, and regardless, it adds into effort and burden
we put a lot of effort into making sure every supported architecture can build (almost) everything and that it stays in a good shape, i’m not comfortable with the idea of having something that is half broken, so regardless i’d probably be putting in the effort to fix things where possible, so might as well drop it
we put a lot of effort into making sure every supported architecture can build (almost) everything
This is a significant degree of difficulty and it makes sense that if this is your standard for support you’d be inclined to drop a marginal platform entirely, and it’s no skin off my back. I do want to extol the virtues of not trying quite so hard though! By comparison, the last OpenBSD release built ports packages for 9 architectures, as few as 8300 for POWER9 and as many as 12000 for amd64. Often the “slow” archs don’t finish building packages until after the nominal release day.
for reference (note how the arch with the fewest packages still has over 90% of the count of the arch with the most):
q66@chimera-primary:~$ find /media/repo/repo-x86_64 -name '*.apk'|wc -l
14158
q66@chimera-primary:~$ find /media/repo/repo-aarch64 -name '*.apk'|wc -l
14097 (99.5%)
q66@chimera-primary:~$ find /media/repo/repo-ppc64le -name '*.apk'|wc -l
13989 (98.8%)
q66@chimera-primary:~$ find /media/repo/repo-riscv64 -name '*.apk'|wc -l
13611 (96.1%)
q66@chimera-primary:~$ find /media/repo/repo-loongarch64 -name '*.apk'|wc -l
13491 (95.3%)
q66@chimera-primary:~$ find /media/repo/repo-ppc64 -name '*.apk'|wc -l
13343 (94.2%)
q66@chimera-primary:~$ find /media/repo/repo-ppc -name '*.apk'|wc -l
12779 (90.3%)
the idea was always that every arch is fully usable; and the expectation was that at this point there would be hardware that is usable; but meanwhile, there is still nothing, there seems to be no effort to get something out, and i’m not particularly interested in supporting an architecture where the whole industry around it cares solely about “AI chips” and near-future e-waste
Well, that’s a bit harsh on the RISC-V ecosystem although not entirely unwarranted. FWIW I think you’re making the right choice here, and I’m excited to try out a non-systemd distro that cares about overall system quality and integrity. Others can nurse RISC-V linux through these awkward years.
Why not cross-compile?
I’m sure that comes with its own significant problems considering the massive hodgepodge of tooling used by all the different software that has to be built.
¯\_(ツ)_/¯ Yocto does it
Yocto claims to do it, but then it will give you broken output. Cross-compiling introduces far too many issues and is a massive maintenance burden to keep subtle bugs at bay, even something relatively easy to cross-compile like the Linux kernel will just give you wrong userspace headers if you do it.
It should be a sign of how unreliable, complex and prone to problems cross-compilation is that people would rather bootstrap three decades of Haskell compilers than trust the output of its cross-compiler just to get pandoc.
yocto gets alright output but it’s also entirely geared towards cross-compiling (i.e. every recipe is always-cross) and ignores everything that a general purpose distro needs to do; the package set is much smaller and it doesn’t need to worry about various things (e.g. it is never considered that you may be compiling out of tree kernel modules natively on the target machine afterwards, there is no requirement for cross-packages having the same featureset as native-packages because there are no native-packages, they can have really nasty patches to allow gobject-introspection things to crossbuild, etc.)
Yocto claims to do it, but then it will give you broken output
I have never seen this. Do you have an example?
Yocto cross compiles thousands of open source packages? That’d be quite the feat! I’d love to learn more.
https://docs.yoctoproject.org/dev/overview-manual/yp-intro.html
Yocto is a sort of distro toolkit for embedded linux systems, so cross compiling is very important for them. Actually, the most impressive part to me is the sstate cache which does a really good job of only rebuilding exactly what needs to be rebuilt.
They mention they have a loongarch builder; does anyone know what hardware they use for building? I’ve been meaning to extend my homelab…
it’s one of those 400€ boards based on loongson 3a6000, they’re very nice machines, not the fastest but very sufficient (quadcores but with ipc comparable to intel 14th gen/amd zen4)