What we heard about Rust's challenges, and how we can address them
27 points by emschwartz
This article read a bit like "many words without saying much". Most notably, unless I'm straight up blind I'm not seeing anything along the lines of "And here are the 12 issues created to address these problems". Merely recognizing that these problems exist strikes me as rather pointless, as people have been doing exactly that for years. It doesn't help either that at least the first draft was written by an LLM.
Now I'm not saying solving this will be easy or that Rust maintainers are doing a bad job, far from it. For example, LLVM is usually a significant source of the "compile times are slow" problem and there's sadly not much you can do about that other than pushing as little IR to LLVM as possible or by not using LLVM in the first place, both of which require a non-trivial amount of work to create an alternative.
I think what I am saying is that Rust could benefit from having clearly defined leadership with a clear vision and the means to execute upon that vision. Not necessarily a BDFL, but at least somebody willing to say "This is what Rust will be in 10 years from now".
I feel Zig is doing a better job in this regard: say what you will about the language or some of the people working on it, but the hierarchy and vision are clear and (as far as I know at least) anything not within that vision gets the boot.
It doesn't help either that at least the first draft was written by an LLM.
God damn. I thought so. I could sniff out the hallmarks reading it, but I have enough trust in the Rust project that I kept reading assuming that my slopdar was misfiring. Not so, it seems. Now I feel like I've wasted my time and I've lost some amount of trust for the project.
I wish this kind of behaviour was more widely seen as completely unacceptable. It's spam, essentially. A cognitive DoS which burns up trust and goodwill for no good reason.
The headings gave it away for me.
I’ve read the async section multiple times and still cannot figure out what the three horsemen are.
This is embarrassing. They should retract the post.
I’ve read the async section multiple times and still cannot figure out what the three horsemen are.
If I'm reading it correctly:
I agree with you that it's very unclear and requires careful reading. I would expect an expression as vivid as "three horsemen" to be explicitly spelled out.
both of which require a non-trivial amount of work to create an alternative.
In progress. rustc_codegen_cranelift is in development as a more Go-like "fast but limited optimization" compiler backend that they eventually want to make default for debug builds.
They're also working on Relink don't Rebuild to further extend the principles of incremental compilation to speed things up.
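For anyone who wants to try it today, the Cranelift backend can already be opted into on a nightly toolchain (after installing the rustc-codegen-cranelift-preview component via rustup). A sketch of the documented setup, with the caveat that platform support still varies:

```toml
# .cargo/config.toml -- use Cranelift for debug builds (nightly only)
[unstable]
codegen-backend = true

[profile.dev]
codegen-backend = "cranelift"
```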
I'm extremely disappointed in the use of AI here.
I'm genuinely interested in these problems and their solutions, but summarizing things that I think a lot of us already know doesn't feel like effective communication.
For a long time, I've felt like a crazy person whenever I read very pro-Rust articles or comments and they say "compile times in Rust is a solved problem" because "just use a fast linker" or "yeah but incremental builds are fast".
Really glad there's now some actual evidence that I can point to confirming it's not just me.
There are a lot of ways to improve build times in Rust, but oftentimes they require some effort and planning (e.g. splitting the codebase into many crates, setting up a faster linker), or at least avoiding the biggest mistakes (e.g. rebuilding everything from scratch in your CI).
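As a concrete example of the "faster linker" setup (the target triple here is an assumption about the build machine; mold is another popular choice):

```toml
# .cargo/config.toml -- link with lld instead of the default system linker
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
```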
Having said that, no one would mind faster compilation times; improving them has been a continuous effort.
I don't want to be too negative as I appreciate all the efforts to improve Rust, but I don't see much new/interesting info in this particular article.
It's kind of awkward, because it's hard for any one person to speak for the entire project (which is a collective of hundreds of more or less independent people). These are definitely all things that people are working on, but things like improving compile times usually come as incremental improvements that take a lot of work from many different directions and add up over time.
I see no "How" in the entire article.
Especially for things like compilation speed issue or architectural lock-in. These appear to be fundamental to the architecture of Rust. People have been banging on them for years and seem to be no closer to a solution.
To fix architectural lock-in, you MUST fix the Orphan Rule before anything else will matter.
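To make the constraint concrete, here's a minimal sketch of what the orphan rule forbids and the usual newtype workaround (the types are arbitrary examples, not anything from the article):

```rust
use std::fmt;

// The orphan rule: you may only implement a trait for a type if the trait
// or the type is defined in your own crate. Implementing a foreign trait
// (std's Display) for a foreign type (Vec<u8>) is rejected:
//
//     impl fmt::Display for Vec<u8> { /* ... */ }  // error[E0117]
//
// The standard workaround is wrapping the foreign type in a local newtype:
struct Bytes(Vec<u8>);

impl fmt::Display for Bytes {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for byte in &self.0 {
            write!(f, "{byte:02x}")?;
        }
        Ok(())
    }
}

fn main() {
    // The wrapper, not Vec<u8> itself, gains the Display impl.
    println!("{}", Bytes(vec![0xde, 0xad])); // prints "dead"
}
```

The newtype works, but every consumer crate has to define and convert through its own wrappers, which is part of the "architectural lock-in" complaint.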
Fixing compilation slowness means you have to somehow make the language file-local, and there are far too many Rust constructs that aren't.
These would be very breaking changes and I think they're simply too baked in at this point.
Rust was a really nice first step, but we now need something else to take the next step.
Nothing about crates.io namespaces?
Go continues to lengthen its lead over Rust in terms of quality of the library ecosystem because if someone writes a great new Go library for YAML or XMPP or whatever they can just publish it, and I as a consumer can import "github.com/jdoe/go-yaml/yaml" without caring that there's like fifty YAML libraries for Go already.
This is not the case for crates.io, where the library name == the package name and every relevant technology already has a v0.x WIP taking up its name. I want to parse YAML in Rust? There's https://crates.io/crates/yaml but it just wraps a C library, I could write my own but then I couldn't put it on crates.io because the name's taken.
One of my projects is a FUSE library, but https://crates.io/crates/fuse is taken and crates.io won't let me namespace it as github.com/jmillikin/fuse or john-millikin.com/fuse. Same with a parser/disassembler using SLEIGH (taken by a libsleigh binding) or an implementation of SANE (taken by some abandoned unrelated thing with a dead homepage).
Whenever this is brought up the crates.io maintainers are like "just think of a short code name like yamaramarado or yonkers if you want to publish your library" because they think code names are cool and namespaces are unaesthetic, which is why I still reach for Go (or Java, hell even Swift) over Rust if I'm expecting to use lots of dependencies for a project.
Intentional sniping aside, there's little difference between jmillikin/fuse and jmillikin_fuse.
jmillikin/fuse would be a FUSE library written by jmillikin, jmillikin_fuse would be a library for "jmillikin FUSE" (whatever that is) written by anyone.
There's no expectation that (to pick a few arbitrary examples) google-cloud-bigquery is written/maintained by Google, or openai is written/maintained by OpenAI.
Also, I don't particularly care what the format of the namespace is. I used my GitHub username and personal domain name as examples because they're easy to understand, but if crates.io issued every user a personal UUID and I could publish my library as {1912c3dc-85b8-4598-bf67-0ad4056d5362}/fuse then that would work just as well.
and I as a consumer can import "github.com/jdoe/go-yaml/yaml"
and let me tell you, the magic of go mod edit -replace where the code can still have imports of "github.com/jdoe/go-yaml/yaml" but I can stop it from literally using that repo and instead use my locally cloned copy where I (have fixed / am fixing) a bug is awesome
But, it also cuts both ways when people do this shit in a repo (e.g. not on their machine but to everyone without warning): https://github.com/pulumi/pulumi-terraform-bridge/blob/v3.124.0/go.mod#L310 like goddamn just use a copy of sed to fix the imports in your code to not be a straight-up lie
is it bad practice to publish a repository with a replace directive in the go.mod file? if go developers know that import paths can be mutated using the replace directive, how is it a "straight-up lie"?
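For readers unfamiliar with the mechanism being debated, a replace directive rewrites where a module's source comes from without touching any import paths. A hypothetical go.mod (the module names are invented for illustration):

```
module example.com/myapp

go 1.22

require github.com/jdoe/go-yaml v1.2.3

// All code still says import "github.com/jdoe/go-yaml/yaml", but the
// build now reads from the local clone instead of the published module.
// Added via: go mod edit -replace github.com/jdoe/go-yaml=../go-yaml
replace github.com/jdoe/go-yaml => ../go-yaml
```

Note that replace directives only take effect when the module is the main module being built; they are ignored when the module is consumed as a dependency, which is part of why committing them to a published repo surprises people.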
one way to achieve this might be to run your own registry, and publish it on crates.jmillikin.com as fuse. i am hacking on a service that allows users to quickly boot up cargo registries on their own domains, and publish packages on there. this might allow a sort of "namespacing" with crates. given that cargo typically operates on a single index (crates.io), increasing the number of registries would create N indices for package resolution, and cause the version solver to slow down, is my guess.
also frustratingly, crates.io does not allow publishing crates that have non-crates.io registry dependencies in them:
Note: crates.io does not accept packages that depend on crates from other registries.
which inherently causes a centralizing effect: the whole cargo ecosystem is beholden to crates.io now. perhaps there is a need for a minimal-version-selection based package manager for rust (one that uses a package resolution mechanism similar to go's)?
Running my own Cargo registry is an option, but frankly I don't want to. It seems like a huge hassle. I want to upload a source archive to a third-party host and then not worry about it.
Other build systems let the user specify a URL and checksum, so you can put a .tar.gz pretty much anywhere and it's just a regular static file. The whole C/C++ ecosystem works like this -- there's no package registry for (e.g.) zlib, you just put (urls=["https://github.com/madler/zlib/releases/download/v1.3.1/zlib-1.3.1.tar.gz"], sha256="...") in your build config.
Cargo is relatively unique among build systems for low-level languages because it's imitating the tooling found in the web ecosystem, where there's an expectation of the build tool connecting to an API to query available packages. Which in turn means you can't just put a tarball on a static hosting service, you need some sort of API server and all the annoyance that comes from that.
The centralization of the Rust package ecosystem around crates.io would be only slightly annoying if it reduced its scope. Right now crates.io serves as:
1. A package catalog: an index that resolves each dependency name to a (package, version) tuple.
2. A hosting service for package source archives.
3. An exclusive claim on names: nobody else can publish a yaml or fuse crate, because that's crates.io policy.

I have no objection to (1) and (2). Package catalogs have always existed (e.g. Freshmeat), free source code hosts have always existed (SourceForge, the GNU mirror network, modernly GitHub), and crates.io bandwidth/uptime is better than a lot of free hosting.
It's just (3). I wish crates.io would stop it with (3).
I'd personally really like to see the error story in Rust evolve a bit more; it's a little scattered for my taste. While it's a controversial view, I'm also often concerned by the number of dependencies pulled in by many of the popular packages. That said, I'm often an outlier on that latter part. I'm sure the ecosystem will mature in time as it's used in more safety-critical portions of the internet.
I fully agree with you on the latter. I pride myself on building most projects with just clap as a dependency, and seeing most projects at $WORK depend on hundreds of packages, I'm often scared of updating, since breaking changes and semantic versioning are a nightmare.
Same thing here, even with hobby projects. I'm a little worried just because most other ecosystems *cough cough NPM* haven't solved this yet, so there are no real established solutions (I guess it's sort of a social problem). I trust the foundation to make good decisions here.
The problem is that having fewer but larger packages is illusory, because you're still trusting at least the same number of people... possibly more since you can't omit the stuff you're not using from the build chain as readily.
That's why tools like cargo-supply-chain exist... to focus on number of people, not number of packages.
Personally I'd just rather multiple small packages from the same author get rolled into one package, similar to how tokio does it. Then from there I can opt in or out of features I do / don't need.
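For reference, that opt-in happens through Cargo features; a sketch of what it looks like for tokio (the feature names below are real tokio features, but which ones a given project needs is situational):

```toml
# Cargo.toml -- one umbrella crate, with pieces enabled via feature flags
[dependencies]
tokio = { version = "1", features = ["rt-multi-thread", "macros", "net"] }
# or, while prototyping, enable everything:
# tokio = { version = "1", features = ["full"] }
```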
Ideally some things like popular error handling get folded into the std, but I understand that there's a lot of lift that has to come with that.
Either way, once larger companies get more involved I get a bit less nervous. I'm, for example, not going to need to audit tokio, since it's so large and companies like Discord and Cloudflare use it regularly. They'd see issues far before me, and they have a vested interest in keeping those dependencies funded and safe.
Personally I'd just rather multiple small packages from the same author get rolled into one package, similar to how tokio does it. Then from there I can opt in or out of features I do / don't need.
That slows down building currently, because the parallel frontend for rustc is still in development and the crate is the unit of compilation, not the file.