Golang’s Big Miss on Memory Arenas
25 points by telemachus
It's not like we've stopped thinking about new approaches to memory management.
The post mentions neither memory regions (https://github.com/golang/go/discussions/70257) nor Green Tea (https://github.com/golang/go/issues/73581).
The former may not pan out (it still needs an implementation, though it remains promising), and the latter can be thought of as incremental (though on the GC side it's one of the biggest improvements we've had in years and, as far as we can tell now, has created new room to improve the GC further). Either way, we've been exploring the design space more widely and have found some promising pockets in it.
If you choose lower-level languages like Rust, your team will spend weeks fighting the borrow checker, asynchronicity, and difficult syntax.
This is SO tiresome to read. Has the author experienced this, or is it just spreading FUD?
My experience is that writing the core of a project in Rust can definitely devolve into fighting async. The syntax is just something you deal with, in any language. And "Fighting the borrow checker" is the same as saying "fighting bugs". But in any case, this is for the core of the project. Once that's done and/or is maintained by a small set of developers, the rest of the project built on top of that core becomes pretty boring, and whether it's Rust or not kinda disappears.
I'll also say, to those who claim that Go is trivial and can be learned in one day... yes, sure, you can learn to write bad code in one day. Learning all the idioms in Go takes time, and writing idiomatic Go code is not easy either. It just feels more productive at first, but if you aren't careful, you are gonna pile up complexity anyway, and the language isn't going to protect you from it once you decide it's time to refactor.
Sorry, I had to say it :P
It’s a legitimate “fighting the borrow checker” complaint that some standard ways of organizing data structures don’t work in Rust. Notably, self-referential structures and lack of partial borrows in signatures are stumbling blocks.
As a specific example, coming from Go, Java, or many other languages, you’ll probably start by writing a bunch of &mut self methods. Then you’ll discover you simply can’t do things that way, because those methods end up nearly unusable: you can’t call one while any other part of self is borrowed. Then you’ll realize “oh, that’s what all these split methods are for” — a pattern I’ve never seen in any other language.
I mean, technically you do stop fighting once you figure these things out, but it still results in weird-looking code.
This is SO tiresome to read. Has the author experienced this, or is it just spreading FUD?
Yeah, it comes off as one of those "trying too hard to fit into the club" attempts. Like a novice programmer saying "PHP sucks! am i right??".
I appreciated it though. It saved me some time from reading further.
And "Fighting the borrow checker" is the same as saying "fighting bugs".
Don't disagree, though the two aren't the same. There are legal, bug-free programs that the borrow checker rejects (today).
Seems like a good decision for golang. They prioritize simpler interfaces and having fewer features over performance.
There's no way they would add anything like lifetime annotations to make borrowing from arenas safe. Some magic for passing contexts implicitly could help, but that would be a new "clever" language feature that would change language idioms, and that's not Go's thing either.
I disagree with one of the first arguments: that writing high-performance Go is possible but only a top-tier programmer can do it, so languages with manual memory management or arenas are the solution.
I think that, basically, if you can figure out how to correctly use an arena or correctly do manual memory management, you can correctly write the unidiomatic Go code. The challenge is understanding the problem, and when you understand the problem, you understand the solution.
To be specific, in unidiomatic Go the tools you would use to implement "manual" memory management range from sync.Pool to pre-allocating slices, a la arena := make([]Foo, 1000). You can literally write your own malloc using unsafe if you really want to. I won't claim this will beat Rust/LLVM, but remember that Go also has cgo, which at least at one point had a benchmarked overhead of 40ns.
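To make that concrete, here's a minimal sketch of both tools in plain Go (the Foo fields, the handleRequest function, and the sizes are made up for illustration):

    package main

    import (
        "fmt"
        "sync"
    )

    type Foo struct {
        ID   int
        Data [64]byte
    }

    // fooPool recycles Foo values so hot paths avoid per-call heap allocations.
    var fooPool = sync.Pool{
        New: func() any { return new(Foo) },
    }

    func handleRequest() {
        f := fooPool.Get().(*Foo)
        f.ID = 42
        // ... use f ...
        *f = Foo{} // zero it before returning it to the pool
        fooPool.Put(f)
    }

    func main() {
        // A pre-allocated slice used as a crude arena: one allocation up front,
        // elements handed out by index, no per-object GC pressure.
        arena := make([]Foo, 1000)
        next := 0
        alloc := func() *Foo {
            f := &arena[next]
            next++
            return f
        }

        a, b := alloc(), alloc()
        a.ID, b.ID = 1, 2
        fmt.Println(a.ID, b.ID)

        handleRequest()
    }

None of it is pretty, but it's all memory-safe Go (no unsafe needed), and the GC only ever sees a handful of long-lived objects.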
Consider that people have written HFT stacks in Java. Obviously you are going against the grain when you avoid memory allocation in a GC language, and you do end up writing C-like code, but the advantage is that you're still using a memory-safe language. And even if you use unsafe/JNI in one part of your code, all the other parts still get the convenience of a GC language.
The cost of cgo will likely be a bit better in Go 1.26 (https://go.dev/doc/go1.26#faster-cgo-calls, ~30 ns), and the scalability bottleneck the person in that blog post observed will (I think) also be fixed (https://go-review.googlesource.com/c/go/+/646275/41).
I’m a bit annoyed by this post, but it is due to a common issue.
I am not a rust dev; I work primarily in C++. But I know how to write code in rust - I spent a few months at a company writing code in rust a few years ago, prior to a bunch of the improvements around the borrow checker’s management of lexical scope. It was /fine/. There was no terrible fight with the borrow checker, and it did not take weeks to get things done due to fighting it.
The borrow checker, even then, was reasonably understandable. There are some practices that can be used in other languages that don’t work in rust - but this is true for any pair of languages you choose to compare.
I am increasingly convinced that there are two core groups of people responsible for the “omg the borrow checker is so hard” mythos.
The first group is people who have come from managed languages to rust with the mindset of “I can write low level code without having to do memory management”. People in this group seem to be coming into it thinking that not worrying about memory management means not thinking about it. For them the borrow checker is a legitimately hard thing to learn, but the difficulty is not “the borrow checker” it is simply the requirement to be aware of object lifetimes. Not in the rust annotation sense, but the concept itself - it would be just as hard, if not harder, for them to move to any non-GC language.
This is not to denigrate the skill of these devs, any more than it would denigrate me to point out that my C experience doesn’t help me write SQL.
I think the other group is the problem though: C, C++, etc developers who understand memory management and move to rust. But then keep trying to write what is effectively just C/C++ code with a different syntax. Because it’s low level and similar in syntax it’s not quite as overt, but essentially what they’re doing is akin to a C dev going “I tried to write Haskell but I had to use monads everywhere” because they were still trying to write C.
The result is that people unfamiliar with memory management come to rust and find the borrow checker confusing, but when they search they find all these people who do understand memory management also complaining about it.
So they conclude that the borrow checker is hard because it is too complex or confusing, not because they haven’t yet learned to think about the lifetime of an object.
A language feature I wish more languages would borrow is dynamic scope. Yes, global variables are bad and cumbersome, and algebraic effects compose nicely, but a good middle ground is dynamically scoped variables. Something like this (pseudocode):
    var CONTEXT = 1

    fn foo():
        if CONTEXT is Some:
            if CONTEXT > 10:
                return
            print(CONTEXT)
            with CONTEXT = CONTEXT + 1:
                foo()

    foo()
This would print 1 2 3 4 5 6 7 8 9 10.
You can use this to dynamically scope systems like allocators, i/o handlers, exception handlers, effects handlers, graphics contexts, etc. It's a surprisingly easy way to build composable systems without alternatives like inheritance or additional function args.
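Go doesn't have this, but for comparison, here's a minimal sketch of the closest idiomatic approximation today, threading the value through context.Context (the depthKey type and the cutoff of 10 just mirror the pseudocode above):

    package main

    import (
        "context"
        "fmt"
    )

    // depthKey is an unexported key type, so values set by other
    // packages can't collide with ours.
    type depthKey struct{}

    func foo(ctx context.Context) {
        // Read the "dynamically scoped" value; zero if unset.
        depth, _ := ctx.Value(depthKey{}).(int)
        if depth > 10 {
            return
        }
        fmt.Println(depth)
        // Like "with CONTEXT = CONTEXT + 1": the new binding is visible
        // only to this call subtree.
        foo(context.WithValue(ctx, depthKey{}, depth+1))
    }

    func main() {
        foo(context.WithValue(context.Background(), depthKey{}, 1))
    }

The obvious difference is that the ctx parameter has to be threaded through every call explicitly, which is exactly the plumbing a built-in dynamic scope would remove.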
There was an interesting blog post about context in Go that suggested using dynamic scope instead: https://blog.merovius.de/posts/2017-08-14-why-context-value-matters-and-how-to-improve-it/
I think if you had goroutine-local dynamic scope per package instead of package-global variables, you would have a more elegant system for inherited context that is both typesafe and goroutine-safe.
I don’t think goroutine-local would be very useful. It’s a feature that context works independent from goroutines, not a bug. Imagine if you passed a context to something, and it used multiple goroutines or a pool of them behind the scenes.
The title is click-bait, and the author makes his case in awfully broad strokes (to put it charitably). But I'm still curious what people think.
Years ago, Context was introduced as an opt-in feature for timeouts, but it effectively "infected" the entire language. Today, nearly every function signature begins with ctx, and the Go team hated the idea of a "second Context" in the ecosystem.
People often say this, but it's just not true. Context is only used in code that wants cancelation; that's not all or most use cases. The author's example Go repo, esbuild, has zero uses of context. In the stdlib, context is only used, I believe, in two other packages (net and signal).
The fact that people use Go mainly for networking means it appears a lot in that space, but it's not a language-wide concern. It's worth noting that this issue of cancellation is often simply ignored in other languages.
Context is only used in code that wants cancelation
That’s not true. Contexts provide cancellation and dynamic scope (e.g. for request-scoped variables such as a request ID or username). While cancellation is a (mostly) network-specific issue (it’s also relevant in compute-heavy tasks!), dynamic scope is a generally useful technique. Moreover, in order for code that wants cancellation or dynamic scope to receive a context, it must be passed a context, whether or not the caller itself cares about cancellation or dynamic scope.
In my opinion, there should be a single call to context.Background; it should be made on the first line of main, and every function should have ctx context.Context as its first parameter.
If there ever were a Go 2 (there almost certainly won’t be), I would strongly support making contexts a syntactic part of the language itself, so that we need not import context into every single other package and add a parameter to every single function.
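As a rough sketch of that convention (run and step are just hypothetical names):

    package main

    import (
        "context"
        "fmt"
    )

    func main() {
        ctx := context.Background() // the only call to context.Background
        if err := run(ctx); err != nil {
            fmt.Println("error:", err)
        }
    }

    // Every function takes ctx as its first parameter, whether or not
    // it currently cares about cancellation or request-scoped values.
    func run(ctx context.Context) error {
        return step(ctx, "hello")
    }

    func step(ctx context.Context, msg string) error {
        select {
        case <-ctx.Done():
            return ctx.Err() // cancellation propagates with no extra plumbing
        default:
        }
        fmt.Println(msg)
        return nil
    }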
You can look for references to Context at https://cs.opensource.google/go/go/+/master:src/context/context.go;l=71;drc=91267f0a709191d8f1e3f4fa805c9d88856f9957;bpv=1;bpt=1 (click on the Context name) which tells you there are ~700 references to Context in the go repo. Unfortunately the UI doesn't show all of them and you can't restrict it to only show public stdlib packages. Anyway, I'm sure at least slog uses it too.
It's easier to check the API: https://github.com/golang/go/tree/master/api
A very cursory search shows 88 functions/methods that take a context, out of 11764: roughly 0.8%.
You're right, I forgot slog and database/sql, plus a couple other packages sprinkled on.
As a JavaScript/TypeScript developer and the (main) author of possibly the slowest JavaScript engine to cross some arbitrary level of "pretty close to feature complete", I have to say: I doubt there is any fear of JavaScript eating Go's lunch on the performance side of the table. V8 and others of course do amazing things, but there are just such deep and fundamental issues with JavaScript objects that it's hard to imagine things majorly changing between JS and Go.
Though I guess there are possibilities: shareable structs have been proposed for JS and would solve at least part of the fundamental issues, and JS of course has TypedArrays that can be used for arena allocation if you really want to get close to the metal (and why wouldn't you!).
It's definitely going to land you in the realm of "nonidiomatic code" though :)
This is why so much infrastructure is written in Go.
It’s why Evan is the sole contributor to the project.
These absurd assertions and the general LLM-generic-journalistic tone made me stop reading before I even got to the main argument, sadly.
It’s brilliant code, but it’s not the kind of Go most teams write or can maintain.
I looked at the code, and I don't see anything hard to maintain.
The next sentence is also false:
It’s why Evan is the sole contributor to the project.
One person has almost all the commits, but there are 119 committers on https://github.com/evanw/esbuild
Let's say 50% of them commit doc typo fixes, which don't count.
That's still ~60 people who can modify the codebase. That's more than most open source projects have.
This product manager/tech 'advocate' way of talking about programming languages is the only thing more obnoxious to me than programming language fanboy behavior.