Go is still not good
75 points by rrm
Reasonable post, but just because Go has all of these warts doesn’t mean that Go isn’t good. Go is good for the use cases it’s meant for. It’s a good little (opinionated as hell) language.
Engineers make perfect the enemy of good, all too often. It is good to have that quest for perfection, but not good to be ruled by it.
Agreed, there are a lot of things I don’t like Go for (personally, because it’s largely subjective), but let’s be real for a moment: “good” is a comparative judgment.
If your choice is Java, Ruby, Python, or Go, you won’t really go wrong with Go.
It’s “good” in that it’s easier to make it faster than Python and Ruby, and it has a much easier development and deployment story than Java. I mean, to be honest that’s not just good, that’s GREAT.
Just because Go doesn’t tick every single box, doesn’t mean it’s not good, even if it has a lot of things that I personally dislike.
I haven’t met a language that isn’t unlikeable in some way. I find Go unlikeable in a few ways, and other languages, generally, in more ways.
Go isn’t good…but it is good enough.
I wanted to like it, but I’m not a fan of Go from the few times that I have used it. But I admit that it’s a fine language, and it does its job pretty well. If I had to use it, day in and day out, I would learn to love it. In the meantime, I will simply respect it.
Look, it’s an opinionated language, and the way that it’s opinionated is different from the way that I’m opinionated. Neither right nor wrong, but definitely different. But the Go opinions were based on rational arguments, even if I personally think that the designers came to the wrong conclusions. (Rationality doesn’t guarantee correctness.) So there’s plenty to respect, and nothing to disprove.
And “good enough” is hardly an insult in our world. It’s actually quite a compliment.
Reasonable post
Strongly disagree with that. I would say the post correctly identifies a couple of real issues (and blows some other things out of proportion), but the tone is completely unreasonable.
It didn’t have to be this way
We knew better. This was not the COBOL debate over whether to use symbols or English words.
That is not a reasonable attitude. If you think you can do better than Ken Thompson, Rob Pike, Russ Cox, et al. go ahead and don’t keep the rest of the world waiting. 🙄 Go is a pretty good language with some flaws. You can debate whether the flaws are deal breakers for you or just unfortunate warts, but pretending like they should have just gotten gud is silly. Anything anyone makes will have flaws caused by the blindspots of the team that makes it, and also just plain bad luck. This is the nature of the world.
Go fans could have endless rants about how Rust’s slow compilation times are signs that the team are a bunch of idiots who don’t know anything about programming blah blah blah… Or we could acknowledge that there are tradeoffs and the Rust team had different priorities and they made a few mistakes and the result is slow compilation, which now smart people are working to mitigate.
Go fans could have endless rants about how Rust’s slow compilation times are signs that the team are a bunch of idiots who don’t know anything about programming blah blah blah…
I am bothered by this comment. Mainly because it appeals to the worst side of people like me. In other words, as a human, I have this very petty “voice inside my head” that wants to be heard (i.e. heard by the world, since I already hear him incessantly), and I have to work very hard not to let him use my keyboard or my vocal cords, because he’s such a complete douchebag. And he says stuff like that (“Rust’s slow compilation times are signs that the team are a bunch of idiots”), all the time.
Rational me wants to believe that Go fans are too busy having fun building cool things in Go to worry about Rust compilation speeds. I also want to believe that Go fans want Rust fans to be too busy having fun building cool things in Rust to worry about Go syntax (or whatever). Rational me wants to believe this because someone else’s joy should – if anything – add to my own, not subtract from it.
If you think you can do better than Ken Thompson, Rob Pike, Russ Cox, et al.
We should reject the “appeal to authority” argument. Not because it’s a bad argument, but because most of the smartest people we’d use in these appeals to authority would tell us to knock it off – because they’d love nothing better than for us to go out and build something better than they were ever able to accomplish themselves. And the one guy who’d be absolutely sure that you couldn’t possibly do a better job than he did is Bjarne Stroustrup, which should tell you everything you need to know! 😂
“Reasonable post” Strongly disagree with that.
To be clear, I think that the technical content of the post was reasonable. The title (“Go is still not good”) was not constructive, and definitely appeals to that douchebag inner voice that I’m trying to prevent getting any more access to my keyboard.
But it’s also important that we do not allow ourselves to attach our own sense of self worth to the tools that we happen to be using, or to the tools that we happen to enjoy or find ourselves emotionally attached to. And this is still a work in progress for me.
Go is good at being very useful. It turns out that none of the criticisms against it are really obstacles to that objective. All manner of software has been written with it, software that is almost always reliable and efficient, and whose codebase is not thrown out and rewritten at every turn.
One day Go will be used to write software for a cold fusion nuclear reactor solving all of humanity’s energy needs and a new article will appear saying “Go is still not a good language”.
It’s like complaining that Phillips head screws are bad design, that Robertson or Torx are superior, ignoring that our entire civilization is built on Phillips screws.
To criticize Phillips head screws is not to ignore that our civilization is built on them. There are many things that our civilization is built upon for which we’ve nonetheless found better alternatives.
And there are even more things that used to be essential but were eventually replaced by those alternatives. Civilization was at different points built upon horses, steam engines, etc., but no longer is.
If we can’t criticize the most widely used pieces of technology, we can never move beyond them. You will find Phillips head screws in fewer new designs (usually replaced by Torx) than in the decades before, precisely because people identified the problems with them and went looking for better alternatives.
Yes, in fact if our civilization wasn’t built on them we probably wouldn’t be complaining. In order for something to be complained about it needs to be 1. used and 2. not perfect. Most things aren’t perfect, so the things in widest use get complained about the most.
So if you see people complaining about Go it is because it has lots of attractive qualities but also has things that people dislike. Which is to be expected of just about anything.
almost always reliable
I don’t feel this way about Go. Maybe it is less likely to crash than, say, Python, but it is far more likely to just do the wrong thing “successfully”, which is usually much worse.
I think the biggest offender here is every type having a default value. If I had a dollar for every time a bug I was experiencing was the result of a map lookup returning a default value, I’d be able to afford a pretty nice coffee. The `append` function in the article also causes bugs I have seen in the wild. `printf` writing junk into the output rather than failing when used incorrectly has also bitten me once or twice in the past, but this is caught by first-party tooling pretty reliably (though not the compiler), so it isn’t as common.
This seems to be a bit of a pattern. Maybe Go has just as many bad APIs and footguns as Python or Ruby, but they tend to result in doing the wrong thing quietly rather than failing. And I’ll take a failure over corrupted data any day.
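The zero-value lookup behavior described above can be sketched in a few lines; `prices` and `lookup` are illustrative names, not anything from the article:

```go
package main

import "fmt"

// lookup shows the two lookup forms: plain indexing silently yields the
// zero value for a missing key, while the comma-ok form reports absence.
func lookup(m map[string]int, k string) (int, bool) {
	v, ok := m[k]
	return v, ok
}

func main() {
	prices := map[string]int{"coffee": 4}
	fmt.Println(prices["tea"]) // 0: missing key, zero value, no error

	v, ok := lookup(prices, "tea")
	fmt.Println(v, ok) // 0 false: only comma-ok reveals the key was absent
}
```

The comma-ok form exists precisely to distinguish “absent” from “present but zero”, but nothing forces callers to use it.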
If I had a dollar for every time a bug I was experiencing was the result of a map lookup returning a default value I’d be able to afford a pretty nice coffee.
IDK, I used to work with a Python team that was somehow completely confused about how `dict`, `.get`, and `.setdefault` worked, to the point that they would write their own little utilities that just duplicated the built-in methods. I think lazy devs are just always going to struggle with dictionaries.
I mean sure, some people will go out of their way to write bad code. But in Go there are quite a few footguns in the standard library/builtins. I love that `[]` throws by default in Python rather than returning `None`. But Go is even worse than returning `nil`, because it can return a default object that might be meaningless. (In theory you should make sure your default objects are “reasonable”, but people won’t do this perfectly for every type.)
Really the only language that made a worse choice here is probably C++’s `std::vector`, where out-of-range indexing is just undefined behaviour. But that is sort of how C++ rolls, and I don’t think they get many back-pats these days for it.
I think returning `None` would be a shitty default, because `None` tends to get passed around and explode down the line in an unrelated place when you do a string operation on it. But for a statically typed language like Go, returning an empty string for a string->string dictionary is really useful and makes the code a lot simpler.
I’m surprised unused variables blocking compilation didn’t make it into this list. That makes me see red every time I encounter it. (Yes, IDEs can work around this, etc etc.)
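For readers who haven’t hit it: an unused local variable is a hard compile error in Go, and assignment to the blank identifier is the conventional escape hatch. A tiny sketch (`silenced` is an illustrative name):

```go
package main

import "fmt"

// silenced shows the idiom: assign to the blank identifier to keep a
// deliberately unused variable without tripping the compiler.
func silenced() string {
	x := 42
	// Without the next line the build fails with a
	// "declared and not used" error; it is an error, not a warning.
	_ = x
	return "compiles"
}

func main() {
	fmt.Println(silenced())
}
```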
“There are two kinds of languages: ones that some people hate, and ones that nobody uses.” (Paraphrasing Stroustrup)
I’ve never understood this aphorism. A language is popular… therefore we can dismiss criticisms of its design decisions out of hand? I can see why the creator of C++ would say that though lol
It does make some sense, in that people don’t complain about languages that they never get time to try out. It’s Bjarne, so I would have to guess that he’s saying or at least thinking “people complain about C++ because they’re actually using it, unlike all those other loser languages that no one uses”. And he’s not wrong, in that particular regard. OTOH, the part he’s refusing to acknowledge is the stunningly high correlation between “people who have actively used C++” and “people who complain loudly about C++”.
I think it is more a perspective thing. If you hear a huge volume of complaints about Go and few about Haskell, it isn’t necessarily because Haskell is better. You sort of need to weigh it by popularity. Plus complaints often come from places of passion. People typically need some amount of interest in a language to identify the flaws and care enough to complain.
that `append` example is insane, what is going on there??
`append()` will add to the currently allocated buffer for the slice if it still has capacity for the element. In that case, the return value will be the same as the argument you pass to it.
But if there isn’t enough capacity, it allocates an entirely new buffer, copies the existing elements over, appends the new ones, and returns that.
So `append()` may behave either like an in-place mutating append or like a persistent “return a new object” API, with the choice depending on a mostly hidden capacity number.
It seems like a very brittle, confusing API. If you are rigorous about following the idiomatic use, where you always assign the result to the same variable as the first parameter, then it tends to work out fine. But heaven help you if you don’t know to follow that idiom.
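The capacity-dependent aliasing can be demonstrated in a few lines; the behavior below is standard Go, and `appendDemo` is just an illustrative wrapper:

```go
package main

import "fmt"

// appendDemo returns what the original slices see after two appends:
// one that fits in capacity (and so aliases the old buffer) and one
// that exceeds it (and so reallocates).
func appendDemo() (afterAliased, afterRealloc string) {
	s := make([]string, 2, 3) // len 2, cap 3
	s[0], s[1] = "a", "b"

	t := append(s, "c") // fits in cap: t shares s's buffer
	t[0] = "X"
	afterAliased = s[0] // "X": the write through t is visible via s

	u := append(t, "d") // exceeds cap: a fresh buffer is allocated
	u[0] = "Y"
	afterRealloc = t[0] // still "X": u no longer aliases t
	return
}

func main() {
	a, b := appendDemo()
	fmt.Println(a, b) // X X
}
```

The same call site behaves differently depending on a capacity number the caller usually cannot see, which is exactly the brittleness being complained about.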
A more bullet-proof API would be for the first parameter to be a pointer to a slice (i.e. `*[]T`). Then `append()` could do the assignment itself and ensure that it is always overwriting the passed slice reference. But maybe the double indirection was considered too confusing.
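A minimal sketch of that pointer-taking variant, assuming a hypothetical generic helper `appendTo` (this is not a standard-library function; it needs Go 1.18+ for generics):

```go
package main

import "fmt"

// appendTo is a hypothetical pointer-taking append: it performs the
// reassignment itself, so callers cannot forget the s = append(s, ...) idiom.
func appendTo[T any](s *[]T, vals ...T) {
	*s = append(*s, vals...)
}

func main() {
	nums := []int{1, 2}
	appendTo(&nums, 3, 4)
	fmt.Println(nums) // [1 2 3 4]
}
```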
Even that wouldn’t fix the unclear ownership problem, though. Take something like this:

```go
a := []string{"foo", "bar", "baz"}
b := a
a = append(a[:1], "spam")
a = append(a, "eggs", "salami")
fmt.Printf("a = %v, b = %v", a, b)
```
Should appending something to `a` implicitly mutate `b`? This case wouldn’t at all be helped by taking `a` as a pointer, since `&a` is still distinct from `&b` even if they mutate the same underlying buffer.
Fixing it would take either hiding the buffer behind a double pointer (à la Java: `[]T` becomes equivalent to the current `*[]T`, and mutation always propagates), making it always copy (à la Erlang; the copy could be optimized away if you are known to be the only thing referring to the buffer, either statically or by runtime refcounting), or moving to a write-xor-share memory model (à la Rust).
This is the best comment, illustrating the tradeoffs.
I love Go, and yes, I’ve been bitten by `append()` a small handful of times till I learnt the rules (as per the sibling comment). But at the same time, elucidating why this is a tradeoff is important, and once you’re used to the rules the language has chosen, you’re generally gonna be okay.
Go seems to have chosen this particular tradeoff to maximize “sliceability”, where arrays are almost always slices as opposed to direct pointers, which means they may get reallocated or whatever. You can have the direct case in Go (e.g. `[32]int`), but it’s very, very handy to just let slices be slices across function boundaries (as opposed to the glue required to differentiate a view from an array, à la Python, and `.items()` from a map or something).
I don’t know where I read it, but the problem with Go’s slices is that they mix/confuse being a slice and being a vector, for absolutely no good reason. It’s a bad abstraction.
I think you could probably make a version of Go that handled arrays and slices better, but I wouldn’t say it’s like this for “no good reason”. It was an attempt to improve C, and it’s certainly better than C. I think it might be even better if instead of just arrays and slices there were arrays, vectors, and views (where vectors own their backing and views just borrow it), but it would be a pretty huge conceptual shift and I don’t think it could have been done with the pre-generic Go.
I believe the slice is just a struct passed by value, so `append` always needs to return a new value to update the len field, even if the backing array doesn’t grow. Edit: So I don’t think it’s an idiom; it’s really the only way to use it.
That is correct; however, because it still updates the backing buffer in place, appending to any slice you did not create locally can have very odd side effects, as it’s effectively shared state.
Slices and maps are pointers, which feels unnatural and surprising given that structs are not. And then `append()` returns a new slice, but also may or may not mutate the slice argument.
Once you know this, it’s easy to stick to some rules, and linters catch some traps. But it feels like something went deeply wrong and then they doubled down.
Slices are not pointers, which is much of the issue. A slice contains a pointer to a backing buffer, but the slice’s length and capacity are stored “on the stack” next to the buffer’s pointer.
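A short sketch makes the header-vs-buffer split concrete (`headerDemo` is just an illustrative name; a slice value is a small pointer/len/cap header copied on assignment, as the deprecated `reflect.SliceHeader` type documents):

```go
package main

import "fmt"

// headerDemo shows the split: lengths diverge after reslicing a copy,
// because each copy has its own len field, but writes through either
// copy hit the same shared backing buffer.
func headerDemo() (lenA, lenB, aZero int) {
	a := make([]int, 2, 8)
	b := a    // copies the {ptr, len, cap} header; buffer is shared
	b = b[:1] // shrinking b's len leaves a's header untouched
	b[0] = 9  // but this write is visible through a
	return len(a), len(b), a[0]
}

func main() {
	fmt.Println(headerDemo()) // 2 1 9
}
```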
Technically not, which is precisely why they are so misleading.
You equated slices and maps; maps are just a pointer and behave completely differently.
I meant to clarify that slices are technically not pointers, not to say that you’re technically not correct.
I equated them in terms of the data being pointers, yet not having the usual pointer semantics. The technical detail of len/cap being on the stack, or all the built-ins around making this work, is beside the point. The API is not the result of some constraint; it is a designed exception to the language to make slices and maps ergonomic. But it’s a leaky abstraction, as you need to eventually learn all these little details.
The fun part is that they can be set to `nil`, adding to the confusion. But maps, which can also be set to `nil`, don’t have the same mutation semantics and require allocation, since there’s no `append` equivalent that would do it for you.
Yup, a nil slice is going to yield `{ptr: nil, len: 0, cap: 0}`, so `append` will allocate a fresh buffer and return a slice with nonzero capacity. But a hashmap, being a straight pointer, cannot be updated that way (well, technically nothing actually prevents it), so it’ll panic if you try to set an entry.
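A small sketch of that asymmetry (the behavior below is standard Go; `nilDemo` is just an illustrative wrapper):

```go
package main

import "fmt"

// nilDemo contrasts nil slices and nil maps: append happily allocates
// for a nil slice, and nil-map reads yield the zero value, but any
// write to a nil map panics at runtime.
func nilDemo() (sliceLen int, mapRead string, panicked bool) {
	var s []string
	s = append(s, "x") // fine: append allocates a buffer
	sliceLen = len(s)

	var m map[string]string
	mapRead = m["missing"] // fine: reads yield the zero value ""

	func() {
		defer func() { panicked = recover() != nil }()
		m["k"] = "v" // panics: assignment to entry in nil map
	}()
	return
}

func main() {
	sliceLen, mapRead, panicked := nilDemo()
	fmt.Println(sliceLen, mapRead == "", panicked) // 1 true true
}
```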
The “`defer` is dumb” argument refers to Java’s try-with-resources and Python’s `with`, but apart from the syntactic difference of it being a separate block, I don’t really see a difference? You still don’t know which resources need to be destroyed manually unless you read the documentation when you’re writing, and when you’re reading you’ll usually know within the next 1-3 lines. I would even argue that resources being semi-implicitly destroyed in try/with blocks is worse than just deferring, though that is more of a personal preference, I guess. Am I missing something?
One problem is `defer` being function-scoped. In Rust, for example, RAII is block-scoped. That makes it more than syntactically different:

```rust
let username = {
    let mut file = File::open("/path/to/file")?;
    let mut buf = String::new();
    file.read_to_string(&mut buf)?;
    buf
};
// more code...
```

is workable if verbose Rust, with the file descriptor closed on scope exit. So syntactically, `file` and `buf` do not pollute the outer scope, and semantically, the file is closed as soon as we’re done with it. In Go you would need extra function scopes for earlier `defer`s. That introduces an extremely hard problem: naming all these functions ;-)
In Python, for the context manager protocol, you can have `def __enter__(self) -> T`, which in `with Foo() as t:` will bind `t` to the `T` returned by `__enter__()`, meaning that if as an API user you want a `T`, you are guided to attain it through this managed approach. That means you (a) cannot forget to manage the external resource and (b) do not need to remember which method to call exactly (Go has `io.Closer`, but `with` is a language built-in and works across io, locks, anything really).
You don’t need to name the function:

```go
username, err := func() (string, error) {
	file, err := os.Open("/path/to/file")
	if err != nil {
		return "", err
	}
	defer file.Close()
	b, err := io.ReadAll(file)
	return string(b), err
}()
```
It’s non-intuitive behavior that has led to a number of bugs. It’s much more common to defer to the end of the scope (RAII and try-with-resources both do that), and it’s even easier on the runtime (no need for a dynamic “stack” data structure; think of deferring n things from within a loop).
It is especially problematic with locks, where it can lead to deadlocks.
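The function-scoped behavior is easy to demonstrate: the sketch below records event order with defers registered inside a loop (`loopDefers` is an illustrative name). Nothing runs per iteration; everything fires at function exit, in LIFO order, which is exactly why a deferred `Unlock` in a loop holds every lock until the function returns.

```go
package main

import "fmt"

// loopDefers records the order of events: every defer in the loop runs
// only when the enclosing function returns, in LIFO order, not once per
// iteration as a block-scoped mechanism would.
func loopDefers() []string {
	var events []string
	func() {
		for i := 0; i < 3; i++ {
			i := i // capture per-iteration value (needed before Go 1.22)
			defer func() {
				events = append(events, fmt.Sprintf("deferred %d", i))
			}()
		}
		events = append(events, "loop done")
	}()
	return events
}

func main() {
	fmt.Println(loopDefers()) // [loop done deferred 2 deferred 1 deferred 0]
}
```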
I have no real experience with Java, Python or Go. But in C#, when you use something that is `IDisposable` and requires either manual handling or the `using` statement (like Python’s `with`), the compiler will yell at you if you do not handle it (by default it is just a warning though, a much too common pattern with C#).
I don’t think the double `defer` is needed if a named return is used. Example:

```go
func example() (result string, err error) {
	f, err := openFile()
	// ...
	defer func() {
		err = f.Close()
	}()
	// ...
	return result, err
}
```
Yes, these arguments are right. It is so easy to tell how others made 10 stupid decisions among the 100,000 they had to make for a language. Congrats. Show me your language. Ah, you don’t have one.
Yeah, you tell ’em, it’s not like most of the criticisms against the language were not leveraged as soon as it was unveiled. Oh wait, they were.
(aside: the weird English word we use is “leveled” (or “levelled” in the UK) not “leveraged” https://idioms.thefreedictionary.com/level+against … I have no idea why.)