Go is an 80/20 language
70 points by carlana
As someone who is responsible for 100K+ SLOC out of a 1M+ Go codebase which is full of:
I feel like the author must be in a very different situation at work to have written statements such as “[the absence of generics] served Go well for over a decade.”
I’m literally in the process of migrating our testing assertions to a small custom library which diagnoses more type errors statically, using generics.
Go may be an 80/20 language, but one needs to ask how many areas of a sufficiently large codebase are going to fall in the “20” bucket.
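A generics-based assertion helper of the kind described might look something like the sketch below. All names here are hypothetical (this is not the commenter's actual library), and a real version would take a *testing.T rather than panicking:

```go
package main

import "fmt"

// Equal panics unless got == want. Because got and want must share a
// single comparable type T, mixing up types is caught at compile time
// rather than at test runtime. A real test helper would accept a
// *testing.T and call t.Fatalf instead of panicking.
func Equal[T comparable](got, want T) {
	if got != want {
		panic(fmt.Sprintf("got %v, want %v", got, want))
	}
}

// UserID is a hypothetical domain-specific type.
type UserID string

func main() {
	Equal(UserID("u1"), UserID("u1")) // ok: both arguments are UserID

	var raw = "u1"
	_ = raw
	// Equal(UserID("u1"), raw) // does not compile: UserID vs string

	fmt.Println("assertions passed")
}
```

The type-parameter constraint is what moves the error from runtime to compile time: an interface{}-based assertion would accept mismatched types silently.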
One of my favorite things about Go is how easy and idiomatic it is to use domain specific types compared to other languages that make you jump through hoops to newtype a primitive (e.g., type MyID string). Why isn’t your code base doing this, and why do you seem to be faulting Go for this?
I wish Go had sum types too, but at my company we define a C-like enum for the different variants and make it the tag field on a struct to tell us how to interpret the other fields. The result is functionally a sum type but lacking any compiler support to enforce comprehensive pattern matching. It does, however, make it clear what variants are available. I’m not sure that any kind of sum type alone is going to make it clear what state transitions are available (although there are other ways to make state transitions explicit), but maybe I’m missing something here.
I’m not sure what the specific objection is here. We have a bunch of ad hoc iteration patterns too, but I’ve never been chafed by it—our iterators are typically very localized though; if you have them as part of your public interface that would probably be bad. If it’s causing problems, your project leads should probably just tell people to implement the standard library iterator interface, right?
I’ll take your word for it, although I’m not really sure how to use a map instead of an ordered map without a bug, and map[K]struct{} is a set.
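For reference, the map[K]struct{} set idiom mentioned here looks like this:

```go
package main

import "fmt"

func main() {
	// struct{} occupies zero bytes, so map[K]struct{} is the
	// conventional way to express a set in Go.
	seen := map[string]struct{}{}
	seen["a"] = struct{}{}
	seen["b"] = struct{}{}

	_, ok := seen["a"] // membership test via the comma-ok form
	fmt.Println(ok, len(seen))
}
```

Using struct{} rather than bool as the value type signals intent and avoids storing a payload per key.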
I’m curious about how assertions would help clarify whether pointers are mutable or not in a given part of the code? Go has assertions in the sense that you can check an invariant and panic if it doesn’t hold (and yes, this is perfectly idiomatic), but I don’t see how that helps you ensure pointers are not used immutably? Personally I wish Go had a way to define a pointer or value type as immutable.
- One of my favorite things about Go is how easy and idiomatic it is to use domain specific types compared to other languages that make you jump through hoops to newtype a primitive (e.g., type MyID string). Why isn’t your code base doing this, and why do you seem to be faulting Go for this?
I believe the difference is that when someone specifies a “domain specific type” they generally mean one enforced by the compiler, i.e. you can’t sub in another type for it. type MyID string is a type alias, and not actually a new type. You can use MyID and string interchangeably.
While this provides benefits for readability and documentation, it’s very different from the compiler enforcing that this parameter is always MyID. The latter makes the small bugs that can crop up in code (especially during refactors) virtually non-existent.
No, that’s exactly wrong. It’s a new type. You cannot use MyID and string interchangeably.
With an equals sign, type MyID = string, you get a type alias. This was added in Go 1.9.
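A short sketch of the difference (takeID and takeAlias are illustrative names, not standard API):

```go
package main

import "fmt"

// MyID is a defined type: its underlying type is string, but it is a
// distinct type, so a string variable cannot be used where MyID is
// expected without an explicit conversion.
type MyID string

// MyAlias is a type alias (note the =): MyAlias and string are the
// same type and are fully interchangeable.
type MyAlias = string

func takeID(id MyID) string      { return string(id) }
func takeAlias(s MyAlias) string { return s }

func main() {
	var raw string = "abc-123"

	// takeID(raw) // does not compile: cannot use raw (string) as MyID
	fmt.Println(takeID(MyID(raw))) // explicit conversion required

	fmt.Println(takeAlias(raw)) // fine: MyAlias really is string
}
```

One subtlety: untyped constants are still assignable to a defined type, so takeID("literal") compiles; the compile-time protection kicks in for typed values.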
assertions, which Go refuses to add
Go’s most recognisable syntax is if err != nil { return nil, err }. Are these not assertions? You can replace the return with panic if you prefer.
Many of your other problems sound like the issues that plague all large codebases. Any language will end up at 1M LoC with a mishmash of typing approaches, loop forms, and so on.
“Any large codebase will have problems” is a terrible argument to justify additional, avoidable problems being layered on top of the “normal” ones.
Yes, I understand the assertions can be implemented in a library. :)
The point about assertions is not one about technicality, but one of culture. By putting “why Go doesn’t have assertions” in the Go FAQ, the Go authors have enshrined “assertions/panics are bad” as a part of the cultural fabric of Go.
This makes it difficult to have nuanced discussions around safety, fault domains, checking invariants, distinguishing between different kinds of ‘errors’, etc. When I’ve tried to do so, I’ve been met with strong pushback: “this makes Go code complex and the USP of Go is that it is simple”, or even just “that’s not how we do things in Go”.
Maybe I’m misunderstanding what you mean by “assertions”, but the Go authors don’t say panics are bad, they say not to use panics for error handling. Using them for asserting invariants is precisely the reason panic exists in the language, as far as I can tell.
Assertions are not for error handling; they are for expressing invariants. I think it’s a mistake to not include them in Go. TigerBeetle (written in Zig) is an excellent example of how to use assertions in the right way: https://github.com/tigerbeetle/tigerbeetle/blob/main/docs/TIGER_STYLE.md. I hope we can do this in Go one day :)
No one is suggesting assertions are for error handling, but I don’t understand what’s stopping you from implementing TigerBeetle’s assertion guidance in Go? The document you linked suggests that an assertion is exactly a panic when an invariant does not hold, so why do you think that can’t be done in Go? Assertions (by this definition) are idiomatic in Go even though Go doesn’t idiomatically use multiple assertions per function call as TigerBeetle suggests.
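A minimal version of that idiom, with hypothetical names, might look like:

```go
package main

import "fmt"

// assert panics when an invariant does not hold. A failed assertion
// signals a programming bug, not a runtime error for the caller to
// handle, which is exactly what panic is for.
func assert(cond bool, msg string) {
	if !cond {
		panic("invariant violated: " + msg)
	}
}

// withdraw checks preconditions and a postcondition, in the
// multiple-assertions-per-function style TigerBeetle advocates.
func withdraw(balance, amount int64) int64 {
	assert(amount > 0, "withdrawal amount must be positive")
	assert(amount <= balance, "cannot withdraw more than the balance")
	newBalance := balance - amount
	assert(newBalance >= 0, "balance must not go negative")
	return newBalance
}

func main() {
	fmt.Println(withdraw(100, 30))
}
```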
You’re right that I can do this in Go. My comment was only referring to the explanation provided in the Go FAQ, which I don’t really agree with:
Go doesn’t provide assertions. They are undeniably convenient, but our experience has been that programmers use them as a crutch to avoid thinking about proper error handling and reporting.
Ah, that makes sense. I don’t really know that I agree with the Go FAQ here, but I also don’t really want to encourage a TigerBeetle assertion style either.
A common idiom in C# is to use Debug.Assert to express your invariants (plus unit tests). Then you get checks during debug builds, but they’re compiled away in release mode.
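Go has no direct equivalent of Debug.Assert, but a roughly similar compile-away effect can be approximated. The sketch below uses a constant gate for brevity; in practice one might define the constant per-build via build tags. All names are hypothetical:

```go
package main

import "fmt"

// debugEnabled stands in for a per-build switch. In practice it could
// be set via build tags (e.g. one file guarded by //go:build debug
// defining it as true, a sibling file defining it as false) so that
// release builds let the compiler eliminate the assertion bodies
// entirely, mimicking C#'s Debug.Assert.
const debugEnabled = true

func debugAssert(cond bool, msg string) {
	if debugEnabled && !cond {
		panic("debug assertion failed: " + msg)
	}
}

func main() {
	debugAssert(1+1 == 2, "arithmetic invariant")
	fmt.Println("ok")
}
```

Because debugEnabled is an untyped constant, a false value makes the whole if statement dead code that the compiler removes.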
That is the C style.
It’s really cool, because it means the assertions are missing in production where you’re most likely to encounter unexpected conditions and thus need them.
And yet, despite all those warts, Go did the job for your 1M+ SLOC codebase. Are you sure you’d have chosen a different language when that project started?
Sorry, I’m not sure what point you’re trying to make. When the codebase started, I was learning how to program in C++. So I don’t think my opinions at the time, if any, should be taken seriously.
A more interesting question, and perhaps the one you meant to ask, is: given what I know now, if I were starting over on the same codebase, would I choose Go? My answer to that is ‘probably not’, but it’s a difficult question to answer given the large number of variables that factor into foundational tech choices. One tricky factor is that even if I were to choose Go, I would probably use a style guide similar to the one I wrote for work now, which goes against existing Go guidelines in some ways.
One tricky factor is that even if I were to choose Go, I would probably use a style guide similar to the one I wrote for work now, which goes against existing Go guidelines in some ways.
But this is probably also true for other languages. At my current job we use Scala, and the code written today follows some (unwritten) guidelines for “things to avoid and things to do” that differ from the way the Scala community does things. And that is fine; when you start to have massive codebases you start to see a few things differently.
Yeah, I didn’t mean to imply that it’s a bad thing necessarily. It’s a matter of degree.
I haven’t been in the position of hiring people directly, but I’m guessing it becomes trickier to hire when the values of your codebase are not aligned with the values of the broader ecosystem which you’re hiring from, and the further away you are, the trickier it becomes.
It is true for other languages. But a major point of the article was the author’s assertion that one of Go’s great benefits is that you don’t need to do that with Go, where (they say) you absolutely do with the likes of C++ or Java.
That’s why 80+% languages need coding guidelines. Google has one for C++ because hundreds of programmers couldn’t effectively work on a shared C++ codebase if there were no restriction on what features any individual programmer could use. Google’s C++ style guide exists to lower C++ from a 95% language to a 90% language.
(“80+% languages” means “languages that are more complicated or featureful than Go” at that point in the article.)
When I needed tagged unions in a couple of Go programs, I simply resorted to using the closed interface-type trick.
It was a bit verbose to set up, but was quite acceptable in use.
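A minimal sketch of the closed-interface trick (the types here are illustrative):

```go
package main

import "fmt"

// Shape is a "closed" interface: the unexported marker method means
// only types in this package can satisfy it, so the set of variants
// (Circle, Rect) is fixed, like a tagged union.
type Shape interface{ isShape() }

type Circle struct{ R float64 }
type Rect struct{ W, H float64 }

func (Circle) isShape() {}
func (Rect) isShape()   {}

func area(s Shape) float64 {
	switch v := s.(type) {
	case Circle:
		return 3.14159 * v.R * v.R
	case Rect:
		return v.W * v.H
	default:
		// The compiler does not enforce exhaustiveness, so a
		// default case is still required for safety.
		panic("unhandled shape variant")
	}
}

func main() {
	fmt.Println(area(Rect{W: 3, H: 4}))
}
```

This is the verbosity being referred to: one marker method per variant, plus a defensive default case that a language with exhaustive matching would not need.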
I dislike that a lot, firstly because marshaling and unmarshaling is difficult, and also because of the performance implications of the unnecessary allocations. There is a time and place for it, but it’s often better to have a struct with a tag field that tells you how to interpret the other fields. None of the workarounds are bulletproof, however (the struct-tagging solution trades fewer allocations for larger space per item, and neither facilitates exhaustive pattern matching).
in the process of migrating our testing assertions to a small custom library
Shameless plug: https://code.pfad.fr/check/
It is very barebones, because I think that it is better to implement domain assertions yourself than to try to cover all cases (for instance, I implemented XML pseudo-diffing myself in https://codeberg.org/pfad.fr/gopim-webdav/src/commit/c712cfe378aa282866ea0be764d25d5694c2c5e2/cmd/demo/main_test.go#L137 and plugged it into check.EqualSlice).
Not being snarky, but I’ve often wondered: why are programs millions of lines of code? What is taking up so much space?
I’m also not being snarky, but I think the best way for you to understand that is to go read chunks of the source to the Linux Kernel, Firefox and LibreOffice. It won’t give you a comprehensive answer, obviously, as there are many more large projects that are large for different reasons, but an exercise like that will make it much less mysterious.
Go’s aim of simplicity is not the problem. My problem with it is twofold:
First, even with simplicity as a goal, it’s possible to do better (e.g. zero-value initialization has caused more bugs than I can count, and leads to more complex code when explicit initialization is needed).
Second, the simplicity is for the language designer and not the user. For instance, take pointers. If I see a pointer in Go code, it could mean:
Or any combination of the above. The only way to find out which is to jump into the code, and since function calls are pretty much the only abstraction tool in Go, that usually means diving deep into a call stack. The language is simpler for not having optionals, immutability, etc., but those are still concepts that I need to understand and keep in mind, except I don’t get any help from the compiler. Go is a very easy language to learn, but making changes to a Go codebase can be extremely difficult because of all the hidden invariants.
And this is just scratching the surface. Go is full of happy paths that seem like genius simplicity in isolation but quickly fall apart as soon as you diverge from the simple case. I’ll do another one on zero values: they seem to work very well with slices. len(nilSlice) == 0, ranging over a nil slice does nothing, and append also works with nil slices, so you can do something like:
lookup := map[string][]string{}
for ... {
    lookup[x] = append(lookup[x], y)
}
No need to initialize an empty list first! This is one of my favorite patterns in Go, and seems to vindicate zero values / nil punning. But this concept doesn’t extend very far: the zero value for maps is nil, and it can’t be manipulated in the same way. And that’s without going into the difference between a nil slice and an empty slice, or a map key that is absent (and so returns the zero value) vs a key explicitly set to the zero value.
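The asymmetry between nil slices and nil maps can be seen in a few lines:

```go
package main

import "fmt"

func main() {
	// A nil slice is fully usable with len, range, and append.
	var s []string
	fmt.Println(len(s)) // 0
	s = append(s, "x")  // append allocates on demand
	fmt.Println(len(s)) // 1

	// A nil map allows reads but panics on writes.
	var m map[string]int
	fmt.Println(m["missing"]) // 0: reading a nil map is fine
	// m["k"] = 1             // panic: assignment to entry in nil map
	m = map[string]int{} // must be initialized before writing
	m["k"] = 1
	fmt.Println(m["k"]) // 1
}
```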
Go is simple to design, not to use.
[1] I do this because a nil pointer panic is usually preferable to proceeding with a zero-value-filled struct, e.g. accidentally initializing a slice with n zero values instead of capacity n.
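The footnote's slice bug, for concreteness:

```go
package main

import "fmt"

func main() {
	n := 3

	// Bug: a length of n creates n zero values up front, so
	// subsequent appends land after them.
	buggy := make([]int, n)
	buggy = append(buggy, 1, 2, 3)
	fmt.Println(buggy) // [0 0 0 1 2 3]

	// Intended: length 0, capacity n.
	ok := make([]int, 0, n)
	ok = append(ok, 1, 2, 3)
	fmt.Println(ok) // [1 2 3]
}
```

Both calls compile and run without complaint, which is why this particular zero-value mistake tends to surface only as wrong data downstream.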
What’s the saying? “everyone driving faster than me is a crazy street-racer, and everyone driving slower is an inconsiderate clod”
Go is the most hated programming language.
Is this satire? I don’t think it’s even in the top 3, anybody’s top 3.
It definitely seems like the language people like to complain about the most loudly. 🤷‍♂️
I think people tend to complain more about things they almost love, than things they don’t like at all. I complain a lot more about Go than Java, and there’s a lot I like about Go, while I mostly just want to forget about my time with Java. That Go gets a lot right is exactly what makes its pain points sting so much.
I think that explains metered criticisms of Go, but I don’t think it explains the “Go is trash because of error handling” takes. I guess it’s possible that someone could be acting like they hate Go even though it’s almost perfect for them, but I can’t meaningfully distinguish between that and genuine hatred, so I take them at their word.
The most important question is how many decades of programming language research you should claim Go ignored.
Is this satire? I don’t think it’s even in the top 3, anybody’s top 3.
No, it’s worse than satire. Satire serves a purpose, often holding the powerful to some measure of account.
This article isn’t satire; it’s just whiny. But it’s backed up by solid numbers, such as 80/20 versus 70/30. And yes, that was sarcasm.
The blog article was pointless. And most of the resulting conversation here hasn’t been much better.
I wish there were more objective measures that we could use to evaluate things like this, and not for the purposes of making something look good, or (just as useless) making something look bad. As an engineer, I’d like to better understand the cost of the decisions that Rob et al made, and to understand how well they actually met the goals that their decisions were meant to attain. These are truly brilliant people who designed and built Go, and I’d love to have a bird’s eye view of how well their foresight translated into reality.
The problem is that we don’t have A/B testing on these things. Instead we have “here are 470,184 different engineering decisions that we made, supported by these big names, with Google dollars and heft behind it, and now you can see the result”. I’d love to find a way to tease these things apart so we can A/B test individual decisions, e.g. re-run this entire universe with the only change being Go adding support for generics as part of its fundamental design. (Maybe that’s the inverse experiment of the one we’re inadvertently participating in now.)
If 80/20 is good, wouldn’t 70/30 be even better? No, it wouldn’t. Go has shown that you can have a popular language without enums. I don’t think you could have a popular language without structs. There’s a line below which the language is just not useful enough.
Lua does not have structs, and is more popular than Go.
Funny that even as someone who mostly writes and enjoys “complex” languages, I find myself instead needing to defend the “simpler” languages here. There are lots of useful languages without structs, and you don’t even need to go esoteric to find them.
Go’s worst crimes have nothing to do with being simple, but instead anachronism and poor judgement, like default values. Go buys its simplicity of implementation by offloading all the complexity to its users. I would instead laud languages that are both simple to understand and use. My prime example would be Gleam.
Also JavaScript, where the relatively recent class statement is literally just syntactic sugar over creating and linking hashmaps, and Python, where (by default) an instance is a pretty thin layer over a hashmap, and that’s an official part of the object model: https://docs.python.org/3/reference/datamodel.html#object.__dict__
Also “go has shown that you can have a popular language without enums”? Java, Python, and Javascript, all were popular long before they had anything resembling enums (JS still doesn’t have them), and before Go ever existed.
Also “go has shown that you can have a popular language without enums”? Java, Python, and Javascript, all were popular long before they had anything resembling enums (JS still doesn’t have them), and before Go ever existed.
Is it possible that Go, Java, JavaScript, and Python are all popular languages without enums? Why are you framing it like it can only be true for Go or for the others? Is it really mutually exclusive, as you suggest?
They’re commenting on the “go has shown” part, in which case if you accept the concept that some (singular, exclusive) language proved that enums are not necessary for a language to be popular, then it cannot have been Go as others did the same before it.
“Has shown” doesn’t mean exclusivity. If I say, “Here I show that oil floats on water”, I’m not claiming to be the only one to make that proof. If I say JavaScript shows that scripting languages can have decent performance, I’m not implying that Lua is slow.
Go buys its simplicity of implementation by offloading all the complexity to its users.
I don’t think the implementation is particularly simple. Garbage collection is famously something that has essentially zero interface and very complex implementation. The Go scheduler is quite complicated to make Goroutines feel like threads but cheap. (In the terms of A Philosophy of Software Design, they would be “deep” modules.)
I would say Go has “conceptual simplicity” by having relatively few language features and that was an explicit design goal.
Rob Pike has talked about this in “Simplicity is Complicated” (slides).
Lua does not have structs, and is more popular than Go.
In Lua tables serve the same purpose.
It’d be difficult to design a language without structs. You’ll have them one way or another, you may just call them something else.
By that definition go has sum types because you can construct them out of structs.
Depends on what you include in your “sum type” definition. E.g. if you want pattern matching/switch etc. expression with exhaustiveness checking, then no, Go doesn’t have sum types by that definition.
If we need special syntactic affordances to count, Lua does not have special syntactic affordances for structs. Lua does not have any affordance to ensure a table passed in to a function has a fixed set of entries, which is the closest analogue I can think of to exhaustiveness checking in a switch.
Exhaustiveness checking is not a syntactic affordance. You can’t get exhaustive pattern matching in Go as far as I’m aware (maybe there are some really obscure hoops you could jump through, but nothing that is actually used in practice the way Lua tables are used as structs).
If you want to use tables like structs, you the programmer have to mentally keep track of whether they have the right entries as you pass them around. How is that different from using structs like enums, where you have to mentally keep track of whether you’ve covered all the variants? You can recreate any construct in any Turing-complete language if you try hard enough, the only difference is whether the language makes it easy and gives you nice tools to work with it.
I’m not trying to disprove your analogy between Go structs and enums, I’m narrowly pointing out that Go’s lack of syntax support for sum types is unrelated to exhaustive pattern matching.
You can recreate any construct in any Turing-complete language if you try hard enough, the only difference is whether the language makes it easy and gives you nice tools to work with it.
How do you get to exhaustive pattern matching from Turing completeness?
As far as I understand, the point of having specific syntax for enums is to:
And the point of specific syntax for structs:
These are the same to me, and Lua doesn’t provide these for structs in the same way Go doesn’t provide them for enums.
Lua tables are the equivalent to Go maps, not structs. And maps and structs are different enough that even Go, a language with a big focus on orthogonal design, made them two separate language features.
I’m just saying that Lua has a feature that serves the same purpose.
I’m aware that a hash map is not the same thing as a C struct.
I’m just saying that Lua has a feature that serves the same purpose.
Thus proving that structs are redundant if you have hashmaps (which Go does), and you can in fact have a popular language without structs. Which is intarga’s point.
Is Lua popular because of its language features, or because it’s easily embeddable and the only language you can write Roblox games in?
What other reason would a language be popular for, other than its features and its ecosystem? And being embeddable is 100% a language feature. Is Go only popular because it’s easy to onboard juniors in while being unchallenging for old C heads, and it’s the only language you’re allowed to write at a bunch of workplaces?
Sorry, my argument is irrelevant to your original point, and popularity in programming is such a nebulous concept anyway. I agree languages can get on perfectly fine without structs. I just think Lua is in a unique enough situation that it’s hard to draw comparisons to. There are plenty of other well-used languages that don’t have structs.
Lua does not have structs, and is more popular than Go.
Bash and JSON are even stronger counterpoints, although I think the author is talking about general purpose programming languages released recently, as opposed to scripting languages or configuration languages or music languages or markup languages.
I would instead laud languages that are both simple to understand and use. My prime example would be Gleam.
It’s not obvious to me that Go isn’t both simple to understand and use. All of the languages I’ve used that promise to be simple to understand and use haven’t come close. I’ve been meaning to use Gleam, but in my mind, a language has to have enough traction that I can find libraries, hire developers, and pitch it to my organization in order to qualify as “easy to use” in any useful sense, and I don’t think Gleam has proven itself there. Are there any examples that pass industrial muster? In particular, I’m looking for a language that is decently fast (>= 10x faster than JS) with native compilation.
I think the author is talking about general purpose programming languages
Lua is a general purpose language though? There are plenty of large projects written in Lua, and it typically leads in performance among interpreted/JIT languages, even beating out AOT compiled ones in some benchmarks.
pass industrial muster … decently fast
It sounds like you’re looking for Go, so no shame in using it, it’s a good language in many ways.
I can make a case for why Gleam is interesting though:
Personally I use almost exclusively Rust for work. It has the right mix of performance and ergonomics for my use cases, but this conversation was about simple languages, and Rust certainly isn’t that.
In practice, though, aren’t Lua tables (and Javascript objects) used in essentially the same way as structs?
Is the defining characteristic of a struct that its named fields can be accessed in constant time, where a table, object or dictionary requires potentially more complicated operations to find before accessing? What matters isn’t necessarily the name, but the abstraction and the runtime and compile time characteristics of it.
The more I use Common Lisp, the more fuzzy some of the differences between different abstractions become.
As far as the article’s arguments are concerned, I don’t think there’s any language feature that can’t be worked around if it’s missing. I doubt it even affects language popularity very much if there are other forces pushing for its use - like having a big name behind it (Go) or being exclusive to a niche (Javascript).
Swift is a cautionary tale here. Despite over 10 years of development by very smart people with a practically unlimited budget, on a project that is a priority for Apple, the Swift compiler is still slow, crashy, and not meaningfully cross-platform.
They designed a language that they cannot implement properly.
To be fair, I imagine some of Swift’s more awkward aspects stem from the need to interoperate so closely with Objective-C and the reams of Objective-C code that Apple had (still has?). Without that, I’m sure Swift would have been a bit “cleaner”, but I doubt it would have had as much uptake/success as it has.
Is it true that the compiler is slow, crashy, and not meaningfully cross-platform? I don’t have first-hand experience with Swift, but my understanding was that it is rather solid. And that you can use it to build backend things for Linux, although there isn’t a massive ecosystem for doing so yet.
“Every single bad design decision will be explained as an intended tradeoff” – Go creators, probably.
C#, Swift, Rust - they all seem on a never-ending treadmill of adding features.
Many features cannot be added to Go at this point, because of the shortcuts that have been taken. Either because it is literally impossible without moving to a major backwards incompatible version, or because if it would be backwards compatible then it would be so optional that it would have no real impact.
If 80/20 is good, wouldn’t 70/30 be even better? No, it wouldn’t. Go has shown that you can have a popular language without enums. I don’t think you could have a popular language without structs. There’s a line below which the language is just not useful enough.
Popularity as a metric for PLs can be very misleading. On one hand, there’s an undeniable “utility” or “usefulness” implied: if everybody uses it, it means it is in fact useful and solving real-world problems right now. It’s tempting and easy to stop here, since we can measure popularity in numbers. Other PLs can then be dismissed on the grounds of being nice tries, great ideas dreamed up by deranged math nerds in their ivory towers, that in practice don’t solve our real-world problems, neither now nor tomorrow, when the popular PL has already solved the problems of yesterday.
Here is a talk from Evan Czaplicki, creator of the Elm language, on popularity, and how important the Marketing Machines behind PLs are: https://www.youtube.com/watch?v=XZ3w_jec1v8 Elm is a complete flop from the popularity perspective, but it did achieve a lot of other things, precisely because Evan refused to take shortcuts to popularity.
Many features cannot be added to Go at this point, because of the shortcuts that have been taken. Either because it is literally impossible without moving to a major backwards incompatible version, or because if it would be backwards compatible then it would be so optional that it would have no real impact.
Not that I think you’re wrong but can you give some examples? I am definitely interested to know some of those design mistakes that are being blockers on the evolution of the language.
Apparently generics on methods (receiver functions) interact poorly with interfaces, so they cannot be added.
Popularity as a metric for PLs can be very misleading.
I agree with that. I think if you look at popularity ratings for languages, you will see a lot of objectively terrible languages at the top. You don’t choose languages in a vacuum; you use them because management tells you to, because there is legacy software or there are libraries you need to use, and because it is cheap to get developers for them. That is also why it takes a long time for new ideas to be adopted in the programming-language space.
If 80/20 is good, wouldn’t 70/30 be even better?
Less utility with more complexity is definitely worse, but I am guessing the author meant something like 70/15.
I feel like the reasoning here fails (or is just painting a pretty picture) because it passes off different types of complexity as the same.
There’s the complexity of the language itself, and then the complexity of a code base written in the language. Go may be a 20 on the language complexity scale. That’s fine. Any small block of code might be pretty straightforward. The codebase as a whole though, is like a 140. You get lots of duplication, or things that would be clearer if the language offered a bit more.
I think that’s why Go shines as a small-tools / small-project language and falls apart as projects get larger. A lot of people would trade a little in column b to pay down column c in the 80 / 20 / 140 equation so that it scaled better, and that just seems unpopular with the Go team. Which is fine. Languages don’t have to be everything to everyone. But it isn’t the simple equation described here.
That’s an interesting observation, because making it easier to write “bigger” software was part of the design goals.
[the absence of generics] served Go well for over a decade.
Only because we knew that Google backed the language. A random Foolang adding virtually nothing to the landscape wouldn’t have made it. We all thought back then, “wow! for a first release it’s great! keep it coming”, especially the great stdlib being built on top of this minimalistic 80/20 design.
A calmly written article, thank you. While I’d personally say that Go didn’t go far enough with respect to enums (and definitely error handling!), it otherwise tries to focus on simplicity, and that is laudable. Note that some languages conflate “simple” and “easy”, and while Go has not sinned as deeply as Python has here, it is not guiltless. Lua (and its descendant Fennel) is another language that does well in straddling the 80/20 line.
Most languages can’t resist driving towards 100% design at 400% the cost.
I liked this insight, and I’m being more and more convinced that using “done” languages is a legitimate way forward. Barring hardware changes, it becomes much harder for code to rot if the language itself is already static. It’s something like:
It logically makes sense, though: the effort goes into the language codebase once, and then every codebase benefits.
The same is true for the standard library as well. Funnily, no one disagrees there, because the level of abstraction is lower, so it’s easier to understand, I suppose.
What is the benefit of vendoring dependencies compared to using a package manager? Package managers don’t force you to update libraries, right? They just make it easy for you in case you want to stay up to date with security patches (I do).
Go has shown that you can have a popular language without enums.
To be honest, if I had to pick, I would’ve preferred true enums over generics.