Were multiple return values Go's biggest mistake?
38 points by mond
On the one hand, the author offers several specific cases where tuples would simplify and improve Go. For example, it would be great to replace custom (and varied) Result struct types with (value, error) tuples. Also, this section really struck home for me.
Go cannot handle the case of iterating over one or two values in a uniform, parametrized way. It requires duplicate definitions, one for handling one value at a time, and one for two values.
for val := range foo { ... } // you can do this
for k, v := range foo2 { ... } // this requires a different api
for x1, x2, x3 := range foo3 { ... } // this is literally impossible in Go
Why?
On the other hand, Go continues to bring out the worst in people.
I don’t want to sound mean, but a friend of mine has suggested that he believes that “Rob Pike invented Go as a practical joke”, and things like this really make me wonder if he has a point. Why would you design a language where the result of a function call cannot be stored or passed around?
My serious suggestion: if you find yourself writing “I don’t want to sound mean,” then immediately stop and delete the sentence. Just don’t say it. It almost never helps. It’s almost never as funny or clever or insightful as you think it is.
Unrelated, but small typo: s/fool’s errant/fool’s errand/
I’m always amazed how Ken Thompson, Rob Pike and Robert Griesemer, with a combined 100+ years experience with PLT and about a dozen languages, are treated as total idiots by people whose greatest hits are building a web app once.
I agree we don’t need to be rude to people, but this is just an argument from authority. It’s alright to disagree with people who you believe are wrong, even and perhaps especially if they have been doing it for a long time and have obtained power and status while doing so!
I don’t know if it’s an argument from authority to suggest that expert opinions shouldn’t be dismissed out of hand?
Technically speaking, “X’s argument shouldn’t be dismissed because they’re an expert” is a core example of an argument from authority, yes.
The “out of hand” part of your sentence is doing a lot of work, though.
Yes, “out of hand” is load bearing—if you remove that phrase it changes the original meaning. Choosing my words carefully isn’t exactly rhetorical sleight of hand as your “doing a lot of work” comment suggests.
GP’s comment seems to suggest that you didn’t demonstrate anything being done “out of hand”. Hence it is basically an enormous assumption packed into a few words. Hence “doing a lot of work”.
Expert opinions aren’t being dismissed, let alone “dismissed out of hand” — expert opinions (in form of the result of their work) are being criticized (with arguments) and contrasted (in the form of a snark, yes) with what could be expected of an alleged expert.
On the other hand, @dlisboa’s comment seems to essentially say “thou shalt not criticize Rob Pike’s work, for he has $many years of language design experience and you do not”. Which is, ironically, a form of argument dismissal.
My “out of hand” was a summarization of “are treated as total idiots by people whose greatest hits are building a web app once”.
If dlisboa was arguing that people shouldn’t disagree with Pike, an authority, then he would be making an argument from authority. He didn’t argue as such, however—instead the argument was that we shouldn’t treat experts like idiots, which does not seem to be an argument from authority (and if it is, then I think “thou shalt not argue from authority” is not a very useful commandment).
Surely “doing a lot of work” and “load bearing” mean roughly the same thing?
Load bearing means you can’t separate the phrase from its original claim without completely changing the meaning of the original claim. The parent was trying to remove the critical phrase in order to more easily rebut the original claim, but in doing so he completely changed the meaning of the original claim i.e., a straw man argument.
They are absolutely giants of CS, but I don’t think their work on PLT is the pinnacle of their lives.
C is arguably also not a particularly well-designed language; even at the time of its creation there were plenty of better design choices readily available (e.g. Pascal’s strings).
I don’t have to be a great chef to be able to taste that this food has way too much salt in it. And I believe the beauty of PL design is that it’s not just a CS problem like an algorithm. It requires understanding the human element, as it is as much a language for humans as it is for computers.
even at the time of its creation there were plenty of better design choices readily available (e.g. Pascal’s strings).
Are you sure that a hard limit of 255 bytes is a better choice? I have longer file names on my system.
You can use more bytes for the length, though, and still end up with way better performance, even more so on modern hardware.
Also, there are plenty of ways to further optimize this basic setup, e.g. c++/rust can do stuff like store short strings inline.
I’m quite frustrated that people keep describing Pascal strings wrong, saying they’re a good design like modern slices are, and they very much are not.
c-string
[ ptr ] -> | a | b | c | d | 0 |
p-string
[ ptr ] -> | 4 | a | b | c | d |
slice
[ ptr, 4 ] -> | a | b | c | d |
You cannot subdivide Pascal strings, you must create new allocations. C strings let you swap out separator bytes with nulls, which isn’t ideal but at least lets you subdivide a string. Of course, modern slices are just better as we’ve all experienced.
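A small Go sketch of the difference: Go strings (like slices) are (pointer, length) headers, so subdividing is allocation-free, which is exactly what neither C strings nor classic Pascal strings give you:

```go
package main

import "fmt"

func main() {
	s := "abcd"
	// Each substring is just a new (pointer, length) header pointing
	// into the same backing bytes; no copy, no new allocation.
	left, right := s[:2], s[2:]
	fmt.Println(left, right)
}
```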
Thanks for the correction. Though they are still better from a “won’t read far past the end of the string” and a “won’t have to duplicate half of the standard lib to have ‘safe’ functions that also pass a length” perspective.
Pascal as it was defined in the 1970s did not have variable-length strings except as non-standard proprietary extensions.
I guess tangential to the post, but while I truly dislike appeals to authority, you’re 100% right.
A massive issue with software engineers in general is what I think of as “why didn’t you just do this?” syndrome. Start from the idea that you are smart (maybe you are, maybe you aren’t), come up with something you think is “simple”, then assume other people aren’t smart because they didn’t do the “simple” thing. I find it far too rare to see software engineers start from the good-faith position of “you are probably smart and thought about this; I am wondering about the background. Maybe knowing what we know now we can try something else?” Because 9 times out of 10, they did think about it.
It’s exhausting, and I feel like I’ve built up enough social capital at work to just call people out whenever I see it. Unfortunately, I see it a lot.
(I don’t know why Go seems to attract so much more of this prognostication than other languages… it’s almost like the deliberate simplicity invites discussion because it’s easier than say… reworking Java streams or something)
A massive issue…is what I think of as “why didn’t you just do this?” syndrome. Start from the idea you are smart (maybe you are, maybe you aren’t), come up with something you think is “simple”, then assume other people aren’t smart because they didn’t do the “simple” thing. I find it far too rare to see software engineers start from a good faith position of “you are probably smart and thought about this, I am wondering about background. Maybe knowing what we know now we can try something else?” Because 9 times out of 10, they did think about it.
I wish I could upvote this a thousand times.
I made this exact point a couple of days ago: https://lobste.rs/s/gvgmth/choosing_languages#c_poq5vw
This whole glorification of Go designers is just silly.
C and Unix turned out successful, but it had nothing to do with how well or badly they were designed. UTF-8 is just a simple bit-twiddling problem.
I have respect for the old guard, because these were different times, different constraints, and hindsight is 20/20, but really it’s hard to overlook that their fame is based mostly on the accident of C/Unix’s success.
combined 100+ years experience with PLT
Especially KT and RP were just hacking C stuff their whole life. Nothing wrong with it, lots of competent and good work. But pretending like they have some unimaginable insight and breadth of knowledge in PLT that contradicts how poorly designed almost everything in Go is … come on. Go is just a C with green threads and GC. Nothing about it, absolutely nothing, suggests any deep PLT insights. It’s just the same “hacking C stuff” as before.
I think the trick with UTF-8 was articulating the requirements. Evidently the designers of UTF-1 got that part wrong.
And yet, here we are. Go’s design ignores sooo much of the PLT research community and gave us a “memory safe” C with structs that can have methods, special types that can be iterated, closures, almost but not CSP, and duck typed interfaces (which are actually pretty good).
It took over 10 years to introduce generics, after being gaslit the entire time about needing them, and even longer for the language to unlock the key to the glorious for-range loop.
Lots of experience, and somehow, lots of success despite being a C+ language.
Lots of experience, and somehow, lots of success despite being a C+ language.
Maybe, just maybe, they knew what they were doing.
Honestly, I would really like to know how successful Go would have been if exactly the same language had come out from unknown authors, outside the umbrella of a company with a reputation for technical excellence and a lot of people who want to imitate them.
And I’m not implying I’m sure it would fail. It’s impossible to know, but I’d love to know. I think almost everyone believes that some works are only popular because of the marketing behind them. (I think there are examples of people who have actively avoided using their fame to promote their works, by using pseudonyms and the like. It would be interesting to study this, although I think it’s impossible to get good data.) And if you think that some stuff is mostly marketing, then maybe Go is mostly marketing.
I think Go does some things better than any other reasonable alternatives, like creating binaries that run anywhere, fast compilation times, using tabs, standardizing gofmt… My personal experiences with Go, however, have been:
I worked in a small startup that created some OSS libraries in Go that did cool things and managed to build a community of contributors that has outlived the startup. But when I had to fiddle with the code, just from what little I learned there, I had the intuition it might have some significant bugs, and I later found some of those bugs.
In that company, and in my brief stints trying to learn Go… it never clicked for me the way other languages I like clicked for me quickly.
Some people I know to be extremely capable, swear by Go. But I have tried and tried, and I just don’t get it (outside the narrow things that I like!)
edit: I’ll add: and I fully agree with some other poster who mentioned that even geniuses make mistakes that normies like me can see, from time to time. (OTOH, I understand why the TypeScript guys went with Go, for example.)
I think Go does some things better than any other reasonable alternatives, like creating binaries that run anywhere, fast compilation times, using tabs, standardizing gofmt…
“Fast compilation times” and “tabs” are the only ones of those claims that are “language design”, and compilation time is more a case of “we chopped things that don’t have O(log n) algorithms”, effectively.
The rest is end user experience. The end user experience of Go is decent, but so too could any other language if that was an up front consideration.
The rest is end user experience. The end user experience of Go is decent, but so too could any other language if that was an up front consideration.
It’s not much of a consolation to me, an end user, if a language offers a poor UX because they failed to prioritize it or for some other reason (maybe we agree here? I can’t entirely tell).
I’m also not sure it’s fair to imply that other languages aren’t prioritizing UX—I think Rust maintainers and enthusiasts feel that Rust is prioritizing the UX by optimizing aggressively for performance and correctness. I think UX depends on who your users are, what they want to do, and what values they hold—I don’t think many languages deprioritize UX per se (even if I agree with you that Go has a decent UX from my perspective)?
apg did not say that programming languages fail to “prioritize” UX at all, but that they don’t make it an up-front consideration and instead try to resolve issues as they crop up… something Rust has very famously suffered from.
Honestly, I would really like to know how successful Go would have been if exactly the same language had come out from unknown authors, outside the umbrella of a company with a reputation for technical excellence and a lot of people who want to imitate them.
Given how Plan 9 and its language ecosystem (Plan 9 C, Alef, Limbo, …) went with the same authors under the faded banner of Bell Labs, I strongly suspect Go’s biggest strength is that it was (and is) a Google jam.
Well, I don’t know, someone else argued that Dart is also from Google, but largely hasn’t really achieved critical mass, IMHO.
Perhaps it’s the conjunction of the two things, perhaps Go is more than what we “haters” think.
Necessary but not sufficient.
There’s a lot of failed languages out of Google. Most tech giants have a failed language or two. But contrast them to the languages that achieved critical mass without bigcorp sponsorship. (Perl, Python, PHP…)
I wonder if Dart could be an indication regarding popularity/success.
Dart is certainly a very interesting data point. It’s also from Google. I don’t think the Dart creators have the prestige of Pike and Thompson, but it has the Google clout.
I kinda like Dart, and my experiments with it have been much more successful than my Go escapades.
But you get quite a lot of anti-Dart sentiment online, when it’s not just that people have not even heard about it.
So well, perhaps contrary to my perception of Dart being more interesting than Go, Go is “objectively better”. As I mentioned, people I personally know to be good love Go, so I always consider that I can be wrong about Go.
Always good to keep an open mind.
However, I do wonder if there’s a general misconception of languages being good or bad. And that might also stem from how things like PLT research, and research at large, are sometimes viewed. Novel concepts and studies are, and should be, limited in scope. They look at the one thing being researched, which is the right thing to do, because there are usually already so many traps to fall into. Now the misconception is that there is one straight line you can put programming languages on, from bad to good, or from “when we didn’t know better” to the more enlightened ones.
I think that’s a bit silly. Programming languages are largely trying to implement the ideas of their creator(s). They are really abstract ways of creating ways to express oneself. And the “really abstract” part, especially nowadays, already starts at the assembly-language level. The code one writes there is no longer a precise “physical” (well, abstracted into the logical/digital world) description of what you type in. There are optimization layers, compatibility layers, etc. And since we are in the digital world, logic, that is, bare-bones logic, is the only limit. And that logic doesn’t have to be sensible. There’s nothing preventing you from making every second true mean false, or really anything else.
At the same time there are the obvious things, like how JavaScript was turned into something it was never designed to do, and how countless hours, work and brainpower have been invested into making it pretty fast, and so on.
All of that appears to lead to a state where the landscape is dominated quite a bit by factors like how the author perceives the programming language and programming at large, what subjective things they like, as well as general trends that are almost fashion trends. There was a time when many judged how good a programming language is by how close it is to human/natural language. Nowadays that is largely frowned upon, and precision in the name of clarity is way more important.
But it would be easy if it were just one mind at one point in time deciding the shape of a programming language. Instead it’s often communities, or at least companies, deciding on that, and I think in the long term this is a big reason for people jumping to younger programming languages: with less history, less baggage has made it in, fewer mind changes, trend changes, and ideas that turned out to be bad. It takes way less mental capacity to understand code if it appears consistent.
Now couple that with the fact that the minds of developers (language users) are also not consistent, simple things without history, experiences, interests, understandings and misunderstandings.
All of this, to me, suggests that objectively ranking languages as good or bad, and even language design itself, is at best extremely hard if not impossible at the scale of a large real-life language. Even a statement like “language X is good/bad for Y” is often a stretch when not going into the very specifics.
So I do think a language can easily be better for a certain person, for a certain project, at a certain time, with all the specifics of the language, the developer’s background and of course the project. And it’s even hard to measure for the developers themselves, because, for example, using another language to rewrite a project means that they understand the problem scope better, have a clearer picture of things, etc. Rewriting something often leads to better results whether or not it’s in the same language, because you have knowledge to build upon. I think developers often step into the misunderstanding that when they use a new language, framework, etc. and find it way smoother, it can be for many reasons. Often it’s even unrelated stuff, such as the freedom to break compatibility.
So what I’m trying to say is that I think the reason why someone does excellent work in a language we consider absolute garbage is simply the result of all these variables aligning better with them.
There is a reason why there are so many languages. And rarely are they created by idiots. It’s that their creators didn’t find something that aligned with their wants. Of course it’s more complicated than that, but I think judging languages at large is really hard; even with subsets and individual features there’s much disagreement, even among the best language designers, no matter whom you pick for that.
There’s certainly different mindsets. Even though apparently there are many studies that say that statically typed languages are no big deal, I definitely consider ME more productive with statically-typed languages. (Although I find some defenses of dynamic typing pretty convincing.)
I can certainly say there’s an intangible pleasantness with languages. I find it very pleasant to write bash, Python, SQL, Java, Haskell, Rust, Prolog… and I find it very unpleasant to write PHP, C#, Go, Pascal, Lisp. And some of this doesn’t just add up; the naming conventions in C# throw me off so much because of how different they are from Java, but really C# is a much nicer Java. I love the ideas of Lisp, but the syntax never works for me, and the surrounding advocacy throws me off (just like a lot of people are put off by Rust advocacy).
The “bad” thing is that I think ecosystems are more or less a zero-sum game. I don’t think there can be so many languages with good ecosystems. I like Dart, but I know if I pick it instead of Rust/Python/Java… I’m much more likely to have to implement more stuff.
(Python is nearly perfect to my eyes. It lacks good static typing, it’s slow, and distributing Python code is not great. But really although those look like big flaws, they are fairly harmless in most scenarios I live in. But if I want static typing… Java, Rust, and Haskell each one has at least one grave flaw. I think we need more statically typed programming languages.)
Orr… maybe they had the pedigree of Google and a giant ass batteries included standard library designed to get meaningful work done right away.
Look, despite its heaps of problems, Go is clearly successful. But it’s successful in a “Worse is Better” sense, not in a “this language can produce beautiful pure abstractions and is the pinnacle of PLT!” sense.
Go is clearly successful. But it’s successful in a “Worse is Better” sense, not in a “this language can produce beautiful pure abstractions and is the pinnacle of PLT!” sense.
What you appear to be saying here is that caring about theory is not connected to better practice. I would push back on that, and suggest that the point of theory is better practice. If it doesn’t lead to better practice it is bad theory.
I think the entire point of Worse Is Better is that “if it doesn’t lead to better practice it is bad theory” is unreliable—languages can succeed for reasons unrelated to their technical merits.
That said, I’ve found Go to be tremendously productive for many use cases—far more so than more “theoretically pure” languages. I don’t know if I would say it’s because the theory is bad, but certainly the theory seems incomplete. Specifically, the theory is fixated on type-system expressiveness and maximizing abstraction, but it largely seems to ignore tooling, ecosystem, documentation and even the culture of the community; it even fails to account for the costs of abstraction (more is not always better).
Even static analysis can go too far—I can write a lot of SaaS code and cover it with tests and even iterate on early feedback in Go in the time it takes me to pacify Rust. My Go code might even leak some bug in some edge case that inconveniences a couple of users. But is it preferable to slow down production and have a longer user-feedback loop to avoid those bugs? Maybe if the stakes are super high, but for most code in most apps that’s not the case…
The point, for you. Not everyone doing PLT is specifically doing it to make your life easier; some actually just enjoy doing it for its own sake, like any domain of pure math. However, completely disregarding pure math while in the world of practical math will always be a fool’s errand.
Division of labor does not negate the interrelation of the labor. This should be intuitive in a discipline as highly socialized as software engineering.
the point of theory is better practice. If it doesn’t lead to better practice it is bad theory.
Does this claim really stand up to evidence?
It’s ironic that you ask for empirical verification of the value of theory, because practice is how you do that.
But tongue withdrawn from cheek, it’s important to understand that you fundamentally cannot sever theory and practice as separate disciplines, you can only do one of them badly.
Christian Thorne provides a wonderfully concise explanation of this in The Dialectic of Counter-Enlightenment:
When you say you are “theorizing” or “doing theory,” what do you take yourself not to be doing? Here the hallowed distinction is between “theory” and “practice”— between mere thought and concrete action in the material world. But this pair actually fails to answer the question as posed, since when I say that I am theorizing, I don’t usually mean that I am not engaged in practice.
On the contrary, theory and practice, far from being simple opposites, are antitheses and in that sense necessary companions. Theory is thought that knows itself to be tethered, to be already practice (or a prelude to practice or practice’s underpinning).
Ousterhout demonstrates the dialectical nature of theory and practice in his preface to A Philosophy of Software Design:
I have now taught the software design class three times, and this book is based on the design principles that emerged from the class. The principles are fairly high level and border on the philosophical (“Define errors out of existence”), so it is hard for students to understand the ideas in the abstract. Students learn best by writing code, making mistakes, and then seeing how their mistakes and the subsequent fixes relate to the principles.
At this point you may well be wondering: what makes me think I know all the answers about software design? […] Over my career I have written about 250,000 lines of code in a variety of languages. I’ve worked on teams that created three operating systems from scratch, multiple file and storage systems, infrastructure tools such as debuggers, build systems, and GUI toolkits, a scripting language, and interactive editors for text, drawings, presentations, and integrated circuits. Along the way I’ve experienced firsthand the problems of large systems and experimented with various design techniques.
Out of all of this experience, I’ve tried to extract common threads, both about mistakes to avoid and techniques to use. This book is a reflection of my experiences: every problem described here is one that I have experienced personally, and every suggested technique is one that I have used successfully in my own coding.
Theory and practice are co-constitutive of one another and both are necessary to master any domain, or improve the state of the art.
I think you misunderstood my question. I’m not asking for evidence for the value of theory, I’m asking for evidence for the claim that “the point of theory is [to result in] better practice”, and transitively the claim that theory which doesn’t lead to better practice is bad theory.
While I personally 100% agree with you that “you fundamentally cannot sever theory and practice as separate disciplines”, I think this is a subjective position, and in fact a super minority opinion. I think the majority of folks in the PL theory space evaluate value completely invariant to any notion or consideration of practice. Do you see things differently?
To be more precise: without theory, there is no practice. When we take action, we do so with intentions about what that action will accomplish. The understanding underlying our choice of action is theory. The result of the action becomes new theory.
This holds whether we are making an engineering choice or kicking a rock. Furthermore, when working collaboratively on a software system with other people, we must be capable of describing our decision process to others. At this point theory develops from unconscious intuition into social theory-building.
Theory-building is hard work, and is very complex. Discovering it all yourself is extremely time-consuming. So it behooves us to rely on the benefit of shared knowledge to accelerate this process.
I’ll give you a concrete example: you can roll your own crypto. You can just sit down and bang out an implementation of the RSA algorithm in not that much time.
Why would it be unwise to do this? Because, without engaging with the wider world of cryptography research, you don’t know what you don’t know, and thus will very likely make avoidable mistakes.
This is the criticism of language designers who don’t engage with the wider theory of programming languages: avoidable mistakes. Like zero values, or tuples that can’t be expressed in the type system.
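As a sketch of the “don’t roll your own” point, here is the vetted-primitives route using only Go’s standard library (the roundTrip helper is invented here for illustration):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

// roundTrip encrypts and decrypts msg with a fresh RSA key, using the
// audited standard-library primitives rather than a hand-rolled RSA.
func roundTrip(msg string) (string, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return "", err
	}
	ct, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, &key.PublicKey, []byte(msg), nil)
	if err != nil {
		return "", err
	}
	pt, err := rsa.DecryptOAEP(sha256.New(), rand.Reader, key, ct, nil)
	if err != nil {
		return "", err
	}
	return string(pt), nil
}

func main() {
	out, err := roundTrip("hi")
	fmt.Println(out, err)
}
```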
It’s different perspectives. Sometimes you want exactly that. Sometimes it’s the other way around.
I think that the definition of “PLT” might be getting in the way of your argument. The state of the art in PLT might be relevant to better practices in twenty years’ time*, but I struggle to see it relevant to software engineers today (even those few who are writing Haskell). The theory-in-service-of-practice sounds more like HCI research.
As an example of what is generally agreed upon as PLT, you can read the proceedings of POPL 2025.
* e.g. as I am told, Rust’s affine type system is decades (plural) old.
Are you genuinely interested in arguing the point that Go’s many decisions (e.g. lack of nullability in the type system, the whole multiple return value thing, lack of enums or sum types) were integral to Go’s success, or are just here to imply that this point could be made?
Not that interested, because this point has been made before by multiple people, including the creators of Go. They’ve carefully explained why they made their decisions; people just tend to disagree, or worse, believe they’re beginners at this and just never considered anything else.
Go is trying to be a supremely practical language, it’s not trying to solve any major abstraction problem in PLT – except concurrency. There’s a lot of value in that.
History teaches us that many of the trappings we like to read about in PLT papers are not that consequential in day-to-day use. Go kept the status quo and improved in some way upon some of the major languages mostly around practical and not theoretical boundaries. Things like “it compiles in 1 second” rather than “it borrows concepts from Category Theory”.
Look, I completely agree with the goal of being practical. So do many, many of the people who’re criticizing Go.
What I (and many others) think is that Go isn’t actually doing that great of a job: It’s not as simple, nor as practical as it could be. See features such as iota, complex numbers, named return values, certain types of nullability or (to bring this back to the article) lack of tuples, which means that error handling and channels don’t really compose.
It’s fine to criticize that and, as far as I was able to tell judging by the resources I gathered, many of these decisions were in fact not particularly well thought-out, and there is no strong argument why these decisions were necessary. If you have some genuine sources getting into these design considerations, especially by the Go developers themselves, I’d love to hear them.
The reality is that there’s a difference between being practical in how you approach the design process as a whole, and deeply designing something to be practical. It’s obvious to me that the Go developers did the former, and not the latter, which is why I find discussions like this frustrating. The reality is that the Go devs were practical, and this practicality came with “We’ll ignore all of these hard questions and go for something trusted that sort of works.”
Conflating these two things is exactly what people are frustrated by.
Anyway, I don’t think we even particularly disagree—just venting a little here, really.
I disagree here and that’s kinda the point.
What I disagree with is that iotas and lack of tuples are bad ideas - no, they’re very practical. They might make you do some boilerplating, but not nearly as much as other languages. But once you have these, e.g. multiple return values, it’s a very simple and practical language to work with.
And that’s the point of the practicality argument. Multiple return values or lack of enums are not integral to go’s success. But they’re practical. I see people arguing about getting or not getting go. I believe I get it.
Maybe I’m stupid and can’t understand some of the “problems”, you and other commenters make, and the tuple thing that the guy in the original post made. But I can’t understand them.
Can you elaborate how “lack of tuples” is practical? Tuples are essentially just anonymous structs. Adding them to the language would be (almost entirely) backwards compatible, and all of the same (practical) syntax which you already use would still be available. It’s about slightly enhancing multiple return values, if anything.
Hmm. You made me think, so thanks :)
Anyway, the way I thought it through now is, the multiple return values are a very practical choice. I even said “they might make you do some boilerplating”. So to turn them into tuples, to add this bit of syntactic sugar is just that, syntactic sugar. But it would then bring the problem of, say, ignoring certain return values, back to the front - which the lack of tuples neatly removes.
For example, if your doStuff returns (string, error) (no tuples), you can write result, _ := doStuff(). Yes, I ignored the error, but I know there is something being ignored. But if we can now wrap that into a tuple, what then? With resultTuple := doStuff(), I don’t even know what is in there.
That’s different than a struct, in my mind. If someone went to the trouble of making a struct (your Option/Result example from the blog post), then you have a struct. I don’t know what tuples would bring to the entire story.
Well, it most definitely hasn’t solved concurrency, and besides some QOL features, I would even question it keeping the status quo. It’s a step back on too many counts.
Go may not have solved concurrency, but I can say, as someone who was writing highly-concurrent code around the time of Go’s release, that it was an enormous improvement over the status quo, leaps and bounds better than all other practical languages at the time.
Erlang and Haskell already had green threads, message passing was decades older than Go.
And Java had (and has) better concurrency libraries out of the box, even though it didn’t have virtual threads at the time.
Yeah, through a bit of serendipity I discovered Go at about the same time I was really reading and appreciating Hoare’s CSP paper. Was absolutely delighted to find that there was a modern language adopting those semantics because it felt like a Gosh Darned Good Idea and I’d been disappointed that I’d never come across it in a mainstream language before.
It doesn’t ignore the PLT community, it’s part of it. Pike, Thompson, and Winterbottom are all PLT researchers of the first caliber. Over the years they designed multiple languages at Bell Labs, built specifically to solve their problems.
Go has a very long pedigree: it’s what happens when Alef/Limbo/Newsqueak has a baby with Oberon-2. Saying it ignores the PLT community is saying those people are somehow not part of the PLT community…
It doesn’t ignore the PLT community, it’s part of it.
All of my little PL experiments are also “part of it.” That doesn’t make them good languages.
it’s what happens when Alef/Limbo/Newsqueak has a baby with Oberon-2.
Oberon-2 is the only one of those languages with any “pedigree” outside of Bell Labs fanboyism. And Oberon-2 lives on, basically, because Go was influenced by it. It’s basically a leaf node in the tree of PLT influence.
Paying attention to the PLT community also doesn’t make a language good. What’s your point?
Not paying attention to the PLT community as a programming language designer leads to the same basic “derived from C” semantics with pretty much all the same problems.
Panics due to nil pointers are just as common as NullPointerExceptions in Java. And the only advantage over C, 50+ years later, is that they don’t corrupt memory. Not nothing, but kind of embarrassing. And the PLT community has come up with lots of ways to avoid this!
And not everyone in the PLT community agrees that the ways to avoid it are worth the tradeoffs.
It’s kind of ironic, maybe, that their avoidance of a Result type led to multiple return values, which have to be special-cased all over the place.
Or that a Result type, which forces you to check the error (something people write linters for instead), could also be reused as a Maybe/Optional, a fairly basic mechanism for avoiding null pointer dereferences, too.
shrug
Personally I weight programming language practice more highly than programming language theory. From the arguments on this site and others you would think that successful software cannot be built in a language without a sophisticated type system, but a lot of people find that Go is a great language to produce software in, presumably because the type system isn’t as important as having great tooling or a simple language or a solid ecosystem or good documentation or decent performance, effortless static compilation, familiar syntax etc etc.
An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning: another instance of the fact that the future lies with the youth.
— Max Planck
My serious suggestion: if you find yourself writing “I don’t want to sound mean,” then immediately stop and delete the sentence. Just don’t say it. It almost never helps. It’s almost never as funny or clever or insightful as you think it is.
I don’t know. I don’t think we should be berating every random project we come across, but there is a point where I think the harm inflicted by the thing being criticised is far greater than that of the harshness of the criticism.
I don’t know. I don’t think we should be berating every random project we come across, but there is a point where I think the harm inflicted by the thing being criticised is far greater than that of the harshness of the criticism.
I think people can disagree—and even criticize—without being mean. In this case, for example, the author can say about 90% of what the blog post has and simply remove the snark.
the author can <…> simply remove the snark
But why should anyone do it? Snark is okay. We are humans, we have emotions, and snark is one of the means of conveying emotion. This is not some kind of ultra-pasteurized “professional” setting that must be devoid of all humanity to please the gods of corporate corporatism; this is a blog.
Of course, emotions (like everything else) are only good in moderation, but I don’t see anyone disagreeing or acting to the contrary.
But why should anyone do it?
I said why in my first post, but I’ll repeat it here: “[Snark] almost never helps. [Snark is] almost never as funny or clever or insightful as you think it is.” It goes without saying, but I’ll say it anyhow: you may disagree.
we have emotions, snark is one of the means of conveying emotion
I agree with both of these, and I definitely didn’t want to argue for emotion-free or (as you put it nicely) ultra-pasteurized prose. I think it boils down to what I said above: I think snark is pretty much always unhelpful. Not humor, not anger, not emotion in general—snark.
My serious suggestion: if you find yourself writing “I don’t want to sound mean,” then immediately stop and delete the sentence. Just don’t say it. It almost never helps. It’s almost never as funny or clever or insightful as you think it is.
See, I generally agree with this and have huge respect for the Go developers. That said, assuming that this sentence is wholly about being ‘funny’ or ‘clever’ is harsh.
If you are trying to express incredulity, there’s only so many ways you can say “It’s surprising to me that someone thought this was a good idea.” without it implicitly casting shade on the creators. That doesn’t change that “It’s surprising to me that someone thought this was a good idea.” can still be a valuable idea to express.
Unrelated: Thanks for catching the typo! I hope you liked the post overall.
[A]ssuming that this sentence is wholly about being ‘funny’ or ‘clever’ is harsh.
If you are trying to express incredulity, there’s only so many ways you can say “It’s surprising to me that someone thought this was a good idea.” without it implicitly casting shade on the creators. That doesn’t change that “It’s surprising to me that someone thought this was a good idea.” can still be a valuable idea to express.
…I hope you liked the post overall.
(I’m going to respond to these in reverse order.) First, I did like the post overall. Thanks for writing it. You helped me to put a name on things about Go that I dislike but that I thought were unrelated: specifically, the problem of returning (value, error) in concurrent code, and (one part of) the weirdness of the new iter package.
Second, you’re right that I should assume good intentions. I understand what you say here about trying to say “It’s surprising to me that someone thought that this was a good idea.” That said, the way you said it pulled me out of the post and made me groan. We can agree to disagree about whether that part of the post was the right way to go.
I’ve been using Go since 2011. 90% of my open source code is Go.
It is not a good language. It is easy, not simple.
It has a bad type system. It has warts that paper over serious issues. There are many footguns.
The build system is bad. Modules are bad at scale. There are lots of bugs in the package management system and absolutely no visibility into why things fail. GOSUMDB is bad. Conditional compilation is incomplete and hamstrung. cgo is slow (may have improved since 1.18, can’t confirm). Refactoring Go code is a nightmare. None of these issues crop up if you’re new or your project is small.
Structural interfaces are a mistake. ffs Rob Pike open a book written after the 70s.
The type system is too brittle for good concurrency abstractions, but the language has too many footguns for 99% of programmers to do it properly without them. I’ve worked in multiple companies that all completely misunderstand concurrency in Go at the highest levels.
Most people don’t really understand Context. It also doesn’t perform very well, and it conflates a bunch of concepts. I think it was shooed in for Ajmani; bad call.
A lot of the auxiliary blessed packages in golang.org/x are substandard. For example the IP packages in x/net are awful. It railroads Go into only being good for things that are included in the box. Web servers, high throughput networking. The type system isn’t even really good enough to abstract CLI argument parsing very well, so that’s a slog every time.
A few good things, mainly around the runtime:
The GC is pretty good. Hat tip to the folks that were hired to fix that (starting Go 1.3 onwards). I think Clements was a big part of this and deserves to replace Cox.
It’s one of only two languages I’m familiar with that pick the best concurrency threading primitive (the other being Haskell).
The custom toolchain is unfortunate, but the tooling is workable, unlike other languages (i.e. first-class profiling, although 95% of programmers never use it).
The race detector is fantastic. I think it has been first class since 1.2 or 1.5, I can’t remember. Good effort, I think, by Vyukov.
I’ve not found the concurrency story to be a problem, although there are parts of the language where I’d prefer to have more sophisticated forms:
immutable/const types, value or immutable captures for lambdas, slice (as a view) vs growable array, a proper pure enum (vs tagged union) rather than the const/con plus iota game copied from Limbo.
For the concurrency, it is largely good. As I prefer CSP to threads+locks, and this is an improvement over libthread (or libtask) with C.
As to context, I’ve used it in a way apparently not envisaged by most folks (or the docs), namely applying a timeout/lifetime to comms channels, vs any form of ‘request’ timeout. As such I could safely store it in objects, vs pass it as a parameter.
I’ve worked in multiple companies that all completely misunderstand concurrency in Go at the highest levels.
I’d be really interested to hear (even highly abstracted/redacted versions of) these war stories.
No, zero values were Go’s biggest mistake. I wrote about this here:
Of course it is also absolutely inexcusable that Go has multiple return values, but not tuples (but it has special support for calling a 2-return value function inside the argument position of a call to a function that takes two parameters…).
Regarding the first post, why do you think nulls preclude ADTs? Java has proper ADTs now (records and sealed classes/interfaces) and they do have nulls.
Is there something specific about Go’s nulls that preclude ADTs, or do you mean it just can’t be made ergonomic?
Not nulls (nil), but zero values.
In Rust (or C++, Java, C#, etc) if you define types like this:
pub struct NonZeroI32 {
    inner: i32,
}

impl NonZeroI32 {
    pub fn new(x: i32) -> Option<NonZeroI32> {
        if x == 0 { return None; }
        Some(NonZeroI32 { inner: x })
    }
}

pub enum MyEnum { A(i32), B(u32) }
then code outside that module can’t construct a NonZeroI32 directly, and constructing a MyEnum requires selecting one of the enum’s cases. These properties allow Rust code to enforce dynamic constraints via the type system.
In Go, the existence of zero values means that other modules can freely construct a NonZeroI32 { inner: 0 }, which means you can’t rely on any given value having been created with valid contents. But what’s worse is that MyEnum would have a zero value of … what, exactly? How does the compiler decide which case is zero?
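A minimal Go sketch of the first half of that problem (the type and constructor names are illustrative): even when a constructor enforces an invariant, the zero value is always reachable without it.

```go
package main

import "fmt"

// nonZero is meant to be built only through New, which rejects zero...
type nonZero struct{ inner int32 }

func New(x int32) (nonZero, bool) {
	if x == 0 {
		return nonZero{}, false
	}
	return nonZero{inner: x}, true
}

func main() {
	// ...but the zero value is always available: no constructor runs,
	// so the invariant "inner != 0" cannot be enforced by the type.
	var z nonZero
	fmt.Println(z.inner) // 0
}
```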
So in Rust terms, the “zero value” for type X is what <X as Default>::default() returns. And Default is auto-implemented for all types in Go and you can’t change it. Is that right?
Yep, pretty much.
It works ok-ish for Go because it’s just part of the language: every struct can be zero, and code needs to handle that case. Or panic, for example:
type Foo struct {
	// ...
	ok bool
}

func NewFoo() Foo {
	return Foo{ /* ... */ ok: true }
}

func (f *Foo) DoSomething() {
	if !f.ok {
		panic("uninitialized Foo")
	}
	// ...
}
It’s not my favorite part of the language, and the Go developers’ response is (as always) “if your code crashes because a user held it wrong, then just tell them to stop doing that”. There’s much less interest in compile-time correctness than in other modern languages in the same niche.
I haven’t heard a good reason why it can’t just be the first case.
That would be the most Go-like solution, but ADTs in other languages don’t attach special meaning to the order of cases. Semantically it would be a problem.
(Using a pseudo-Go syntax for the sake of example)
type Foo enum {
	A(int)
	B(string)
}

var f Foo
switch f.(enum) {
// Selected branch depends on order that
// A and B are defined in.
case Foo.A(x): ...
case Foo.B(x): ...
}
The Odin version of that code, which actually supports that form is:
package hello

import "core:fmt"

Foo :: union { int, string, }

main :: proc() {
	f : Foo
	switch fv in f {
	case int:    fmt.printfln("int: %v", fv);
	case string: fmt.printfln("string: %v", fv);
	case nil:    fmt.printfln("uninit");
	}
}
since it defines the ‘zero value’ for tagged unions as being ‘nil’.
Otherwise one can add ‘#no_nil’ to the union and have the first case, ‘int’ here, be the ‘zero value’. Exactly as proposed above for Go.
Interesting, because it’s one of my favorite parts of the language: it allows me to render views without worrying whether a value has been filled in, since values aren’t all defaulted to nil, which would cause an exception.
The author is correct. Go would have better simplicity, orthogonality, and composability if it had tuple values. I design languages as a hobby, and I wouldn’t design a language like Go for these reasons.
Based on the same technical design considerations, Go functions should not accept multiple arguments, in the way they do now. Instead, they should accept a single argument, which can be a tuple. Just as you should be able to easily abstract over the result of a function, you should also be able to easily abstract over a function’s argument or argument list. The languages I design work this way.
I don’t want to make a big deal about this, because none of the top ten languages are designed the way I prefer, and I don’t want to say that they are all mistakes or practical jokes. People seem to use them and get things done despite what I consider to be excess complexity, non-orthogonality, and lack of composability.
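The single-argument style described above can be approximated in today's Go by bundling the parameter list into one struct value (the names `scaleArgs` and `scale` are illustrative, not a proposal from the comment):

```go
package main

import "fmt"

// scaleArgs bundles what would otherwise be two parameters into a single
// value, i.e. the "one tuple argument" style.
type scaleArgs struct {
	Value  float64
	Factor float64
}

func scale(a scaleArgs) float64 {
	return a.Value * a.Factor
}

func main() {
	// The entire argument list is one value that can be stored,
	// passed around, and abstracted over.
	args := scaleArgs{Value: 2, Factor: 3}
	fmt.Println(scale(args)) // 6
}
```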
I’m still unsure if I subscribe to the “all functions should only take a single argument” thing. On the one hand it sounds elegant; on the other hand, it just doesn’t match how most methods, for example, work.
Methods often have two arguments: one object/struct/piece of state that you’re acting on, and then some additional data which you pass in. I’m still not sure how comfortable I’d feel treating this instead as ‘a single tuple argument’, since the semantic distinction (the value that is acted upon vs. the additional data) is useful.
To be honest, I’ve tried several design variants, and my latest language uses the convention of 2 curried arguments in the method case: the “primary” argument and the “some additional data” argument. I may have been somewhat influenced by Python, which, at least at the syntactic level, has curried methods. I’m also influenced by Haskell, as I didn’t know that what Python does is called “currying” until I learned Haskell.
Python doesn’t have currying…
Try this in the Python REPL:
>>> a=[1,2,3]
>>> f=a.append
>>> f(4)
>>> a
[1, 2, 3, 4]
Method calls are curried. In this case, a.append returns a function which, if called, appends an element to the list a.
I see what you’re saying, but currying is a general case for functions of any arity.
What you have above is called a bound method.
Let’s not create confusion by messing around with definitions. It’s important to have standard terminology that has the same meaning across different programming languages, otherwise you can’t have a meaningful discussion about programming language design.
The definition of currying is this: a function is curried if you call it with some of its arguments, and it returns another function that when called consumes more of its arguments. You can implement curried functions in Python, Javascript, and any language that supports function closures.
All Python method calls are curried, as a built-in feature. It doesn’t matter that the Python community doesn’t use this terminology.
What I originally said is that this feature of Python was part of the inspiration for curried method calls in the new language I am developing. I was trying to communicate by using exact language that has a generally accepted meaning.
“currying is a technique that transforms a function with multiple arguments into a sequence of functions, each taking a single argument, allowing for partial application and creating new functions with fewer parameters.”
It’s not the same thing as what you’re describing. Even bound methods aren’t currying, because a bound method doesn’t produce a function that receives the next argument.
I think you are over-generalizing the common definition of currying in your initial comment. You’re right that Python does have first-class functions, and first-class functions can be used to implement currying, but Python does not have ergonomic facilities for creating or calling curried functions in the common sense of Haskell or the lambda calculus. (Although with decorators, partial implementations of currying are possible and more ergonomic than you’d expect!)
To clarify, here is a function of three arguments, and a curried version of that function in the sense of the lambda calculus:
def add3(a, b, c):
    return a + b + c

def curry3(a):
    return lambda b: lambda c: a + b + c
The important difference is how you call them. add3 can be called with all three arguments in one pass, but curry3 requires three stages of argument passing to actually get a result:
add3(7, 8, 9)
curry3(7)(8)(9)
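For comparison, the same pair can be written in Go, where closures make the curried form explicit (this is my translation of the Python example, not code from the thread):

```go
package main

import "fmt"

// add3 takes all three arguments at once.
func add3(a, b, c int) int { return a + b + c }

// curry3 is the curried form: each call consumes one argument and
// returns a function awaiting the next.
func curry3(a int) func(int) func(int) int {
	return func(b int) func(int) int {
		return func(c int) int { return a + b + c }
	}
}

func main() {
	fmt.Println(add3(7, 8, 9))   // 24
	fmt.Println(curry3(7)(8)(9)) // 24
}
```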
There’s Haskell and other languages with currying :D
(Just joking. I don’t even know if I would say that Haskell functions have multiple arguments or just one.)
There are two versions of Haskell, one with multiple arguments, and one without. It’s non-deterministic which will be used when you invoke ghc.
Two questions: (1) Really? (2) What in the world does that mean and how does it even work? (Also, long time no talk: hope you’re good.)
Just a metaphysics joke. It’s common to describe Haskell as not really having multiple argument functions, just currying with single argument functions.
As far as I can tell, this doesn’t correspond to any real distinction–whether you “really” have multiple argument functions, or just single argument functions does not make any difference. So if there were two different versions of Haskell that differed that way, you could write a version of GHC that dynamically switched which one you invoke at compile time.
And doing pretty well, hope you’re doing well too!
If you think of it as receiver + message, where the name of the method is the label attached to a record of arguments, it collapses back to being “every function (object) accepts exactly one argument (message)”
From another site’s thread about this same blog post, I wrote:
–
I love multiple return values, and I’m glad Go included it. [..] The way we addressed this in Ecstasy (xtclang) was to imagine the junction between the call site and the function as a nexus of sorts, where the caller can indicate that they want multiple result values back, or the caller can indicate that they want a tuple of result values back, and the callee doesn’t know and doesn’t care. In most cases, the design allows for the elimination of allocation of a tuple altogether, although if the caller wants to get and hold onto a tuple, the allocation is going to be realized. So given some function (Boolean, Int, String, Int[]) foo(), the caller can do any of:
Int x = foo();
(Int x, String s) = foo();
(Int x, String s, Int[] a) = foo();
(_, String s, Int[] a) = foo(); // etc.
Tuple t = foo();
Tuple<Int, String, Int> t = foo();
// this enters the if block iff the Boolean return is True
if ((Int x, String s, Int[] a) := foo()) {...}
… and so on.
The same applies to parameters as well, in that the caller can provide the parameters as a tuple instead of the individual, separate values. This doesn’t get used very often outside of reflection, but it is quite useful when you need it. The compiler does all of the heavy lifting, which includes type safety enforcement, etc.
For people who are curious, here are a couple of relatively recent proposals that address some of what the post complains about.
The first had some traction with core team members but seems to have fizzled. The second also hasn’t gone anywhere in some time.
proposal: spec: tuples as sugar for structs
Interesting one. I wrote quite some code in Limbo. It had tuples. You would normally have to unpack the tuple to access the items, but you could also access the fields with .t0, .t1, and so on. I don’t think that was in the language spec, but it was used. Example: https://github.com/mjl-/ssh/blob/master/appl/cmd/sftpfs.b#L742
Btw, in practice, I just sometimes miss tuples for sending over channels. Those are often composite types that I don’t use anywhere else. It’s a little cumbersome to define a type for it. But it’s also not a big deal, and probably helps readability.
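A sketch of that "one-off type just to send two values over a channel" situation in Go (the `kv` type is illustrative):

```go
package main

import "fmt"

// kv is the kind of one-off composite type you end up declaring in Go
// where a tuple would do; it exists only to carry two values on a channel.
type kv struct {
	key string
	val int
}

func main() {
	ch := make(chan kv, 1)
	ch <- kv{key: "answer", val: 42}
	m := <-ch
	fmt.Println(m.key, m.val) // answer 42
}
```

As the comment notes, the named type is a little cumbersome to define but arguably helps readability at the receiving end.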
For those interested, see https://www.vitanuova.com/inferno/papers/limbo.html. Can’t link to sections… Look for tuple. Example:
chan of (int, string)
c <-= (123, "Hello");
Definitely not, given its success.
When I was young I was really obsessed with engaging in LanguageA-vs-LanguageB flame wars. Since then something switched in my head and I completely stopped seeing sense in it, for one simple reason: I started to believe that “design mistake”, “shortcoming”, etc. are simply the wrong terms/approach. It’s all just about the set of design compromises chosen by authors to achieve their goals, no more no less.
And for me as a programmer it’s just a matter of taste. For example, I personally dislike LISPs. However, this doesn’t prevent me from being glad for people who are glad using them; it’s just that I won’t choose any kind of LISP for my next project. For another example, in the far past I used to dislike Perl and actively like Python, to the extent of trying to convince everyone of the correctness of this point of view. How funny it is to remember this now, when my tastes (at least regarding these two) have almost switched to the opposite.
It’s all just about the set of design compromises chosen by authors to achieve their goals, no more no less.
The same cop-out all the time.
Go is very poorly designed and archaic. It’s not simple, just primitive. Its design is full of glaring mistakes. Go could have been a much better designed language without compromising the good properties that make it successful. It’s not some “set of design compromises” that made all the crappy parts necessary. But it doesn’t matter. It is successful irrespective of how poorly it is designed, because a technology’s success has very little to do with its design. Just like the most popular music is generally cheap pop on autotune, and the most popular movies are the 15th version of Avengers. Or how JS has a very poor design but is used everywhere, because … web browser.
Go had Google’s support, which gave a lot of people a lot of confidence that it was worth investing in, and its easy concurrency and deployment story met the needs of a growing market of scalable web-first systems and an ever-growing demand for replaceable mid-tier developers who just need to get the job done. And most real-life backend systems just shuffle data between an HTTP endpoint and a database; even if you screw up a bunch of things, the liveness probe will just restart the service, or the user will complain and try again. And generally most software is only as good as the absolute bare minimum required for the business not to suffer too much.
I think you are objecting to treating Go’s success as direct proof of its design quality.
A programming language’s popularity can be attributed to many factors, and I think you would be correct to point out that, since quality is not the only factor, a programming language’s popularity and its quality are not directly equivalent. So a more cautious statement could instead have been “while programming language popularity is not the same as quality, Go’s rapid adoption seems to imply that Go has at least made some reasonable design decisions”.
It is then ironic that your own objection appears to have been made just as incautiously.
The same cop-out all the time. Go is very poorly designed and archaic…
Software engineering is applying computer science to solve problem(s).
Here, your counterclaim appears to be that the Go programming language and tooling, when objectively measured against the general problem(s) of modern programming, are less fit for purpose than other (unspecified) programming languages.
I hope that, when stated that way, you can agree that this is not possible: there is as yet no agreed-upon objective measurement of design quality for general-purpose programming languages.
Different programming languages have different tradeoffs and design compromises.
If you have a specific problem, you can implement it in a number of different languages, and measure how those perform in an objective way. If experts in that language confirm the implementation is sound, and you repeat that enough times, you might start to see some patterns and draw some tentative conclusions about the suitability of different programming languages for different problem contexts. If the designers stated intent was to design a language that was a good candidate for programs in one such specific problem context then you might be able to form a reasonably well-informed opinion about “programming language design quality” based on those conclusions.
I think multiple-return actually has another additional poor interaction which the article doesn’t touch on, which is how it changes the behavior of the := operator in two confusing ways.
The classic example I have here is that in Go, unused variables are an error, so err := f() will be an error if you ignore the err variable entirely.
However, with multiple return, as in x, err := f(), ignoring that err variable might or might not be a compiler error, depending on other context. Specifically:
// compiler error: 'declared and not used: err'
func f1() int {
	val, err := someFunction()
	return val
}

// but this compiles fine, even though the second 'err' isn't used
func f2() int {
	val1, err := someOtherFunction()
	if err != nil { return 0 }
	val2, err := someFunction()
	return val1 + val2
}
The reason is that := declares a variable, except that with multiple return it assigns to any existing variables of the same name, as long as at least one variable (val2 in this case) is new. That’s confusing, and in my experience it leads to real bugs.
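A self-contained demonstration of that redeclare-vs-assign rule (the function `f` is a stand-in): taking the address of `err` before and after the second `:=` shows it is the same variable, while an inner scope silently creates a fresh one.

```go
package main

import "fmt"

// f stands in for any function returning (value, error).
func f() (int, error) { return 1, nil }

func main() {
	v1, err := f()
	p1 := &err
	v2, err := f() // same scope: err is assigned, not redeclared, since v2 is new
	p2 := &err
	fmt.Println(p1 == p2) // true: both lines refer to the one err variable

	if v3, err := f(); err == nil { // new scope: this err shadows the outer one
		_ = v3
	}
	_, _ = v1, v2
}
```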
The second confusing interaction here is that the declare-vs-assign thing also changes the type of variables based on surrounding code.
For example, if you have:
func concreteError() (*os.File, *os.PathError) {
	return nil, &os.PathError{}
}

func main() {
	// f, err := os.Open("/file") // return type (*os.File, error)
	f2, err := concreteError()
	// 'err' is of type '*os.PathError' here, but if someone later, without
	// editing this line, uncomments the first line in main, 'err' changes to
	// type 'error': it becomes an interface. This is weird spooky action at
	// a distance.
	_, _ = f2, err // (keep the compiler happy about unused variables)
}
I’ve seen both of these issues a few times in practice, with the first one being more of an issue; the second doesn’t really occur for err (since everyone uses the error interface), but rather for generic “value” names, when someone accidentally re-uses the same variable name for an interface and a non-interface type.
I loved this post. Thanks for sharing it. Multiple value returns used to be a source of contention with new gophers, who often would ask “can I use a struct here instead?”
My guess is that the core team is targeting a non-backward compatible release sometime soon. The reason I say so is that they have been working on a tool to modernize Go source code bases, sort of a supercharged “go fix”. If they decided to draw inspiration from Haskell and use sum types for value and error handling in a non-backward compatible way, the modernizer tool could make this idea viable. Ah, and also, go.mod now supports compiler selection, so technically they can make a major release that covers this possibility while giving the ecosystem time to migrate.
My guess is that the core team is targeting a non-backward compatible release sometime soon.
Never say never, but I doubt it based on this post: https://go.dev/blog/compat.
That raises an obvious question: when should we expect the Go 2 specification that breaks old Go 1 programs?
The answer is never. Go 2, in the sense of breaking with the past and no longer compiling old programs, is never going to happen. Go 2 in the sense of being the major revision of Go 1 we started toward in 2017 has already happened.
I loved this post. Thanks for sharing it.
Thank you for the kind words! (I agree with the other replier that a fully non-backwards compatible change is unlikely, alas.)
This (of course) doesn’t compile since (string, error) is not a type
This is not a bug but a feature of the language. It forces you to do something about the error: log and ignore it, do an early return, or do something else. This brings the code to production quality much more quickly for me than other languages do.
It definitely doesn’t force you to handle the error. The linter might shout at you in certain cases at most.
I think it has been talked about many times in many places, but the point isn’t to stop you from ignoring the error; ignore it as you will.
But what it does do is make sure you know there is one. Some functions may have implicit errors, nil-pointer-type crap, and you don’t know you have a bug there. But that’s kinda the point: then you don’t have an error, you have a bug.
With errors as values, you make sure you are aware of the known places where things might go boom. If you choose to ignore them, for “the network is zero-latency, disk space is unlimited, and user input is perfect” type reasons, that’s okay, but you can’t say you didn’t know.
force
Java doesn’t force you either. You can catch and ignore checked exceptions.
Which is a deliberate action you have to take, enforced by the compiler. If a function previously hasn’t thrown a checked exception and is refactored to throw one, now all the call sites are forced to handle that case.
I can’t help thinking that these “multiple values” shouldn’t be called that, because they actually return one thing that needs to be unpacked. In Common Lisp it’s different, and super useful.
Say your function returns an element:
(defun foo ()
  :hello)
you call it and bind its output to a variable:
(setf var (foo))
Now you decide that foo should return multiple values: you can make the change and everything keeps working, you don’t need to update all the callers.
(defun foo ()
  (values :hello t))
if you do this:
(setf var (foo))
then var is still :hello; the second value was ignored. You didn’t break anything. You must explicitly use multiple-value-bind, or nth-value, to capture the multiple values.
This is one of the things that makes extending programs and refactoring easy.
Now you decide that foo should return multiple values: you can make the change and everything keeps working, you don’t need to update all the callers.
Forgive me if I didn’t understand your point, but the callers would be silently ignoring the new return values of the function, which might be OK or not depending on the situation. If you go the other way around (turning a tuple into a single value) you actually break every caller, as they’d be expecting more than one value but now everything else is nil. The return tuple is part of the type of the function in most (all?) typed languages with them because of that.
What you seem to be describing is more a quality of dynamic languages in general: changing return types and the code working its way around the fact. I agree it’s a nice feature for refactoring but we lose some assurances with that. It has been quite a while since I’ve written CL so maybe I’m lost here.
the callers would be silently ignoring the new return values of the function, which might be ok or not depending on the situation.
correct. I think it’s great. There’s a difference between returning one data structure (against which we could unpack values) and returning one or multiple values. An example.
Dexador’s (dex:get "http://url") function returns multiple values. It’s a choice: it could have returned an object containing everything about the request. The first value is the response body, which is what we generally want. The second value is the response status, the third a hash-table with the response headers… there are 5 in total. The library can return a 6th value without impacting any existing callers. If you want the new feature, use it; otherwise, you’re good.
If you go the other way around (turning a tuple into a single value) you actually break every caller
yes, but in which language? In Go, this change breaks the callers. In CL, this change might break the callers too, or the code after them (say, if you returned a list or a struct and then something else). But returning multiple values is not returning a tuple: turning a values form into a single value might confuse callers that expect a second value, but they’ll just get nil instead, so it won’t break them either.
The return tuple is part of the type of the function in most (all?) typed languages
we can also type CL functions’ arguments and return types and get some warnings at compile time, but the languages are too different, which is probably why my comparison falls short!
a quality of dynamic languages in general
this multiple values feature is certainly specific to CL (and Scheme https://www.gnu.org/software/guile/manual/html_node/Multiple-Values.html), definitely not Python, nor Elixir.
This was actually one of the most surprising things that took me a bit to internalize when I was learning Common Lisp.
I am not a Go developer, but I love reading these kinds of deep dives into languages. Odin also has multiple return values, and this made me wonder whether they’re such a good idea after all. Something I could ask the Odin developer(s), I’m sure.
Why have for loops at all, when while/do is simpler?
That’s backwards: for loops can be seen as an extension of while loops, and tuples can be seen as an extension of shoehorned multiple values.
Cmd-F “elixir”: zero results. How strange.
I think elixir gets this right, by having single return values but also having first-class tuples, pattern-matching, and well established idioms for both. Because the tuples are just plain values you can pass them around as normal, and/or use pattern matching to get at their contents easily and make decisions based on them, or you can put them in data structures for later processing. Anything you can think of, because tuples are just plain values.
case some_operation() do
  {:ok, thing} ->
    # ...
  {:error, e} ->
    # ...
end
So in practice you get a lightweight kind of multiple return values but without any downsides of making them a special feature of the language.
So much of elixir has this feeling of being composed of smaller features that slot together into a coherent whole. Go on the other hand feels like the opposite.
Of course, this is not a new invention in Elixir: Erlang works exactly the same, and all languages in the ML/Haskell family too (including, dare I say it, Rust).
TypeScript also bolts tuples onto JS, in that you can have an array type like [string, number], which is the type of arrays of length 2 whose first element is a string and whose second element is a number. And you can unpack it, since JS supports array destructuring.
Yeah even Python has this, and it’s really nicely done. Common feature, but not in Go it seems.
Rust has it. I bet you can even do it in C++ if you use enough extensions
To me, the thing that makes this feature significantly more useful in e.g. Haskell, SML and Rust, is the concept of exhaustiveness checking: In those languages the compiler will tell you whether you’ve matched on all possible values, even with deeply nested pattern matching. I haven’t seen this work satisfactorily in a language without ADTs and static (HM-style) type checking.
I have no experience with Elixir. Can you give me the three or five sentence pitch why it’s worth learning / looking into (beyond what you already said)?
It fills the same use cases as Erlang, but with additions like the value |> fn1 |> fn2 |> fn3 pipelining operator.
It’s worth looking into if you like covering yourself in the poo that is Ruby syntax and coding practice. There are much nicer languages built on the BEAM.