The long season of langdev
40 points by eschew
There are still decades-old low-hanging fruit out there that haven’t been incorporated into mainstream languages, or in some cases into any current language at all. One of my particular hobby horses is delimited continuations, which haven’t even made it into Scheme after 30 years, despite being an obvious upgrade to the existing continuations. They’re in Racket and also, somewhat unexpectedly, Java (the new virtual threads are based on delimited continuations).
Another point I think the article misses: most languages evolve, and there has been a steady accumulation of features in existing languages during this “barren” period. Haskell has received a ton of facilities for type-level programming, C# has gotten a lot of features (pattern matching, records, all the performance-related value/reference stuff), Rust is not at all what it was back then, Scala 3 is a huge overhaul, even Java has gotten in on the act, etc.
One of my particular hobby horses is delimited continuations, which haven’t even made it into Scheme after 30 years, despite being an obvious upgrade to the existing continuations
They’re slowly coming back, they’re just called “effects” now, and they’re now typed! excited
oh oh wait! Explain to me (links will work) the connection between delimited continuations and typed effects. I sort of understand both of these, but I don’t yet see how they correspond. And suddenly getting to see an isomorphism is the BEST KIND OF new learning!
Hmm, not sure what would be the best reading for this, but I have a relevant blog post here and a langdev StackExchange answer here.
Basically many languages with effect systems (e.g. OCaml 5’s effect system, Koka) use delimited continuations under the hood. When you invoke an effect you pass the continuation of the effect invocation to the effect handler. The effect handler can then call the continuation with the value of the effect being invoked.
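A rough sketch of that handoff, using Python generators as one-shot delimited continuations (the names and the `("Ask",)` effect are made up for illustration; real effect systems like OCaml 5’s do this natively with multi-purpose handlers):

```python
# Sketch: a generator plays the role of a suspendable computation.
# 'yield' is "perform an effect": it suspends and hands control to
# the handler; the generator object itself is the continuation.

def computation():
    x = yield ("Ask",)   # perform Ask, suspend until the handler resumes us
    y = yield ("Ask",)   # perform Ask again
    return x + y

def handle(gen):
    """A trivial handler: answers every Ask effect with 21."""
    try:
        request = gen.send(None)      # run up to the first effect
        while True:
            if request == ("Ask",):
                # Resuming the suspended computation with a value is
                # exactly "call the continuation with the effect's result".
                request = gen.send(21)
    except StopIteration as done:
        return done.value             # the computation finished

print(handle(computation()))  # -> 42
```

Generators are one-shot (you can’t resume the same suspension twice), which is also the restriction OCaml 5 imposes on its continuations; multi-shot delimited continuations are strictly more general.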
There is some exciting work being done in this area. Beyond effect systems and effectful languages, one of the things that I’m excited about is the Wasm stack-switching instructions, which will make continuations available to languages that compile to Wasm.
Sorry I don’t have a definitive answer, but hopefully the links will be helpful.
Btw, thank you for mentioning tests in the “Why” post. It’s the same reason I’m enthusiastic about effects and am working on them in my toy language.
But I rarely see that offered as an example, and I’ve found that devs really relate to and love that one.
Look at an effect handler as a function that gets called with a suspended stack, does its job, then tail-calls back into that stack.
Doesn’t it look a lot like delimited continuations?
IIRC the work to provide Haskell with delimited continuations was around enabling effect systems.
What advantage does a delimited continuation have over regular continuations?
An example: say you are in function A, which starts a long-running computation that uses continuations so that A can halt it once it has enough data. Halfway through, that computation yields a partial result bundled with a continuation that resumes the computation if you want further results. Say that A wants to hand this continuation to function B (which could be a UI asking the user whether to continue). If the continuation is delimited at function A, then when B calls it, the computation proceeds and returns to B at the point where it would originally have returned to A. If the continuation is a classic undelimited Scheme continuation, it will continue and execute the rest of function A, and A’s caller, and so on up to the top of the stack.
In general they compose better and you can make undelimited continuations from them (since you can have one delimited to the entire program).
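The A/B handoff above can be sketched with a Python generator standing in for the delimited continuation (the functions and values here are invented for illustration):

```python
def long_computation():
    """Yields partial results; each yield is a suspension point."""
    partial = 0
    for i in range(1, 4):
        partial += i
        yield partial   # suspend with a partial result

def A():
    gen = long_computation()
    first = next(gen)       # A runs the computation to its first suspension
    # A hands the suspended computation (the continuation) to B as a value.
    return B(gen, first)

def B(gen, seen):
    # B resumes the continuation. Because it is delimited, control comes
    # back *here* at each suspension and when the computation finishes,
    # not to A's original call site, and certainly not up past A's caller.
    results = [seen]
    for partial in gen:
        results.append(partial)
    return results

print(A())  # -> [1, 3, 6]
```

With an undelimited `call/cc`-style continuation, B invoking it would also re-run the remainder of A and everything above A on the stack, which is exactly the composability problem delimitation fixes.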
I mean, it’s been nearly twenty years since any large release of Scheme has occurred, but I’d like to imagine they’ll get into R7RS-large (I haven’t been following the process, so it’s hard to say to what degree this is wishful thinking), as they’re pretty trivial to implement if code is converted to CPS.
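To give a flavor of why CPS makes this easy, here is a minimal hand-written shift/reset sketch in Python (in a real implementation the compiler does the CPS conversion; the encoding below is a standard textbook one, with the plumbing written out manually):

```python
# reset delimits the continuation: run the body with the identity
# continuation, so continuations captured inside stop here.
def reset(body):
    return body(lambda v: v)

# shift captures the continuation up to the nearest enclosing reset
# and hands it to f as an ordinary, composable function.
def shift(f):
    return lambda k: f(lambda v: k(v))

# Example: reset(1 + shift(k => k(k(5)))).
# The captured continuation k is "add 1", so k(k(5)) = 1 + (1 + 5) = 7.
result = reset(lambda k: shift(lambda c: c(c(5)))(lambda v: k(1 + v)))
print(result)  # -> 7
```

Once every call site threads its continuation explicitly like this, “capture up to the delimiter” is just closing over part of that chain, which is why CPS-based Schemes can add it cheaply.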
Hm, I don’t quite understand the post. If we speak about industrial impact, then Rust, Kotlin, and I think Go and TypeScript didn’t have much impact circa 2013. If we speak about advancing the field of language design itself, then we had two recent revolutions in how to approach generics: Swift and Zig (yes, D can do most of the things Zig can do; no, the trick is to not do anything else, though D certainly belongs to the original list). And, while I can’t say much about it, from the looks of it Lean seems quite important?
we had two recent revolutions with how to approach generics (Swift and Zig)
With Zig you’re referring to comptime, but I’m not too familiar with Swift; what are you referring to? (I only know some stuff about variadic generics.)
Swift solved the generics dilemma: https://research.swtch.com/generic. They have separately compiled generics which still allow for monomorphisation.
Wild that they wrote a 632 page book for just this aspect of the language.
I wonder if this book was funded as a part of Swift development or whether it was extra-curricular. I would think that writing a 600 page book should take at least months of full-time work.
His bio (https://factorcode.org/slava/) implies Apple pays for this work. The fact that it’s hosted on swift.org is also important
Very interesting links, thanks!
Polymorphic code needs to work with any type, and structs don’t contain any identifying runtime information. Worse yet, polymorphic code can be called without providing any values of that type! So the caller needs to pass in type information. Abstractly, we could pass in minimal information and have the polymorphic code look up all the witness tables, but that’s really wasteful (consider calling the function in a loop). So instead Swift’s actual implementation has the caller pass in pointers to every required witness table as extra arguments (automatically handled by the compiler).
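The witness-table-passing idea can be sketched in a few lines of Python (the table layout and names here are purely illustrative, not Swift’s actual ABI; in Swift the compiler threads these tables through automatically):

```python
# A "witness table" is just a record of the operations a type's
# conformance provides -- here, a dict of functions for an
# Equatable-like protocol.
int_equatable = {"equals": lambda a, b: a == b}
# A hypothetical case-insensitive conformance for strings:
str_equatable = {"equals": lambda a, b: a.lower() == b.lower()}

def all_equal(values, equatable_witness):
    # The generic body never inspects the concrete type; the caller
    # passes the witness table as an extra (compiler-inserted) argument.
    eq = equatable_witness["equals"]
    return all(eq(values[0], v) for v in values[1:])

print(all_equal([3, 3, 3], int_equatable))     # -> True
print(all_equal(["Hi", "hi"], str_equatable))  # -> True
print(all_equal([1, 2], int_equatable))        # -> False
```

Passing the tables at the call boundary is what makes the generic function separately compilable: one compiled body works for every conforming type, and the optimizer can still specialize (monomorphize) hot call sites when it sees the concrete type.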
This reminds me of how Go implements its “magically generic” built-in operations on maps and slices. For example, inserting into a map, like:

m["foo"] = 123

compiles down to a call to mapassign, like (pseudo-code):

mapassign(&typeInfoOfMapStringInt, &m, &"foo")

where typeInfoOfMapStringInt is run-time type information akin to that “witness table” concept.
Funnily enough, this approach then didn’t make it into the user-available generics, which just monomorphize.
this approach then didn’t make it into the user-available generics, which just monomorphize
Is this true? I could be misunderstanding the claim (or gc’s implementation strategy for generics has changed significantly since 1.18), but the original generics implementation design document for Go outlines something that I wouldn’t quite describe as full monomorphization. Specifically, the strategy that doc describes is roughly as follows: generic functions don’t compile to a separate concrete instantiation per type; rather, there is one instantiation per “GC shape”, with additional runtime type information (referred to as a “dictionary”) provided as needed. In particular, multiple related types can and do share the same GC shape.
In fact, at a glance the dictionary seems analogous to a witness table, and so, if my understanding is correct, userland generics do use a variant on the approach you mention at the start of your comment.
Ah, interesting! I guess there is some kind of heuristic going on, because if I try it with types of what should be the same shape, I seem to get several instantiations: https://gcc.godbolt.org/z/78nPfEzaz Maybe it has to do with the fact that the functions end up inlined, so they’re not actually used.
edit: Actually they make it very clear what constitutes the same “gcshape”:
Two concrete types are in the same gcshape grouping if and only if they have the same underlying type or they are both pointer types.
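That grouping rule can be sketched as a toy function (the grouping key and the example type definitions are invented for illustration; the real compiler works on its internal type representation, not strings):

```python
# Sketch of Go's "gcshape stenciling": concrete types share one compiled
# instantiation iff they have the same underlying type, and all pointer
# types collapse into a single group.

# Hypothetical named types assumed for this example:
UNDERLYING = {"MyInt": "int", "Celsius": "float64"}

def gcshape(t):
    if t.startswith("*"):        # every pointer type shares one shape
        return "pointer"
    return UNDERLYING.get(t, t)  # named types fold to their underlying type

# *bytes.Buffer and *os.File share an instantiation; so do int and MyInt.
print(gcshape("*bytes.Buffer") == gcshape("*os.File"))  # -> True
print(gcshape("MyInt") == gcshape("int"))               # -> True
print(gcshape("int") == gcshape("float64"))             # -> False
```

Within a shape group, the per-type differences (method implementations, exact type identity) come from the dictionary passed at run time, which is what makes the dictionary feel like a witness table.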
I don’t know if it’s “fewer novel languages are being invented”, or “fewer novel languages are breaking into the mainstream consciousness”. Which could be because we’re now saturated with Pretty Good Languages or because it takes time to find out what wins. If Zig and Hare become ultra-popular in 2030, will people look on 2016-2019 as a golden age of systems langdev, and wonder why nothing happened in 2028?
Anyway, to support Fogus’s point that innovation may happen in purposeful points, some domain-specific innovation happening since 2012ish:
Funny, I always felt the exact opposite. As a language nerd growing up in the 2000s I was starved. Between 2000 and 2010 or so your choices for writing practical programs were: C/C++, Java, and C#, plus PHP and some other weird web stuff like ColdFusion. There was also Python, Perl, Ruby and JS, but nobody wrote much real stuff in those. And if you were a weird nerd like me you also had a handful of functional languages or other off-beat things: Common Lisp, Scheme, OCaml, or Haskell if you went really off the deep end. Then slooooowly scripting languages got into the mainstream for actual application dev, Go pried open the minds of C programmers and node.js did the same for backend webdevs, and you started getting compelling new languages like Dart, Elixir and such in a rapid-fire explosion between ~2009 and ~2013.
It also depends where you measure from. Kotlin, Elixir and Dart might have started between 2010-2013, but they sure as heck didn’t become big in that time. Languages take time to build momentum, if they ever do. When did people actually start using TypeScript in earnest? If I recall, it took a while.
The period from around 2002 to ~2013 was an amazing time to be a langdev geek but the years between 2014 and 2021 were humdrum.
Wat. Rust, anyone? Zig? The half-dozen little nascent spin-offs that they’ve birthed? Gleam? Roc? Sure, it was overall a time of consolidation; that tends to happen after revolutions. (Does anyone seriously miss Boo or Groovy or Fortress, or expect the world to be taken by storm by HaXe or PicoLisp?) The wheel turns, the cycle continues, and it takes time for new ideas to filter between industry and academia and back, or between different branches of them.
Kotlin, Elixir and Dart might have started between 2010-2013 but they sure as heck didn’t become big in that time.
Language nerds probably didn’t wait til they got big, though, at least I didn’t. I wrote a tic tac toe game in the first Kotlin beta that had JS support (0.8?).
Rust, anyone?
First appeared in 2012, according to Wikipedia.
Yeah, but Rust 1.0 hit in 2015, and that’s when it actually kinda started getting popular. IIRC Graydon noodled around with Rust in private for years as well; looking at the rustc repo, the first commit is from 2010. And that first commit is 33,000 lines of OCaml, and some early commit messages imply it was imported from an hg repo.
It’s hard to measure when ideas “start”.
To be fair, the article leads with:
For programming language enthusiasts, the last decade has felt like one of Ballard’s “long seasons.”
And for programming language enthusiasts, it’s fair to assume that they pay attention to a language long before it reaches 1.0.
Zig, for instance, hasn’t reached 1.0 but is still well known.
As robinheghan pointed out, the post was written from an enthusiast’s perspective. People who create and work on new programming languages also tend to be enthusiasts, so the ecosystem of new languages and ideas tends to find an awareness amongst that group much sooner than it takes for any language to become popular.