A 10x Faster TypeScript
156 points by whjms
The TypeScript dev lead posted this response about the language choice on Reddit, for anyone who’s curious:
(dev lead of TypeScript here, hi!)
We definitely knew when choosing Go that there were going to be people questioning why we didn’t choose Rust. It’s a good question because Rust is an excellent language, and barring other constraints, is a strong first choice when writing new native code.
Portability (i.e. the ability to make a new codebase that is algorithmically similar to the current one) was always a key constraint here as we thought about how to do this. We tried tons of approaches to get to a representation that would have made that port approach tractable in Rust, but all of them either had unacceptable trade-offs (perf, ergonomics, etc.) or devolved into “write your own GC”-style strategies. Some of them came close, but often required dropping into lots of unsafe code, and there just didn’t seem to be many combinations of primitives in Rust that allow for an ergonomic port of JavaScript code (which is pretty unsurprising when phrased that way - most languages don’t prioritize making it easy to port from JavaScript/TypeScript!).
In the end we had two options - do a complete from-scratch rewrite in Rust, which could take years and yield an incompatible version of TypeScript that no one could actually use, or just do a port in Go and get something usable in a year or so and have something that’s extremely compatible in terms of semantics and extremely competitive in terms of performance.
And it’s not even super clear what the upside of doing that would be (apart from not having to deal with so many “Why didn’t you choose Rust?” questions). We still want a highly-separated API surface to keep our implementation options open, so Go’s interop shortcomings aren’t particularly relevant. Go has excellent code generation and excellent data representation, just like Rust. Go has excellent concurrency primitives, just like Rust. Single-core performance is within the margin of error. And while there might be a few performance wins to be had by using unsafe code in Go, we have gotten excellent performance and memory usage without using any unsafe primitives.
In our opinion, Rust succeeds wildly at its design goals, but “is straightforward to port to Rust from this particular JavaScript codebase” is very rationally not one of its design goals. It’s not one of Go’s either, but in our case given the way we’ve written the code so far, it does turn out to be pretty good at it.
Source: https://www.reddit.com/r/typescript/comments/1j8s467/comment/mh7ni9g
And it’s not even super clear what the upside of doing that would be (apart from not having to deal with so many “Why didn’t you choose Rust?” questions)
People really miss the forest for the trees.
I looked at the repo and the story seems clear to me: 12 people rewrote the TypeScript compiler in 5 months, getting a 10x speed improvement, with immediate portability to many different platforms, while not having written much Go before in their lives (although they are excellent programmers).
This is precisely the reason why Go was invented in the first place. “Why not Rust?” should not be the first thing that comes to mind.
I honestly do think the “Why not Rust?” question is a valid question to pop into someone’s head before reading the explanation for their choice.
First of all, if you’re the kind of nerd who happens to follow the JavaScript/TypeScript dev ecosystem, you will have seen a fair number of projects either written, or rewritten, in Rust recently. Granted, some tools are also being written/rewritten in other languages like Go and Zig. But, the point is that there’s enough mindshare around Rust in the JS/TS world that it’s fair to be curious why they didn’t choose Rust while other projects did. I don’t think we should assume the question is always antagonistic or from the “Rust Evangelism Strike Force”.
Also, it’s a popular opinion that languages with algebraic data types (among other things) are good candidates for parsers and compilers, so languages like OCaml and Rust might naturally rank highly in languages for consideration.
So, I honestly had the same question, initially. However, upon reading Anders’ explanation, I can absolutely see why Go was a good choice. And your analysis of the development metrics is also very relevant and solid support for their choice!
I guess I’m just saying, the Rust fanboys (myself, included) can be obnoxious, but I hope we don’t swing the pendulum too far the other way and assume that it’s never appropriate to bring Rust into a dev conversation (e.g., there really may be projects that should be rewritten in Rust, even if people might start cringing whenever they hear that now).
While tweaking a parser/interpreter written in Go a few years ago, I specifically replaced a struct with an interface{} in order to exercise its pseudo-tagged-union mechanisms, together with the type-switch form.
https://github.com/danos/yang/commit/c98b220f6a1da7eaffbefe464fd9e734da553af0
These days I’d actually make it a closed interface, such that it is more akin to a tagged union. I did that for another project, which was passing around instances of variant structs (i.e. a tagged union) rather than building an AST.
So it is quite possible to use that pattern in Go as a form of sum-type, if for some reason one is inclined to use Go as the implementation language.
That is a great explanation of “Why Go and not Rust?”
If you’re looking for “Why Go and not AOT-compiled C#?” see here: https://youtu.be/10qowKUW82U?t=1154s
A relevant quote is that C# has “some ahead-of-time compilation options available, but they’re not on all platforms and don’t really have a decade or more of hardening.”
That interview is really interesting, worth watching the whole thing.
Yeah, Hejlsberg also talks about value types being necessary, or at least useful, in making language implementations fast.
If you want value types and automatically managed memory, I think your only choices are Go, D, Swift, and C# (and very recently OCaml, though I’m not sure if that is fully done).
I guess Hejlsberg is conceding that value types are a bit “second class” in C#? I think I was surprised by the “class” and “struct” split, which seemed limiting, but I’ve never used it. [1]
And that is a lesson learned from the Oils Python -> C++ translation. We don’t have value types, because statically typed Python doesn’t, and that puts a cap on speed. (But we’re faster than bash in many cases, though slower in some too)
Related comment about GC and systems languages (e.g. once you have a million lines of C++, you probably want GC): https://lobste.rs/s/gpb0qh/garbage_collection_for_systems#c_rrypks
Now that I’ve worked on a garbage collector, I see a sweet spot in languages like Go and C# – they have both value types deallocated on the stack and GC. Both Java and Python lack this semantic, so the GCs have to do more work, and the programmer has less control.
There was also a talk that hinted at some GC-like patterns in Zig, and I proposed that TinyGo get “compressed pointers” like Hotspot and v8, and then you would basically have that:
https://lobste.rs/s/2ah6bi/programming_without_pointers#c_5g2nat
[1] BTW Guy Steele’s famous 1998 “growing a language” actually advocated value types in Java. AFAIK as of 2025, “Project Valhalla” has not landed yet
and very recently OCaml, though I’m not sure if that is fully done
Compilers written in OCaml are famous for being super-fast. See eg OCaml itself, Flow, Haxe, BuckleScript (now ReScript).
Yeah, I’m kind of curious about whether OCaml was considered at some point (I asked about this in the Reddit thread, haven’t gotten a reply yet).
OCaml seems much more similar to TS than Go, and has a proven track record when it comes to compilers. Maybe portability issues? (Good portability was mentioned as a must-have IIRC)
Maybe, but Flow, its main competitor, distributes binaries for all major platforms: https://github.com/facebook/flow/releases/tag/v0.264.0
Not sure what more TypeScript would have needed. In fact, Flow’s JavaScript parser is available as a separate library, so they would have shaved off at least a month from the proof of concept…
If you want value types and automatically managed memory, I think your only choices are Go, D, Swift, and C#
Also Nim.
Also Julia.
There surely are others.
Yes good points, I left out Nim and Julia. And apparently Crystal - https://colinsblog.net/2023-03-09-values-and-references/
Although thinking about it a bit more, I think Nim, Julia, (and maybe Crystal) are like C#, in that they are not as general as Go / D / Swift.
You don’t have a Foo* type as well as a Foo type, i.e. a layout that is orthogonal to whether it’s used as a value or a reference. Instead, Nim apparently has value objects and reference objects. I believe C# has “structs” for values and classes for references.
I think Hejlsberg was hinting at this category when saying Go wins a bit on expressiveness, and it’s also “as close to native as you can get with GC”.
I think the reason Go’s model is uncommon is that it forces the GC to support interior pointers, which is a significant complication (e.g. it is not supported by WASM GC). Go basically has the C memory model, with garbage collection.
I think C#, Julia, and maybe Nim/Crystal do not support interior pointers (interested in corrections)
Someone should write a survey of how GC tracing works with each language :) (Nim’s default is reference counting without cycle collection.)
Yeah that’s interesting. Julia has a distinction between struct (value) and mutable struct (reference). You can use raw pointers, but safe interior references (to an element of an array, for example) consist of a normal reference to the (start of the) backing array plus the index.
I can understand how in Rust you can safely have an interior pointer as the borrow checker ensures a reference to an array element is valid for its lifetime (the array can’t be dropped or resized before the reference is dropped). I’m very curious - I would like to understand how Go’s tracing GC works with interior pointers now! (I would read such a survey).
Ok - Go’s GC seems to track a memory span for each object (struct or array), stored in a kind of span tree (interval tree) for easy lookup given some pointer to chase. Makes sense. I wonder if it is smart enough to deallocate anything dangling from non-referenced elements of an array / fields of a struct, or if it just chooses to be conservative (and if so, do users end up accidentally creating memory leaks very often)? What’s the performance impact of all of this compared to runtimes requiring non-interior references? The interior pointers themselves will be a performance win, at the expense of using an interval tree during the mark phase.
https://forum.golangbridge.org/t/how-gc-handles-interior-pointer/36195/5
It’s been a few years since I’ve written any Go, but I have a vague recollection that the difference between something being heap- or stack-allocated was (sometimes? always?) implicit, based on compiler analysis of how you use the value. Is that right? How easy is it, generally, to accidentally make something heap-allocated and GC’d?
That’s the only thing that makes me nervous about that as a selling point for performance. I feel like if I’m worried about stack vs heap or scoped vs memory-managed or whatever, I’d probably prefer something like Swift, Rust, or C# (I’m not familiar at all with how D’s optional GC stuff works).
Yes, that is a bit of control you give up with Go. Searching for “golang escape analysis”, this article is helpful:
https://medium.com/@trinad536/escape-analysis-in-golang-fc81b78f3550
$ go build -gcflags "-m" main.go
.\main.go:8:14: *y escapes to heap
.\main.go:11:13: x does not escape
So the toolchain is pretty transparent. This is actually something I would like for the Oils Python->C++ compiler, since we have many things that are “obviously” locals that end up being heap allocated. And some not so obvious cases. But I think having some simple escape analysis would be great.
Why did you leave JS/TS off the list? They seem to have left it off too and that confuses me deeply because it also has everything they need
Hejlsberg said they got about 3x performance from native compilation and value types, which also halved the memory usage of the compiler. They got a further 3x from shared-memory multithreading. He talked a lot about how neither of those are possible with the JavaScript runtime, which is why it wasn’t possible to make tsc 10x faster while keeping it written in TypeScript.
Yeah but I can get bigger memory wins while staying inside JS by sharing the data structures between many tools that currently hold copies of the same data: the linter, the pretty-formatter, the syntax highlighter, and the type checker
I can do this because I make my syntax tree nodes immutable! TS cannot make their syntax tree nodes immutable (even in JS, where it’s possible) because they rely on the node.parent reference. Because their nodes are mutable-but-typed-as-immutable, these nodes can never safely be passed as arguments outside the bounds of the TS ecosystem, a limitation that precludes the kind of cross-tool syntax tree reuse that I see as the way forward.
Hejlsberg said that the TypeScript syntax tree nodes are, in fact, immutable. This was crucial for parallelizing tsgo: it parses all the source files in parallel in the first phase, then typechecks in parallel in the second phase. The parse trees from the first phase are shared by all threads in the second phase. The two phases spread the work across threads differently. He talks about that kind of sharing and threading being impractical in JavaScript.
In fact he talks about tsc being designed around immutable and incrementally updatable data structures right from the start. It was one of the early non-batch compilers, hot on the heels of Roslyn, both being designed to support IDEs.
Really, you should watch the interview https://youtu.be/10qowKUW82U
AIUI a typical LSP implementation integrates all the tools you listed so they are sharing a syntax tree already.
It’s true that I haven’t watched the interview yet, but I have confirmed with the team that the nodes are not immutable. My context is different than Hejlsberg’s context. For Hejlsberg, if something is immutable within the boundaries of TS, it’s immutable. Since I work on JS APIs, if something isn’t actually locked down with Object.freeze it isn’t immutable and can’t safely be treated as such. They can’t actually lock their objects down because they don’t completely follow the rules of immutability, and the biggest thing they do that you just can’t do with (real, proper) immutable structures is keep a node.parent reference.
So they have this kinda-immutable tech, but those guarantees only hold if all the code that ever holds a reference to the node is TS code. That is why all this other infrastructure that could stand to benefit from a shared standard format for frozen nodes can’t: it’s outside the walls of the TS fiefdom, so the nodes are meant to be used as immutable but any JS code (or any-typed code) the trees are ever exposed to would have the potential to ruin them by mutating the supposedly-immutable data
To be more specific about the node.parent reference: if your tree is really, truly immutable, then to replace a leaf node you must replace all the nodes on the direct path from the root to that leaf. TS does this, which is good.
The bad part is that all the nodes you didn’t replace then have chains of node.parent references that lead to the old root instead of the new one. Fixing this with immutable nodes would mean replacing every node in the tree, so the only alternative is to mutate node.parent, which means that 1) you can’t actually Object.freeze(node) and 2) you don’t get all the wins of immutability, since the old data structure is corrupted by the creation of the new one.
See https://ericlippert.com/2012/06/08/red-green-trees/ for why Roslyn’s key innovation in incremental syntax trees was actually breaking the node.parent reference by splitting into red and green trees, or as I call them, paths and nodes. Nodes are deeply immutable trees and have no parents. Paths are like an address in a particular tree, tracking a node and its parents.
You are not joking, just the hack to make type checking itself parallel is well worth an entire hour!
Hm yeah it was a very good talk. My summary of the type checking part is
That is, the translation units aren’t completely independent. Type checking isn’t embarrassingly parallel. But you can still parallelize it and still get enough speedup – he says ~3x from parallelism, and ~3x from Go’s better single-core perf, which gives you ~10x overall.
What wasn’t said:
I guess this is just because, empirically, you don’t get more than 3x speedup.
That is interesting, but now I think it shows that TypeScript is not designed for parallel type checking. I’m not sure if other compilers do better though, like Rust (?) Apparently rustc uses the Rayon threading library. Though it’s hard to compare, since it also has to generate code
A separate thing I found kinda disappointing from the talk is that the definition of TypeScript is literally whatever the JavaScript code does. There was never a spec and will never be one. They have to do a line-for-line port.
There was somebody who made a lot of noise on the Github issue tracker about this, and it was basically closed “Won’t Fix” because “nobody who understands TypeScript well enough has enough time to work on a spec”. (Don’t have a link right now, but I saw it a few months ago)
I’m not sure if other compilers do better though, like Rust (?) Apparently rustc uses the Rayon threading library.
Work has been going on for years to parallelize rust’s frontend, but it apparently still has some issues, and so isn’t quite ready for prime time just yet, though it’s expected to be ready in the near term.
Under 8 cores and 8 threads, the parallel front end can reduce the clean build (cargo build with -Z threads=8 option) time by about 30% on average. (These crates are from compiler-benchmarks of rustc-perf)
Why is the sharding in 4 parts, and not one shard per CPU? Even dev machines have 8-16 cores these days, and servers can have 64-128 cores.
Pretty sure he said it was an arbitrary choice and they’d explore changing it. The ~10x optimization they’ve gotten so far is enough by itself to keep the project moving. Further optimization is bound to happen later.
I guess this is just because, empirically, you don’t get more than 3x speedup.
In my experience, once you start to do things “per core” and want to actually get performance out of it, you end up having to pay attention to caches, and get a bit into the weeds. Given just arbitrarily splitting up the work as part of the port has given a 10x speed increase, it’s likely they just didn’t feel like putting in the effort.
Can you share the timestamp to the discussion of this hack, for those who don’t have one hour?
I think this one: https://www.youtube.com/watch?v=10qowKUW82U&t=2522s
But check the chapters, they’re really split into good details. The video is interesting anyway, technically focused, no marketing spam. I can also highly recommend watching it.
Another point on “why Go and not C#” is that, he said, their current (TypeScript) compiler is written in a highly functional style; they use no classes at all. And Go is “just functions and data structures”, where C# has “a lot of classes”. Paraphrasing a little, but that’s roughly what he said.
They also posted a (slightly?) different response on GitHub: https://github.com/microsoft/typescript-go/discussions/411
Acknowledging some weak spots, Go’s in-proc JS interop story is not as good as some of its alternatives. We have upcoming plans to mitigate this, and are committed to offering a performant and ergonomic JS API.
Yes please!
Does this mean that the first frame of Doom can now be rendered in only ~1.2 days?
For folks lacking context, this is a reference to this video: https://youtu.be/0mCsluv5FXA where the creator emulated DOOM using just TS types. It took 12 days to render the first frame of the game.
The author of that project replied in the comments.
The thing that bottlenecks Doom (like 60% of total time spent by the type checker) is serializing the multi-megabyte-type to a string, which I bet is going to be much faster than 10x – because it’s not something a typical typecheck has to do at all. But honestly, even if it’s just 10x (look at us… “just” 10x! what a dream today is!!!), that’s still gonna get it down to sub-1-day for the first frame, more than likely.
Another one in the long list of JavaScript tools that ditched JavaScript as their implementation language for performance reasons. Hopefully this is more easily usable by the time I have to work on a NodeJS project again, because the performance improvement numbers look incredibly promising.
This begs the question: why write JavaScript on the server if Go is there?
Watching this industry choose js in a lot of places they don’t have to (i.e. anywhere but the browser) has been strange to see.
Single language stacks are awesome to work in. That’s why I write my frontends in Rust, but I understand TS devs going the other way.
It’s such a bad language and ecosystem. Typescript barely improves anything there.
My mind is still boggling at Microsoft rewriting their TypeScript app in Go for a 10x speedup while people still argue that TypeScript should be used for anything (besides the frontend) instead of Go. And I think Go is an offensively bad language; there is no excuse for its error handling. But the same goes for JS (UnhandledPromiseRejections on the floor), so that’s at least a wash. On the other hand, the Bun folks are so talented and putting so much effort into the JS ecosystem that it’s becoming better, but I don’t know why they’re going through the effort for server-side JS.
Ok, question asked. Why write JS on the server when I could pick Java/PHP/Elixir/Go/Rust/Python/Ruby/C#/Zig/OCaml/Crystal/Nim/Perl/Kotlin/Scala/Lua/Haskell/Clojure?
I think some people are aiming for a single language as a stack. Because JS seems to not be going away anytime soon and there are so many backend languages, people were/are trying to aim for JS on the server. There are many backend choices but only one frontend choice. Therefore, to get one language, and end-to-end types, JS on the server. Yes, I understand JS avoidance and all the arguments against. Yes, I rolled my eyes when the server was discovered again.
If I question why I have two languages in my app then people move the goal posts and reduce app features. “I can just concat html text to the client using app.pl in /cgi-bin”. Sure, you always have had that option, that’s not what I mean. I mean for a certain size/complexity of application. I mean, just as one benefit or pro in the trade-off, if I have Go types they don’t go to my client. Or I have to / want to have some contract layer to sync the two. So I end up with two languages and some contract between them. In theory, you don’t have that with trpc/typedjson/tanstack/etc etc. Because your types are full-stack.
So when people are talking about Go replacing Typescript this is still Typescript dev. It’s a tool written in Go to write/check/build Typescript. If you wanted to avoid NodeJS, you would have to look at things like Deno or Bun.
There are many backend choices but only one frontend choice.
With so many languages taking on WASM targets, I think that’s becoming less true. And even before that, there are quite a number of compilers targeting JS. Of course there are drawbacks, and these approaches aren’t always practical for every web front-end project, but I do think “only one frontend choice” is overstating the case.
Fair enough. Currently wasm still goes through JS for DOM access – see “There are future plans …”. As a dev, you probably don’t code against JS so maybe JS is still technically involved but not really the point (although still one language only in the frontend currently – for DOM, what I consider to be the majority of apps, I guess I should have been clear).
The real gate or friction is, you still need to know about web apis. So, maybe people have avoided learning web APIs exclusively because of javascript? But I don’t think so. Maybe I’m wrong and there will be an explosion of backend-focused people making user interfaces. Hopefully not at the UX of “it works”.
I think wasm is fine/nice/great for tight loops and canvas use cases. I don’t see it for general stuff that also requires CSS, but I guess we’ll see. If wasm is an enabler, then there should be lots of sites made in wasm. These sites should look good and work well for users if it’s easy to do and aligns with their backend job role and backend interests.
I just put together an internal mini-app using Marimo, which leans on Pyodide to export a folder full of static assets. I was impressed at how simple and smooth it was. I didn’t need to touch CSS or HTML, but the affordances are there. I wouldn’t try to build a high-traffic commercial front-end this way (yet!), but it was an eye-opening experience.
I think the reality is that most people are asking that same question and coming to that same conclusion. The theory of “frontend devs can own the API layer” hasn’t really played out as well as people had hoped and I know plenty of JS developers who are just as happy to write Go if it comes to it anyways.
Echoing and consolidating what others have said.
The C# author chose Go for this project rather than C#. It’s not because Go is “better”, but because it’s better for tsc. He said that C# is fundamentally a bytecode-first language and he doesn’t consider its AOT capabilities to be that mature. So essentially he thought binaries built by Go would be a better deployment target due to better (in his opinion, not mine) cross-compilation support. He also mentioned that he considers low-level memory/struct layout easier to work with in Go, and presumably he wanted that in this project.
In a way, Go won because it was a syntactically similar, natively compiled, garbage-collected language with the best cross-compilation story.
Rust seems to have been ruled out because it either would have required reworking the object graph to comply with Rust’s ownership rules (believed to be harder than line-by-line transliteration) or they would have had to GC<Box<T>> all the things. I presume they ruled that out because it was ugly and painful to work with, but I don’t know if that’s true.
Despite how good JavaScript JITs can be, they don’t shine at the kind of polymorphic graph analysis compilers do. If everything were being manipulated as a big byte array, V8 would probably have been more competitive.
So the tl;dr is that ultimately Go was the closest 1:1 transpile target that met their needs.
Maybe this will revive the idea that TypeScript tooling should run the typechecker before doing anything else (like linting, running tests, etc.). It feels very backwards that you can make a bundle with vite or whatever without the code being in a compilable state.
I mean, most bundlers or compilers don’t automatically run a linter or force all the tests to pass before compiling the code. It’s surprisingly convenient being able to separate type checking from whether the code runs or not. It makes it very easy to make a change locally and test it out, without having to update the entire codebase based on the change you’ve made. In most cases, you need to do compilation to get your code to run in the first place, but Typescript is lucky that type checking can be decoupled in this way.
Obviously everything needs to type check before ending up in production, but this is the same as tests, linting, and so on.
I love the comments from their GitHub discussion on “Why Go?”.
One of the top language engineers in the world makes a decision on which language to use.
Randos on GitHub:
It’s even worse: not just “one of the top language engineers in the world” but specifically architect of both the language being ported and architect of the language they’re saying it should be ported to.
The fucking presumption, the unmitigated arrogance of some people, woof.
So, in fact, Go is a sort of semantically faster JS (given its duck-typed interfaces and reflection) compared to C/Rust, for a large, rather conservative (no evals, no method_missing and friends) codebase.
Hang on… by “native” they mean “in Go”? Interesting choice!
I imagine it’s easier to port their codebase when they don’t have to deal with adding memory management. Concerns like ownership and reference cycles can affect the way you design the data structures, and make it difficult to preserve an architecture that wasn’t designed that way.
I’m kind of hoping that as a result of this project they come up with a TS-to-Go transpiler. That would make TS close to being my dream language.
Hejlsberg addresses the question of a native TypeScript runtime towards the end of this interview https://youtu.be/10qowKUW82U?t=3280s
He talks a bit about the difficulties of JavaScript’s object model. If you implement that model according to the JavaScript spec in a simple manner in Golang, the performance will be terrible. There are a lot of JS performance tricks that depend on being able to JIT.
What might be amusing is an extra-fancy-types frontend for Golang, that adds TypeScript features to Golang that the TypeScript developers want to use when writing tsgo.
He also mentions the syntax-level TS-to-Go transpiler they wrote; I don’t know the timestamp though.
I’m surprised that a group inside Microsoft that’s presumably led by the creator of C# (author of the post) chose Go. Not because I think C# would have been better for a TypeScript compiler, but because I would have guessed C# AOT would have been the default (even if just for intellectual property reasons) and they had good reasons to use something else.
Did they prefer Go’s tooling? Was it Go’s (presumably) smaller runtime? Maybe just Go is more mature for AOT (since it was always AOT)?
I was slightly surprised as well. They may have been influenced by esbuild (see Evan Wallace’s summary on why he went with Go over Rust). They may even be reusing some code from or integrating with esbuild in some way, though it doesn’t seem likely to me. My personal preference would lean toward Rust for something like this but I can see why they’d use a native GC’d language.
I think if they’d had the experience and time they might have chosen Rust, but they wanted to deliver something in under a year, with no prior Rust knowledge, for performance gains now.
Yeah, given the constraints, prior art, and outcome, sounds like a choice that worked out well. And I’m not one to look a gift horse in the mouth—I’m very pleased to see an official native TS compiler.
Is this the first big Go project at Microsoft? I had no idea they could choose Go, I would assume that it just wouldn’t be acceptable, because they wouldn’t want to unnecessarily legitimize a Google project.
Their web browser runs on Chromium, I think it’s a little late to avoid legitimizing Google projects
They had already lost the web platform by the time they switched to Chromium.
When I think of MS and “developers, developers, developers”, I was led to believe that they’d have more pride in their own development platforms.
I wonder if the switch to Chromium, the embrace of Linux in Azure, the move from VS to VSCode, any of these, would have happened under Ballmer or Gates. I suspect Gates’ new book is like Merkel’s: retrospective but avoiding the most controversial questions: Russian gas and giving up dogfooding. Am I foolish to expect a more critical introspection?
I think corporate open source is now “proven”, and companies don’t worry so much about this kind of thing.
Google has used TypeScript for quite a while now (even though they had their own Closure compiler, which was not staffed consistently, internally)
All of these companies are best thought of as “frenemies” :) They cooperate in some ways, and compete in others.
They have many common interests, and collaborate on say lobbying the US govt
When Google was small, and Microsoft was big, the story was different. They even had different employee bases. But now they are basically peers, and employees go freely back and forth
Microsoft has actually maintained a security-hardened build of Go for a while now: https://devblogs.microsoft.com/go/
I can see a blog post saying they made changes for FIPS compliance, which is usually understood to be for satisfying government requirements and not a security improvement. I can’t see a summary of any other changes.
MS is a pretty big company, so I wouldn’t be surprised if they have a ton of internal services in Go. Plus with Azure being a big part of their business, I doubt they care what language people use as long as you deploy it on their platforms.
Can confirm that some internal services were written in go as of a couple years ago. (I worked for Microsoft at the time.) I didn’t have a wide enough view to know if it was “a ton,” though.
I’m too lazy to look up a source but some of the error messages I’ve seen in azure make it clear Microsoft uses go to build their cloud.
Message like this one:
dial tcp 192.168.1.100:3000: connect: timeout
Keep in mind Microsoft also maintains their own Go fork for FIPS compliance. I don’t know exactly their use case, but I can tell you at Google we did the same thing for the same reason. I think the main FIPS-compliant Go binaries being shipped were probably GKE/k8s related.
(Edit removed off topic ramble, sorry I have ADHD)
dial tcp 192.168.1.100:3000: connect: timeout
Could this message just be coming from Kubernetes?
You’re right. I might have run across this in a Kubernetes context, since it was 6-7 years ago. I did a quick search just now and all I saw, other than Kubernetes, was that some Azure binaries like azcopy use Go.
If it’s not possible to rewrite it in TypeScript in a way that is as fast as the Go version, that really sheds a bad light on V8 and its performance, and begs the question of why you should write any non-browser code in TypeScript.
If it’s not possible to rewrite it in Go in a way that is as fast as the assembly version, that really sheds a bad light on Go and its performance, and begs the question of why you should write any code in Go.
I’m pretty sure it’s not possible to rewrite it in assembly in a way that makes it 2 times faster than the Go version. Also, Go code is as easy to write as TypeScript code.
Software development is about finding a sweet spot, and between different languages that are about as easy to write, a 10x performance difference is pretty meaningful imo.
As discussed at https://youtu.be/10qowKUW82U?t=2772, one of the main limitations of v8/JavaScript is that you cannot share objects across threads. You also don’t have any control over the memory layout of objects.
If you are in a compute-heavy domain those are going to be a real bummer, but if you are in an io-heavy domain that doesn’t see as much benefit from sharing then you’ll be plenty fast.
This story is similar to python, and also why lots of nodejs and python projects see great utility in linking out to compiled languages for various tasks.
I wonder what would have happened if they had made the CLI into a daemon. A major issue with JS for CLIs is that the JIT is giving you almost nothing. This is sort of a worst case scenario and isn’t necessarily representative of, say, server performance. That said, I expect Go to basically stomp all over JS even there, just pointing out that this is truly worst-case stuff for JS.