Richard Feldman on new language adoption in the LLM age

17 points by jerodsanto


andyc

Hm interesting!

On Zig being well represented in Claude - I haven’t tried it, but yeah, the models keep getting better, so it wouldn’t surprise me if it’s good now, and if it keeps improving.

I saw someone say “LLMs aren’t good at Rust”, but I’d expect that to get better too.


On porting code from one language to another – there’s an interesting tradeoff between reuse and duplication, which is also mentioned here: https://crawshaw.io/blog/programming-with-llms

Quoting from that post:

There was a programming movement some 25 years ago focused around the principle “don’t repeat yourself.” As is so often the case with short snappy principles taught to undergrads, it got taken too far.

[…]

The past 10-15 years has seen a far more tempered approach to writing code, with many programmers understanding it is better to reimplement a concept if the cost of sharing the implementation is higher than the cost of implementing and maintaining separate code.

In other words, LLMs make bespoke code cheap. I think this is right, but it will ALSO be taken too far!

As with most things, I view it as a combinatorial / scaling problem. The cost of ~100K or ~1M lines of LLM-translated code in an application is not zero … in terms of the humans who have to work with it.

A future where, say, the Go, Zig, Rust, Swift, and Roc ecosystems are constantly LLM-porting each other’s code back and forth seems a little silly. We’re going to take it too far, and probably swing back to plain old API design / language design / library reuse, which are hard things …


On the larger question about new programming languages …

One way to view programming language design is as a form of compression … [1] You’re trying to express some class of programs in a syntax and semantics that is easy for both a human and a computer to understand.

And I think that is going to survive, for fundamental reasons. LLMs don’t change the “human” part – if anything, they amplify its importance!

A thought experiment: Do you expect LLMs to generate raw assembly code, making programming languages obsolete?

I claim that if you do, then you have misunderstood what LLMs are … they are language wizards, and they LIKE using different programming languages.

So my intuition is that these three things are basically the same:

  1. the program that is most compressed
    • Consider using C versus Python for the same program - the C will be longer (see the sketch after this list)
  2. the program written in the most “appropriate” programming language
  3. the language that an LLM will create the fewest bugs in, with respect to a particular problem
    • this is the language they “like”, if you want to anthropomorphize
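
To make point 1 concrete, here’s a toy sketch of my own (nothing beyond “the C will be longer” is taken from the original point): counting word frequencies takes a few lines of Python, while the same program in C would need a hash table, dynamic strings, and manual memory management – easily several times the line count.

    # Word-frequency counting: short in Python because the hash table,
    # string handling, and memory management are built in. The C version
    # would have to provide all of that by hand - much less "compressed".
    import collections
    import sys

    def word_counts(text):
        """Count how often each whitespace-separated word appears."""
        return collections.Counter(text.split())

    if __name__ == "__main__":
        # Print the ten most common words from stdin.
        for word, count in word_counts(sys.stdin.read()).most_common(10):
            print(f"{count:6d} {word}")

In that sense Python is the more “compressed” (and arguably the more “appropriate”) language for this particular problem – and plausibly also the one an LLM would write with fewer bugs.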

So if you believe that, then programming language design is still valuable, and there will be future progress in the field. (I also suspect that LLMs will “like” to develop their OWN DSLs / languages to solve problems, for the same fundamental reason of compression)

I’m not sure if anyone has written about this, but if so, I’d like to see it! (or an argument against it)


[1] Similar to LLMs themselves being a form of compression: https://bellard.org/ts_zip/

FWIW, https://oils.pub/ is a bit hedged on this question – LLMs are already extremely good at writing OSH, because it’s the most bash-compatible shell: https://pages.oils.pub/spec-compat/2025-06-26/renamed-tmp/spec/compat/TOP.html

On the other hand, you can say that “YSH is for humans” … it has more power than bash, but it’s smaller and easier to remember (or will be, once we finish disallowing the bash features that have been generalized / replaced)

I just got word that one of our contributors replaced a ~2,500-line bash script with a ~2,000-line YSH script at work, with HUMAN coworkers who didn’t know YSH beforehand. So that is a good sign!