Why we need lisp machines
34 points by veqq
With lisp machines, we can cut out the complicated multi-language, multi library mess from the stack, eliminate memory leaks and questions of type safety, binary exploits, and millions of lines of sheer complexity that clog up modern computers.
Citation needed.
Also, Lisp isn’t one language … so you’re going to have “complicated systems” that are a mix of Scheme, Common Lisp, Clojure, Racket, Kernel, and what have you
Within Scheme, you’ll have Guile Scheme, femtolisp (a scheme used to bootstrap Julia), etc.
Lisp isn’t one language
‘Lisp’ without qualifications means Common Lisp, the standardised inheritor of the Lisp tradition.
It definitely has advantages when it comes to binary exploits, memory safety and dynamic type safety. I don’t know that it really helps with memory leaks (it might make these more common), multiple libraries or the sheer complexity of too much code (it has better abstraction capabilities than popular languages, so maybe).
Citation needed.
So, yeah. There’s a ton of code in Debian. How does moving to a LISP machine fix that? Why would the LISP machine not have millions of lines of code? How would that eliminate memory leaks, type safety, exploits? That is the citation needed. These are big, sales-pitch-esque claims with little to back them up.
How does moving to a LISP machine fix that?
I’d suggest starting here:
https://www.loper-os.org/?p=69
And if you’re interested, reading the whole blog.
“Lisp is bad at ensuring your project doesn’t rely on a single critical person” seems like a point against it to me. The model of a heroic hacker who shoulders all the burdens is unhealthy both in corporate and non-corporate settings.
Also, for that matter, doesn’t the argument in that article only apply in a world where Lisp programming knowledge is rare? If we all switched to Lisp machines and it became the standard language(s) for development then Lisp developers become fungible.
The “bedrock” article in particular is really relevant to the topic of Lisp machines.
The development of clever compilers has allowed machine architectures to remain braindead. In fact, every generation of improvement in compiler technology has resulted in increasingly more braindead architectures, with bedrock abstraction levels ever less suited to human habitation.
Dare to imagine a proper computer - one having an instruction set isomorphic to a modern high-level programming language. Such a machine would never dump the programmer (or user) by surprise into a sea of writhing guts the way today’s broken technologies do. Dare to imagine a computer where your ideas do not have to turn into incomprehensible soup before they can be set in motion; where there is, in fact, no soup of any kind present in the system at all. It would be a joy to behold.
I posit that a truly comprehensible programming environment - one forever and by design devoid of dark corners and mysterious, voodoo-encouraging subtle malfunctions - must obey this rule: the programmer is expected to inhabit the bedrock abstraction level. And thus, the latter must be habitable.
As much as I am a big proponent of look beyond UNIX, I must say, the quip about how “Lisp is still #1 for key algorithmic techniques such as recursion and condescension.” has held up unfortunately well over the years.
UNIX was designed for a world that no longer exists, but so were the Lisp machines. Just to point out a few things: a global address space is a no-go in a Spectre world; CPUs have moved far past the point at which you could microcode them to run Lisp at acceptable performance; being able to change system functions during runtime is a security nightmare; and most importantly, we now have 50 years of real software we need to actually be able to run. I think Lisp machines are a wonderful artifact of history, well worth learning about, and good ideas tend to return in new incarnations on their own, but I’m afraid nothing today looks like a Lisp machine, for good reasons.
Thank you for putting into succinct words my own reaction to the article.
I think for some people, it’s important to be clued in to the secret Right Way that history overlooked… whether it’s Lisp Machines or, in the other direction, Plan 9 (and a religious obsession with The Unix Philosophy). I’m definitely susceptible to that need, and I’ve valorized a lot of technological branch lines over the years, often based on the same level of understanding as the author: I read some accounts by the technology’s biggest boosters and accepted them uncritically.
Every programmer has dreamed of how great it would be if everyone used one (their favorite) programming language.
Yes, it would be nice and simple, and it would eliminate many IT problems… but it will never happen. Heterogeneous IT systems are the reality, the norm. You’d sooner get all Americans to use the metric system than make this happen.
Every programmer has dreamed of how great it would be if everyone used one (their favorite) programming language.
I don’t think that’s true. I dream of using a language that is fast enough, dynamic enough, and safe enough that it could seamlessly work at all levels. C is fast enough, not dynamic, not safe, but it does work at all levels. Python isn’t fast enough, and barely safe enough; I barely trust the CPython interpreter to handle that kind of work. Go isn’t dynamic enough, and frankly I don’t think its programmers are ambitious enough. Rust could fit the bill.
But Lisp, specifically on LISP Machines met all of those criteria, and apparently it was fantastic.
You’d sooner get all Americans to use the metric system than make this happen.
You say that like it’s a bad thing.
Having played with restored symbolics machines, they’re certainly cool, but they’re not as magical as they’re made out to be. Their code is messy, their configuration is arcane, and they’re slow.
I’m not at all convinced that, had they won out, their ecosystem wouldn’t be just as messy as UNIX’s.
While this is an exciting thought experiment, I’ve never actually used a LISP machine. If you have, mind sharing all the ways LISP machines suck? I’d love a counter-balance to this.
If you don’t like Emacs you won’t like Lisp machines :-)
Lisp of that era tends to be organized round image-based development, where you mutate the system in place. It’s very interactive, but ironically for a functional-ish language it’s a very non-functional style of building a system.
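To make that concrete, here is a minimal illustration of the style at a Common Lisp REPL (a generic sketch, not Lisp-machine-specific): redefining a function mutates the running image, and existing callers pick up the change on their next call.

```lisp
(defun greet () "hello")
(defun caller () (greet))

(caller)                        ; => "hello"

;; Redefine GREET in the live image. CALLER is not recompiled,
;; but its call goes through GREET's function cell, so the next
;; call sees the new definition.
(defun greet () "hello, again")
(caller)                        ; => "hello, again"
```

The whole system is built up by a long sequence of such in-place mutations, which is what makes the resulting image interactive but hard to reproduce from scratch.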
I feel like there’s a major dichotomy between functional language ecosystems, with lisp-y stuff tending towards the “image-based development” (or more cynically, write-only languages), a lot of emphasis on mutating systems in place… and then you get the ML-style “we can have something performant and expressive and correct”, with more of an emphasis on writing systems that, in a word, don’t break.
It’s really interesting that these two get placed together.
Image-based development seemed to be a popular thing back then. Smalltalk also had it.
To anyone who’s curious about what that’s like, I suggest spending some time inside a Smalltalk environment. Squeak or Cuis are relatively modern and easy to run. They are the heirs of this style. The fact that they don’t go all the way down to the metal, but run on a VM that’s outside the system, is just a pragmatic implementation strategy.
People often miss the point of Smalltalk, in my opinion. Sure, it’s a language like any other textual programming language. But it’s also a “system”, as in an entire computing environment: An OS and IDE and “applications” all sharing code inside a live image. Dramatically more accessible and dramatically less code overall. Very mind-opening, when coming from a Unix mentality.
mind sharing all the ways LISP machines suck?
If you have not read The UNIX-HATERS Handbook [PDF] then you really really should.
It will explain a great deal.
Re the title: the case explains a lot. It is a handy book compiled from a mailing list called UNIX-HATERS. It is not a handbook on how to hate Unix.
They were replaced because general purpose CPUs caught up on speed, then exceeded the speed of LISP machines, and became cheaper at the same time. It’s similar to what happened with Connection Machine—the first models were bespoke CPUs, with later ones being based around existing CPU architectures (SPARC if I recall correctly).
Everything worked in a single address space, programs could talk to each other in ways operating systems of today couldn’t dream of.
Everything in a single address space is not actually a design goal of any system I know today. Much of the history of computing has been providing ways of isolating pieces of the system: process boundaries, user permissions, Linux kernel namespaces, CHERI…
Inside those namespaces, the big things that come to mind are the ability to hook and redirect calls and to call back and forth between languages across an interface. Unix systems are very poor at this, yes. If you want to make this comparison, it should be to Windows, where hooking and redirecting code is a standard thing and there’s been a huge amount of effort on actual interop and interfaces (COM and .NET).
With lisp machines, we can cut out the complicated multi-language, multi library mess from the stack, eliminate memory leaks and questions of type safety, binary exploits, and millions of lines of sheer complexity that clog up modern computers.
It sounds like what you really want is the Midori project that Microsoft cancelled. Multi-language, multi-library is a fact of life, and any system without a plan to handle that is a toy. Memory leaks, type safety, and binary exploits are more a fact of the inadequate languages like C used to implement so much of the software and the hardware with little support for isolation.
and proper windowing GUI’s.
I think anyone trying a Lisp machine GUI or any of the really early systems like an Alto will find that this is a romanticization.
Windows, where hooking and redirecting code is a standard thing
Most of the examples I have read about describe this kind of thing being used for malicious or subversive hacking. I don’t think externally inserted hooks are necessary for cross-language interop, but they are useful for debugging or reverse engineering. Unfortunately debug interfaces tend to be massive back doors, and because Windows doesn’t have effective sub-user security partitions, free-for-all hooking has turned into a mess.
Trad unix in theory isn’t much better at sub-user security, but its hooking mechanisms are easier per-process and hard to apply system-wide or session-wide, so they don’t get used much by legit software or abused much by malicious software.
For cross-language and cross-process interop, I would like something more like AppleScript, where programs have RPC endpoints that can be used for scripting or accessibility. Less injecting adversarial code, more co-operating processes.
I don’t think externally inserted hooks are necessary for cross-language interop
No, but the advice system in Lisp was one of the big means of interacting between hunks of software. That’s what I was referring to, but I should have been more clear.
Everything in a single address space is not actually a design goal of any system I know today. Much of the history of computing has been providing ways of isolating pieces of the system: process boundaries, user permissions, Linux kernel namespaces, CHERI…
Keep in mind that Lisp is a high level language that doesn’t let you just use pointers left and right. You’d only be able to access something if you received it somehow, or if it’s globally accessible. Which is basically what CHERI gives you, just with language enforcement rather than hardware enforcement.
As @david_chisnall likes to say, isolation is easy, safe sharing is hard. Separate memory spaces is an easy way to isolate things, but it doesn’t do anything for safely sharing things. I know a few people who are interested in single address space OSs that rely entirely on CHERI to isolate things. Then safely sharing something is as easy as handing over a capability / pointer.
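As a toy illustration of the language-enforcement point (a sketch, with hypothetical names): in Lisp, the only way to reach a piece of state is to have been handed a reference to it, much like holding a capability.

```lisp
;; Capability-style sharing via lexical scope: BALANCE is reachable
;; only through the closure you were handed; there is no pointer
;; arithmetic with which to forge a reference to it.
(defun make-account (balance)
  (lambda (amount)
    (incf balance amount)))

;; Whoever receives *DEPOSIT* can use the account; nobody else can.
(defvar *deposit* (make-account 100))
(funcall *deposit* 25)          ; => 125
```

CHERI gives you roughly the same unforgeability property for raw pointers, enforced by hardware tags rather than by the language runtime.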
TL;DR UNIX isn’t good enough anymore and it’s getting worse. We need a new system […] a lisp machine.
While it’s really hard to argue that UNIX/Linux, used and developed over decades, is “getting worse” in some way, I fail to see why Lisp machines are advertised as a new beginning.
The only argument I can think of after re-reading the article is somewhat separate: Emacs’ long lifespan and relatively high-quality codebase may be a sign that huge Lisp codebases degrade less. Otherwise any new approach would be good (and most of them soon dead) for “exploring new ideas in new ways”. Have I missed any other valid points of the article?
While it’s really hard to argue that UNIX […] is “getting worse” in some way,
Is it? This made me very surprised.
You do not see that it’s getting worse over the decades?
It is getting worse in terms of being ever bigger, ever more complicated, with more options in more components with more config files in more formats in more places.
So, there are tools to automate the config. But this being Unix, there are lots of them. Some need agents, some don’t. (Puppet, Chef, etc.) Some need a particular programming language, some don’t. (Ansible, Salt, etc.) Some need their own additional config file format, some don’t. (Anyone for Yaml?) Some sit on existing packaging systems, some replace it. (Nix, Guix, Spack, etc.) Some work by deploying virtual machines, some don’t. (OpenStack, OpenShift, Landscape, etc.) Some use containers instead, some don’t. And which format of container? (Docker, OCI, runc, Incus, LXD, Zones, Jails?) Some blend containers and VMs, some don’t. (LXD, Incus, Firecracker, Lambda, Serverless, Kata Containers, etc.)
Some replace the entire packaging system and directory hierarchy, and make you use a new one. Their creators claim this solves the whole problem. Only this being Unix, there’s a choice of packaging description language. Some use a unique one, some use an existing one. (Nix, Guix.) They are unaware of rival efforts that fix other problems. (Spack, Gobo.)
Some are new packaging systems that sit on top of existing ones on local user-facing kit. (Flatpak, Snap, AppImage, 0install.)
The result is OSes that take tens of gigs, need hundreds of megs of updates every single day, some have multiple release families with separate “stable” and “testing” and “unstable” or “rolling release” models. Some are just one, or the other. If they are only stable then a 3rd party will make a rolling release from the testing pipeline (siduction) or they’ll do it themselves (Tumbleweed).
Users just want to install something and have it work for a decade or two then replace the computer, as we all did in the 1970s, 1980s, and early 1990s. But that entire quarter to a third of a century is now discarded and forgotten as if it were a dream.
Give developers a fat pipe for downloads and the result is software that is never finished, constantly updated, and like cancer is constantly metastasizing into a thousand new variants, none of which do the job very well.
You haven’t noticed this? You think how it is now is just fine and dandy?
Wow.
What you’re describing are social problems first and technical problems a (distant) second. What about a lisp machine would cause people to not engage in the endless NIH-isms that lead to mass amounts of redundant tools? Why would a lisp machine cause developers to stop the constant updating of their software (rather than doing as NASA did and remotely patching running code, not merely code at rest)?
The answer is nothing. Nothing about a lisp machine solves those social problems. In fact, I’d wager that if we magically had a competitive lisp machine architecture tomorrow the first thing written would be a WINE for *nix so we could bring all the old shit with us.
it’s really hard to argue that UNIX is “getting worse” in some way
I think I understand your opinion and agree to some point. But IMHO UNIX humongousness is just a result of popularity and widespread use, a generality tax, and also the fee for the freedom to code your own app. And yes, Unix and C are the ultimate computer viruses — but it’s really hard to live in the ecosystem without viruses.
I disagree. If I may say so this strikes me as the same argument that Windows is so very prone to malware because Windows is so popular.
On the one hand, this is a bad argument because we can’t falsify it without a time machine or dimensional portal to other universes where Windows is a thing but not so popular. Falsification is how scientific debate works. It matters.
On the other hand, the argument ignores actual real-world evidence: Windows treats anything with particular filenames as executable, or as other content and so automatically tries to process it with the relevant handler. That’s why we have compromised image files, and music files, and help files and so on. This is an aspect of the OS’s design which the OS does not share with all other OSes, so it is a legitimate target for examination and discussion.
Linux – and let’s be honest here, this isn’t about AIX or Solaris or any (or all) of the BSDs, it’s specifically about Linux – is not all that popular. It is a rounding error on the desktop. It’s big on servers, yes, but then again, server management is moving towards deploying custom-built VMs and containers on demand, and then destroying them again afterwards – and that, I submit, is a direct consequence of the difficulty of maintaining and upgrading a machine post-install. It’s easier to blow away an instance and build and deploy a new one than it is to fix it, or to update it, and this should not be the case.
But also considering Linux, we can’t dismiss this as a server issue. Sure it’s a few percent of general-purpose desktop OSes, but there are billions of Android devices and they don’t have these problems. They have other totally different problems of their own, partly due to the Arm market, but they don’t inherit the issues of Linux on other platforms, which shows us that these problems spring from the existing market on those platforms and how vendors handle the platforms.
Then again, there are millions of Steam Decks out there, and hundreds of millions of Chromebooks, and they are Linux boxes too. ChromeOS solves the app-format problem: no apps. Easy. There’s an Android runtime on branded Chromebooks but I don’t own one. ChromeOS Flex just has Linux containers, but that works too. My wife’s COS Flex laptop has VLC on it for watching films and it Just Works™.
If a problem can be fixed so that it just does not affect one Linux platform (say, phone Linuxes, or dedicated-web-client laptops) then it shows it can be fixed and that means that the other platforms could fix it if someone exerted some control.
Always remember: the concept of meritocracy was coined satirically. It was not proposed as a good idea. It was a pejorative, a term of condemnation of a bad idea.
It is not a good way to run anything much, especially not a software industry.
(I (do (not (want (to (learn (lisp )))))))
yes { its( so( different( from { everything( else()) } ))) }
I want the world to resurrect Dylan and make this (parentheses (problem)) go away.
If John McCarthy had wanted us to write s-expressions, he wouldn’t have invented m-expressions.
The fact that some people happily can do something does not mean everyone can do it, or that everyone should be forced to learn to do it, or failing that, use other tools that don’t require that talent.
I like some features of Lisp, but I find its syntax harder to read.
Of course, habit and previous experience play some role. But I am still leaning towards the idea that {}[]() helps with reading and understanding the program logic at first sight better than ()()() does.
Your username is what it is but the real place where lisp falls apart is in requiring parentheses for giving things names.
x = f(1, 2)
y = g(3, 4)
z = h(5, 6)
return calculate_thing(x, y, z)
vs
(let ((x (f 1 2))
(y (g 3 4))
(z (h 5 6)))
(calculate_thing x y z))
As an amateur programmer, I find Lisp the easiest language to read. Just look at how delightfully the let form limits the scope of its temporary variables.
Yeah. The bind macro can really simplify complicated let-forms, but its most basic form is no different from the basic let-form you show. You get used to it.
I could imagine a simpler let form that knows its arguments come in pairs of symbol, val-form, followed by body-form, but I’m thinking it would have to be pretty rigid in what it accepts.
(my-let x (f 1 2)
y (g 3 4)
z (h 5 6)
(calculate-thing x y z))
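For what it’s worth, such a form is straightforward to write as a macro; a minimal Common Lisp sketch (the name `my-let` and the restriction to a single final body form are my own assumptions, mirroring the usage above):

```lisp
;; MY-LET takes alternating SYMBOL VALUE-FORM pairs, followed by
;; exactly one body form, and expands into an ordinary LET.
(defmacro my-let (&rest forms)
  (let ((bindings (loop for (sym val) on (butlast forms) by #'cddr
                        collect (list sym val))))
    `(let ,bindings ,@(last forms))))
```

So `(my-let x (f 1 2) y (g 3 4) z (h 5 6) (calculate-thing x y z))` expands into the nested-parenthesis `let` shown earlier, which is essentially what Clojure’s `let` does with its binding vector.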
On the other hand, if you want to use the simplest possible let form, you could do:
(let (x y z)
(setq x (f 1 2))
(setq y (g 3 4))
(setq z (h 5 6))
(calculate-thing x y z))
Which is bad style, but as far as I know, not wrong.
For whatever it’s worth, some Lisps (e.g. Clojure and Janet) use one fewer level of delimiters and they use a different character for it:
(let [x (f 1 2)
y (g 3 4)
z (h 5 6)]
(calculate_thing x y z))
The willingness to use [] and {} means that these languages are a bit closer to C, JavaScript, etc. in terms of being able to match the delimiters with your eyes.
FWIW I modified a lisp to read a JSON based syntax. I built a lowcode UI in React and needed a syntax. Being able to annotate functions with metadata for the UI is pretty slick if I do say so myself.
I’ve been trying out readable lately to try and reduce my dependence on parinfer, just to see how far I can get. There’s something about the regularity of prefix notation that I like, but I totally get not wanting to be locked into a structural editor to have an ergonomic editing experience.
Lispers are famously not finishers though, so we got the technically interesting half of the project and a bunch of false starts at making the frontend ergonomic lol