Left to Right Programming
67 points by epidemian
I loved this article. It puts into words something i’ve felt many times about programming languages.
I think Python is one of the most egregious offenders of this idea. Not being able to just type things left to right makes its REPL experience unnecessarily frustrating.
An example i remember stumbling upon a few times: i was doing some data processing directly on the REPL. Iteratively, i built some long expression that i hoped would give me the processed items i wanted. But when evaluating it, the output was too long. How many items were there? Well, i had to press ↑ to go to the just-executed expression, ctrl+a to go to the beginning of the line, type “len(”, then ctrl+e to go to the end of the line, type “)”, hit enter and voila, the number of items.
It’s not a big deal, i know. But this kind of thing leaves a bad taste in my mouth when i know that in other similar dynamic languages like Ruby (or JavaScript, or many others i suppose) the same thing would be much easier: press ↑ and then type “.size” + enter. Nice and simple.
(An alternative way of doing the same thing is evaluating “len()” (or “.size” in Ruby), and that would require less jumping around, but maybe i didn’t know that _ trick at the time, IDK.)
I think these kinds of details matter a lot in the experience of programming. And i’m glad i now know a way to call this principle :)
I think what you say is different, and often more important, than what the original article says.
I like shell pipelines exactly for the reason of writing and debugging them left to right with reduced editing and rollback annoyances, even though suggesting possible further operations does not work that well with the shell pipeline model (well, you can autocomplete option names).
Oh, i got the formatting borked on that parenthesized sentence because of the underscores, and cannot edit anymore. I meant: evaluating len(_) (Python) or _.size (Ruby).
Changing the language is one way to go about it; changing the tools would be another.
How would you change the tools there? In ~all editors and REPLs I’ve used, you type code from left to right. I suppose a structured editor like paredit could help, but that feels like an overly complicated solution for a simple problem.
edit: i had my monitor turned 180 degrees
In ~all editors and REPLs I’ve used, you type code from left to right.
In rust-analyzer, it has hints that allow you to wrap your line in something like Some or Ok. I figure you could add this feature to LSPs for Python so that .len when accepted wraps the expression in len(expr).
I don’t deny that every past system has suffered from the same problem. It seems to me that while changing tools requires a plan to control complexity, changing every programming language just isn’t possible.
As ~k749gtnc9l3w says, that’s a little different but also valuable. Concatenative languages like Factor excel in this respect, but may not win the completion wars.
But Python’s REPL does have a convenience for this: _ represents the last-returned object. So after getting your long result, you can enter len(_).
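A minimal demonstration of how this works outside the REPL: the interactive interpreter’s default sys.displayhook saves every displayed non-None result into builtins._, which is what makes len(_) work right after a long result.

```python
import builtins
import sys

# In the interactive REPL, every displayed (non-None) result is saved
# by sys.displayhook into builtins._ before the next prompt appears.
sys.displayhook(list(range(1000)))  # prints the long list, then binds _

print(len(builtins._))  # same as typing len(_) at the REPL; prints 1000
```

In a script you would never write this; it only simulates what the REPL does for you on every evaluation.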
Yeah, i tried mentioning that on that parenthesized sentence, but got the Markdown formatting wrong (the underscores got interpreted as italics) and didn’t realize it.
With concatenative languages and direct REPL one needs to be careful about the order of arguments, with the most «complicated» one being the deepest (so that you can figure it out, and then quickly add the arguments for the next significant operation on top).
If you use _ all the time, you have significant limitations on refining: you really want to get the next operation good enough to continue from the first attempt.
Suppose you have a FILE *file and you want to get its contents. Ideally, you’d be able to type file. and see a list of every function that is primarily concerned with files. From there you could pick read and get on with your day. Instead, you must know that functions related to FILE * tend to start with f, and when you type f the best your editor can do is show you all functions ever written that start with an f. From there you can eventually find fread, but you have no confidence that it was the best choice. Maybe there was a more efficient read_lines function that does exactly what you want, but you’ll never discover it by accident.
There’s also the module-oriented approach, taken by OCaml, Odin, Roc, and I believe Elixir, too: file-related functions are always qualified with a module name File. In OCaml style, the file type is File.t, and the operation to read bytes from a file is File.read.
This is enough to make file-related functions more discoverable, because you can always find them in the File module. However it’s not enough for writing code left-to-right. Consider this:
List.len (String.split_words input)
There’s still nesting. You start by typing out the String.split_words, then the List.len. It gets nice only when you add a pipeline operator:
String.split_words input
| List.len
Then, if you type in len after the | pipeline symbol, the IDE is free to suggest any functions you can use, from any module with a function len whose first argument is a list, guiding you into the pit of success.
The advantage of this approach is the added openness of implementation: list-related functions don’t have to be tied to the list type. You can define any function in any module, and the syntax for calling it will not be any different from that of functions in the baseline List module. It is more wordy, yes, but I like that it’s more conceptually simple than having a magical self parameter.
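For illustration, this left-to-right style can be sketched in plain Python with a hypothetical pipe helper (the name pipe and the example data are my own, not from any library):

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread a value through a series of functions, left to right."""
    return reduce(lambda acc, f: f(acc), funcs, value)

text = "left to right programming"
# Roughly the shape of: String.split_words input | List.len
word_count = pipe(text, str.split, len)
print(word_count)  # 4
```

The value comes first and each subsequent function applies to the running result, which is exactly the order in which you would type (and autocomplete) the steps.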
another cool thing odin has (well, ols, the language server) is that it provides snippets that allow you to write methods that expand to a regular function call when you accept the completion
e.g. you have a variable file, you write file.re, a snippet “read” pops up in the autocomplete menu, and when you accept the snippet, your code turns into read(file, ...)
This made me think of GObject-flavored C, where functions start with the name of the struct type they operate on (mangled according to some conventions).
This is one of the reasons I love Clojure. The consistency of parameter placement in the core library (and down into community libraries) makes it easy to use the “threading” macros, changing the right-to-left nature of s-expressions to left to right pipeline style:
(-> some-obj
(assoc :foobar 1)
(update :foobar inc)
(select-keys [:a :b :foobar])
(->> coll
(map #(update % :foobar + 3))
(filter #(even? (:foobar %)))
(partition 2)
(take 5))
Your first example seems to be missing a closing parenthesis, and that (the interaction of the last paren with further additions, at least in a REPL) is why, even after defining pipeline macros in Common Lisp, I still feel that pipeline-related REPL UX is specifically better in Bash than in a fully-matched-parentheses syntax.
hah The joys of writing code comments on my phone! When I’m on my computer, I have tooling to keep everything matched, which avoids these issues.
My weakness is liking direct REPLs, not only mediated by an editor! (So some editing movements need keypresses and get annoying)
Instead, you must know that functions related to FILE * tend to start with f, and when you type f the best your editor can do is show you all functions ever written that start with an f. From there you can eventually find fread, but you have no confidence that it was the best choice. Maybe there was a more efficient read_lines function that does exactly what you want, but you’ll never discover it by accident.
I mean, yeah, autocomplete is no substitute for knowing your language and libraries. Are you expected to scroll through all the functions on file objects, or are you just hoping to accidentally stumble upon the better function?
In a more ideal language, you’d see that a close method exists while you’re typing file.read.
…how? They don’t even start with the same letter.
file.open would be a better example, as maybe your editor could give you a hint to call close. Maybe that’d be part of the docstring you see when it autocompletes. That doesn’t require it to be a method on an object, though.
I agree with the rest of the argument, though. I wonder what’d be the analogue for functions on multiple objects, the sort where OOP gets awkward. Maybe this is an argument for RPN? Forth would seem to be a perfect fit for advanced autocompletion, as you already know all the arguments when you start typing the function name. Hoogle has proven how powerful this could be.
…wow, I think this article has accidentally sold me on Forth.
I’m not sure if this will sell you or not sell you on Factor’s approach which would be kind of Forth-like, but here’s a few ways to write it:
https://re.factorcode.org/2025/08/left-to-right.html
With some useful abstractions, this becomes fairly readable:
[ [ { [ abs 1 3 between? ] [ sgn ] } 1&& ] all-same? ] count
In a more ideal language, you’d see that a close method exists while you’re typing file.read.
…how? They don’t even start with the same letter.
On languages with subject.attribute syntax and decent language support, you’d get a list of all possible methods for file when typing “file.”.
The author calls this a “more ideal” language, but i think it’s pretty basic functionality TBH.
This was in the context of
Are you expected to scroll through all the functions on file objects, or are you just hoping to accidentally stumble upon the better function?
You’d only see this if file objects don’t have too many methods defined on them.
I have thought about this a lot recently. A bit strange that RPN has a big fan club and is less error-prone according to its users, yet most languages use infix or prefix notation instead of postfix; even stack-based Uiua reads right to left like APL, unlike other stack-based languages.
Because everyone is mirroring the math education! Which follows notation choices that have been made under noticeably different constraints and optimisation pressures, but oh well.
Forth would seem to be a perfect fit for advanced autocompletion, as you already know all the arguments when you start typing the function name.
I’m an enthusiastic Factor fan, but a complication here is that the editor won’t know how many arguments you want to consume at this point.
Well you could ask what are the possible argument counts compatible with the top of the stack, and for each of them you need to check the deeper values.
I suspect that the best completion and the best left-to-right REPL refinement might conflict over the argument order though.
Yeah, I’m thinking the best completion strategy for a concatenative language is probably using the module approach others mention.
Even if you don’t use the module names in the language itself, the editor/language-server could let you browse the functions of any module/vocabulary when you type the module name, and ultimately complete just the function name.
Then you can just add arbitrary tags to defined names, and do some kind of fragment matching over them and names! … while not forgetting to filter by current applicability given the state of stack, of course.
This is one of the things I’m trying to get right in my programming language. My current plan is to allow calling top-level functions as methods as long as the first argument type matches (methods can already be called as functions). With the function argument types always annotated (only local type inference in this language) my hope is that most of the time the types will be specific enough to allow auto-completing and listing relevant functions in the IDEs while typing left-to-right.
I also have methods, but methods can be called as normal functions and normal functions can be called as methods. For example, given these definitions: (simplified)
type File: ...
File.open(path: Str) File: ...
File.close(self): ...
You can do:
let file = File.open("foo.txt")
let file = "foo.txt".open() # calls `File.open`
File.close(file)
file.close() # calls `File.close`
This is what D calls “uniform function call syntax” https://dlang.org/spec/function.html#pseudo-member
Yep, I even call this “UFCS-like” in the linked issue. (I call it “UFCS-like” because I’m not sure if it’s 100% the same as D’s UFCS)
This is one of the big downsides of Python’s embrace of duck typing.
Instead of iterable.map(...) we need to type map(..., iterable). Instead of sequence.len() we need to type len(sequence). Instead of iterable.join(" ") we need to type " ".join(iterable).
Duck typing allows for generic utilities that work based on behavior instead of based on types, but it makes a lot of the code look a bit backward.
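A runnable illustration of that backward feel, using nothing beyond the built-ins: in each case the data ends up in the middle or at the end of the expression rather than at the front.

```python
words = ["left", "to", "right"]

# The data is never the thing you type first:
lengths = list(map(len, words))  # function first, data second
count = len(words)               # function wraps the data
joined = " ".join(words)         # the separator comes first

print(lengths, count, joined)  # [4, 2, 5] 3 left to right
```

To build any of these lines at a REPL, you have to jump back to the start of an expression you have already typed.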
How is this specific to duck typing though? Ruby’s heavily duck-typed and it’s all written like you have on the left there.
Ruby uses class inheritance and modules to include functionality into objects. E.g., you include Enumerable into your class and you get a ton of useful collection methods inherited from that module. The upside of this is that the objects themselves become powerful and discoverable. You can type obj. and have auto-completion on editors/REPLs (the REPL auto-completion is very interesting, as it’s based on the interface of the live object, not static analysis of the code). But the downside is that the objects can become bloated very fast (see ActiveRecord models for an extreme example of this).
Python, in contrast, relies on much “thinner” interfaces and duck typing. For example, if you have a custom collection type, it doesn’t need to extend from any base class or interface: as long as it behaves like an iterable, you can pass it to map(), filter(), ",".join() or any other function that expects an iterable.
In short: in Ruby you can call collection.map { ... } because the collection includes Enumerable; in Python you can call map(..., collection) because the collection quacks like an iterable.
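A small sketch of the Python side of that contrast (the Letters class is made up for illustration): no base class or interface is required, only the iterator protocol.

```python
class Letters:
    """A custom collection with no base class: it only quacks like an iterable."""
    def __init__(self, text):
        self.text = text

    def __iter__(self):
        return iter(self.text)

# The generic built-ins accept it because it behaves like an iterable.
upper = list(map(str.upper, Letters("abc")))
joined = "-".join(Letters("abc"))
print(upper, joined)  # ['A', 'B', 'C'] a-b-c
```

Nothing about Letters advertises these capabilities, though, which is exactly why the editor cannot suggest map or join when you type a dot after it.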
If Python methods returned self by default, autocomplete would work.
Instead None is returned by default, which makes method chaining impossible.
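A quick demonstration with list.sort, which mutates in place and returns None, versus the expression-returning sorted built-in:

```python
nums = [3, 1, 2]

result = nums.sort()  # sorts in place...
print(result)         # ...but returns None, so nums.sort().reverse() would
                      # raise AttributeError rather than chain

# Chaining only works with expression-returning alternatives:
chained = sorted([3, 1, 2], reverse=True)
print(chained)  # [3, 2, 1]
```
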
This article encodes a kind of bias that it doesn’t spell out, namely that the values are the basic units of management in code. A flexible language like Haskell can let you pretend that is the case by allowing for a sort of pipeline operator (the pipeline operator & is in the standard library, but needs to be imported from the Data.Function module):
wordsOnLines = text & lines & map words
wordLengths = text & words & map length
result =
diffs
& filter ((||) <$> all (> 0) <*> all (< 0))
& filter (all (inRange (1,3) . abs))
& length
All of these would have completion like one would expect. After typing wordsOnLines = text & the editor can suggest all the functions that apply to strings, including lines, which breaks a string up into lines. When we get to wordsOnLines = text & lines & the editor can suggest map as one of the functions that take a list of things.
But are the values really the basic units of management in code? Wouldn’t the functions be? Take the wordsOnLines definition – is that really a list of lists of strings, or is it a function that breaks a string into, well, words on lines? When I write it, I don’t immediately think about the text variable that happens to lie around.
Instead, I think “What do we want from this function?” We already know this, so we encode this intention in the type signature:
wordsOnLines :: String -> [[String]]
Then we start typing the definition.
wordsOnLines :: String -> [[String]]
wordsOnLines = m
The language server can recognise that the only valid things here are those that return a list. Among them are map, which we will pick because we want to get the words out of the lines at the end.
wordsOnLines :: String -> [[String]]
wordsOnLines = map w
Here again completion can be helpful because if the result is given by map words we indeed would get a [[String]] out.
wordsOnLines :: String -> [[String]]
wordsOnLines = map words
Then to get the argument into the right shape we need another operation to split the lines, i.e. to go from String to [String].
wordsOnLines :: String -> [[String]]
wordsOnLines = map words . l
Again, completion can be useful, because there are only so many functions that fit that template.
wordsOnLines :: String -> [[String]]
wordsOnLines = map words . lines
And this is the entire function.
Is composing functions better than piping values? Depends! I suspect some people are wired to prefer composition, and others are wired to prefer pipelines.
I think the point is you want left-to-right writing and reading of the code because that’s more natural for many, as the many spoken languages are written and read that way. (Note: I’m aware that right-to-left languages exist)
Left-to-right (or right-to-left) without jumping around all the time is also easier to type.
So these two are both good:
diffs & filter (...) & filter (...) & length
diffs.filter(...).filter(...).length()
This is not good:
length(filter(filter(diffs, ...), ...))
Whether you combine functions or other things is not the point IMO.
(Note: I’m aware that right-to-left languages exist)
Then, to be more precise, it’s about not having to switch the order halfway through; I’d probably compose length(filter(filter(diffs, ...), ...)) middle-out, which requires constantly moving from the end of the line back to the start to wrap the result in a new function call, and then to the end again to close the brackets.
Every time you go back to the start of the line, I suppose there’s nothing stopping the autocomplete from working if you type a space, go back a character, and manually activate the autocomplete again, but it’s just tedious to write, tedious to autocomplete, and tedious to read, too.
What I’m saying is length(filter(filter(diffs, ...), ...)) is left-to-right if we focus on the functions instead of the flow of data. What is the output of the expression? A length. What is it a length of? A filter. What is it a filter of? Etc. That’s left-to-right.
I’m not sure that’s how the ideal reading of this code would be though.
I think ideally I would read it as: “length of diffs filtered by … and …”, which in code would look like: length (diffs & filter (...) & filter (...)). (Or with $ instead of the parens, if the operator associativities are right.)
So maybe you want the flexibility in your language to chain a function on the right (foo & f or foo.f()) but also on the left (f foo or f(foo)) for the best reading.
I think the use of different types of tasks in the function-composition-first and data-flow-first examples in this conversation is telling.
Here, to read the pipeline style you need to form an idea of what is the shape of data computed by the reading position; length(filter(filter(… is not really friendly to function-first reading because the argument defining what filtering is happening in the first filter is still somewhere to the right.
On the other hand, there are tasks and underlying APIs where each function is described in a «region» of text and there are natural ways to understand the part of the composition that you have already read.
I wonder if it might also be related to bottom-up and top-down tasks: there is search for good abstractions that are very widely applicable, there is search of best representation of the slightly bespoke data-shaping the current domain benefits from, and they are better expressed in different ways.
The unfortunate, programming-unfriendly order likely originates from math notation:
Wikipedia specifies:
The SETL programming language (1969) has a set formation construct which is similar to list comprehensions. E.g., this code prints all prime numbers from 2 to N:
This might be a precursor of today’s list comprehensions in Haskell and Python.
The friendly alternative, like the pipe operator in OCaml, was introduced very late (4.01): val (|>) : 'a -> ('a -> 'b) -> 'b