To be a better programmer, write little proofs in your head (2025)
44 points by mt
Strong +1. Couple of points I'd add:
Ideally you would use a language that supports the specification of such variants, invariants, pre- and post-conditions as part of the language, but alas... When the performance hit is negligible, I manually implement it right in the code (for example, guaranteeing the termination of a loop inside bootloader code).
It’s funny you should mention that because the Curry-Howard isomorphism essentially states that programming languages and formal logic are quite literally two sides of the same coin. Most programmers are familiar with only one half of the coin because most programming languages don’t fully support one half or the other.
I’ve really been leaning hard into Lean lately and am quite impressed with how cleanly it unifies both concepts… how it’s both a functional programming language and also a state of the art theorem prover. It really feels like the programming language of the future.
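A minimal Lean 4 sketch of the unification (the names here are mine, not from any library): the same arrow type is read as an implication when it lives in Prop and as a function type when it lives in Type, and the same term inhabits both.

```lean
-- Under Curry–Howard, a function is a proof of an implication.
-- This term proves A → (A → B) → B: "from A and A → B, conclude B".
def modusPonens {A B : Prop} (a : A) (f : A → B) : B := f a

-- The identical term shape as ordinary function application on data:
def apply {α β : Type} (x : α) (f : α → β) : β := f x
```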
unifies both concepts
All of the main proof assistants in use today maintain a clear distinction between propositions and types.
Curry-Howard is not necessary for a computer implementation of logic.
Read more at Propositions as types: explained (and debunked).
All of the main proof assistants in use today maintain a clear distinction between propositions and types.
For what it's worth, Agda and Idris don't make this distinction. Also, Lean and Coq distinguish between Prop and Type, but I have a hard time believing that this violates Curry-Howard; both are still types (in the general sense).
Read more at Propositions as types: explained (and debunked).
I'm familiar with this space, but I'm having a hard time understanding what this article is articulating. The magic about Curry-Howard is that there's a bijection: I can take any programming language and extract a logic, and I can take any logic and extract a programming language. To me, this has totally shifted my perspective on what programming is, and the parent article really resonates with how I approach programming these days. This article you linked seems to be illuminating differences between classical logic and functional programming, but classical logic is simply not the extracted logic for functional programming? But maybe I'm just missing the point.
Concerning classical logic and programming, there is some really interesting work here. This was a talk (recording) that I really loved from POPL 25 that explored the lambda-calculus + continuations as an isomorphism to classical logic. Wadler has some similar thoughts in this paper.
Agda and Idris don't make this distinction
When reading around this discussion (trying to understand it better) I found https://agda.readthedocs.io/en/latest/language/prop.html which I gather is saying that Agda has added a Set/Prop distinction similar to Rocq, in order to support definitional irrelevance of proofs that Paulson discusses.
As I understand it, Idris aims to be a programming language not a proof assistant. Idris 2 has Type:Type which is unsound but tolerable for its purposes. Idris 2 uses quantitative type theory to make it clear what gets erased before runtime; I dunno if irrelevance of proofs does anything similar or if they are entirely unrelated.
Oh nice, I didn't know about Agda's --prop option! And yeah, I wouldn't consider Idris a proof assistant either, but its core calculus is similar to Agda's (aside from Type:Type) so I thought it was worth mentioning.
I think that quantitative types and proof irrelevance are similar, but unrelated. For example, consider the Idris2 function f : (n : Nat) -> (0 p : Even n) -> Nat that takes the proof p into consideration but doesn't compute with it. I don't think you can just swap out f n p for some other f n q without proving that p = q. You could do this in a proof-irrelevant system, because it doesn't matter how you proved p or q, just that it's proven. I think that the "0" modality only restricts computation, not relevancy, but I hope someone can provide some more insight because I'm not totally sure I have this right.
Anyway, all this to say, I'm not sure how problems of proof irrelevance refute claims about Curry-Howard. I'm not sure I understood the argument in Paulson's article.
It’s funny you should mention that because the Curry-Howard isomorphism essentially states that programming languages and formal logic are quite literally two sides of the same coin.
There is a category error here, in the context of the blog post that you are responding to. (What you say can be interpreted as something valid and relevant in other contexts.)
The context of the blog post is that you write a program and then want to reason about that program. This is completely different from seeing the program itself as a proof (of some other property that does not refer to the program), which is what the Curry-Howard correspondence suggests.
Reasoning about the program we write, informally or formally, is commonplace and can be done with any programming language. (The tricky question is to figure out which reasoning steps are in fact valid; formally we call this establishing a "program logic". But for informal reasoning you reason just based on intuition, and it is already quite helpful.)
In contrast, using the Curry-Howard correspondence productively generally requires a very specific programming language whose type system is consistent as a logic; this is a completely different activity.
A few sections I liked:
I have a long-held conviction that a lot of the "craft" of software is (or ought to be) centered around modifying or augmenting an existing system without destabilizing anything. When making modifications to a codebase, it can be exceptionally useful to know how to prove that behaviors you didn't intend to change were, in fact, left unchanged.
...every change has a "blast radius" - a change to one part of the code may necessitate a change to another part to ensure the consistency/correctness of the whole system.
To connect to Peter Naur's Programming as Theory Building,
The programmer having the theory of the program is able to respond constructively to any demand for a modification of the program so as to support the affairs of the world in a new manner.
This means that understanding the "blast radius" of a change to any part of the program requires having an adequate "theory of the program" represented in your understanding of the software.
I really like Naur's paper, but I don't see a too strong connection on this exact part.
I think putting up firewalls/limiting blast radius is... a more syntactic property? Like if I have a pure function, then I can be sure that as long as I observe the same output value for each input, I'm fine with the change; I can think only on a local, limited scope.
If on the other hand I have some complex object graph with everything mutating everything, I will have to go looking for the end of this change's blast radius; but even without much understanding I can do that with some code navigation (I wish there were better tooling for this!). It's the inverse that requires good code understanding: designing the software so that it has frequent firewalls.
I was explicitly taught to do almost all of this in an intro CS course at CMU in 1984 or so, and yeah, it does work. I find that loop invariants are a particularly useful tool (induction is even better for getting a handle on recursion, but how often does one do "real" recursion?). Also, turning pre- and post-conditions into asserts is easy and serves as both documentation and bug-catching.
Anyway, I guess schools don't teach this anymore?