c-from-scratch: Learn to build safety-critical systems in C
11 points by felixyz
TL;DR: smells to me like AI...
So, given the recent AI Zig book debacle, can we be reasonably sure this isn't AI-generated as well?
My AI prose spidey senses are tingling (based purely on vibes):
Yes, I was about to write a similar comment before reading yours. Other details stand out too, like the heavy use of bullet points and italics. Lesson 1 also reeks of it, e.g. 'And answers it correctly, always, under all conditions. That's it. One question. But answered with mathematical certainty.' This is a strong AI indication, as LLMs tend to spew out these fillers to keep from losing track of their task.
Yeah the first LESSON.md absolutely reeks of AI slop. Behold https://github.com/williamofai/c-from-scratch/blob/main/projects/pulse/lessons/01-the-problem/LESSON.md :
This is just spam.
I'm begging people, look, C can be fun, programming recreationally can be fun, but please, please, do not write safety critical systems in C. You are not as smart as you think you are.
Cool project, but it raises an interesting question: you can write safety-critical systems in C, but in 2026, is there any good reason to do so? I've been struggling with choice paralysis over this for months for a new project I want to work on, and it's kind of like: why not just bite the Rust bullet, or even go a step further and learn Lean or something to prove the implementation as I write it? What do you gain in 2026 from starting a safety-critical systems project in C?
While rustc has been qualified under some safety standards, it hasn’t for all of them.
I believe C++ has been for pretty much all of the places C has. But a bunch of these are incredibly silly.
For example, some things require MISRA C. MISRA C requires that your C program has no memory-safety bugs. How do you get there? Well, you run a bunch of static analysis things and hope. And when I run these things on a CHERI system and get traps from memory-safety violations I go and look at the code and I find the comment that turned off the analyser false positive. And then a bunch of other examples of the same comment where it really is a false positive and so developers are trained to do the same thing.
I've recently been looking at Verus with Rust, and it's the first thing I've seen that I'd actually want to use for safety-critical systems and can use in embedded contexts.
A lot of embedded systems are still written in C for practical reasons: maybe you bought a chip whose vendor only ships a C compiler, etc.
Give the SPARK subset of Ada a chance. Full contract-based programming, built for maximum readability, very battle-tested in the industry and certified for pretty much all critical applications. Rust has some upsides, but program correctness doesn't end with memory correctness. Ada's strong suit is its type system, because you can express pretty much any type with it (and it has primitives for fixed-point numbers, ranged integers, etc.). My favourite example is that you can define a type in Ada that can only hold prime numbers:
type Prime is new Positive with
Predicate => (for all Divisor in 2 .. Prime / 2 => Prime mod Divisor /= 0);
How cool is that?
(Honest curious question.) How is that significantly different from languages where you can control how values of a custom type are constructed? I could see that Ada maybe prevents any kind of fiddling to break the contract, which I'm not sure many languages do, but...
One example is Rust's NonZero types: https://doc.rust-lang.org/stable/std/num/struct.NonZero.html#layout-1. rustc needs a hint that the zero bit pattern is invalid so it can be reused for other purposes. A sufficiently smart compiler could recognize the invariant automatically, albeit not reliably.
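A minimal sketch of the layout point above, using the long-stable NonZeroU32 alias: the compiler knows zero is an invalid bit pattern, so it reuses it for the None variant and the Option costs no extra space.

```rust
use std::num::NonZeroU32;

fn main() {
    // The zero bit pattern is invalid for NonZeroU32, so rustc reuses
    // it to represent None: the Option takes the same space as a u32.
    assert_eq!(
        std::mem::size_of::<Option<NonZeroU32>>(),
        std::mem::size_of::<u32>()
    );

    // Construction is checked: zero yields None, anything else Some.
    assert!(NonZeroU32::new(0).is_none());
    assert_eq!(NonZeroU32::new(5).map(NonZeroU32::get), Some(5));
}
```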
Integer range types in Ada also let you completely avoid array bounds checks when the range is known to match the array's bounds.
Theoretically, it also allows very tight bit packing and memory optimization (e.g. combining bitfields or packing several enum ranges into one). I don't know if any actual compilers do this.
All of this can be done manually in Rust with get_unchecked or with encapsulation and bit-twiddling, but it's unsafe and usually unergonomic, often working against the language. With the additional information, the compiler can do more safe optimizations.
I agree that the prime numbers example is cool, but it feels a bit artificial; I don't see how a compiler could benefit from that invariant.
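The get_unchecked point above can be sketched like this (Idx, LEN, and lookup are hypothetical names for illustration): the index type's invariant is established once at construction, so the lookup can skip the bounds check — but in Rust that elision needs an unsafe block, whereas Ada's ranged integers give it to you with the compiler checking the proof obligation.

```rust
const LEN: usize = 16;

#[derive(Clone, Copy)]
struct Idx(usize); // field would be module-private in real code; invariant: 0 <= idx < LEN

impl Idx {
    // The only way to obtain an Idx: out-of-range values are rejected here.
    fn new(i: usize) -> Option<Idx> {
        if i < LEN { Some(Idx(i)) } else { None }
    }
}

fn lookup(table: &[u8; LEN], i: Idx) -> u8 {
    // SAFETY: Idx::new guarantees i.0 < LEN, so no bounds check is needed.
    unsafe { *table.get_unchecked(i.0) }
}

fn main() {
    let table = [7u8; LEN];
    let i = Idx::new(3).expect("in range");
    assert_eq!(lookup(&table, i), 7);
    assert!(Idx::new(16).is_none()); // out of range: cannot be constructed
}
```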
Could you please elaborate, as I don't understand your question? To keep it simple, how would you define such a prime number type in Rust?
Before we dive in, if you have a powerful type system, you avoid having to check constraints at the beginning of functions, as you often see in languages with weaker type systems.
I think this StackOverflow answer is correct. When you make a struct public, but the struct fields are not public, only the defining module and its submodules can construct arbitrary values. So you can ensure (manually) that all public functions that construct the type enforce any invariant.
Of course, this is not bulletproof, and Ada's solution might be. But I think Rust (or, I dunno, constructors in Java, which can be "broken" with reflection, IIRC) is "good enough" for most cases?
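The "private fields plus checked constructor" pattern from that StackOverflow answer looks roughly like this in Rust (a sketch; the module and function names are made up). Unlike Ada/SPARK, the predicate here is checked at runtime when the value is constructed, not discharged statically:

```rust
mod prime {
    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    pub struct Prime(u32); // field is private: outside code cannot write Prime(4)

    impl Prime {
        // The only public way to construct a Prime; the invariant is
        // checked here once, so every Prime value is genuinely prime.
        pub fn new(n: u32) -> Option<Prime> {
            if n < 2 {
                return None;
            }
            // Trial division is enough for a sketch.
            if (2u32..).take_while(|d| d * d <= n).any(|d| n % d == 0) {
                None
            } else {
                Some(Prime(n))
            }
        }

        pub fn get(self) -> u32 {
            self.0
        }
    }
}

fn main() {
    assert!(prime::Prime::new(4).is_none()); // rejected at construction
    assert_eq!(prime::Prime::new(7).map(|p| p.get()), Some(7));
}
```

Holding a Prime then "proves" the check already ran, exactly as the Elm-style opaque-type argument goes, but the proof lives in the module author's discipline rather than in the compiler.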
What you are describing seems like opaque types in Elm - you specify a type Prime that holds an Int inside, and as the library/module author make sure that the functions constructing that type run the validation (and presumably return Nothing/None if you try to construct Prime(4)), and those that modify it keep that invariant. So holding a Prime value "proves" this number has been checked (though the library author can still make a mistake).
I'm assuming SPARK runs these invariant checks before runtime instead of at runtime, and doesn't let you make these errors.
Ah yes, this is a good point! Julia has this too: you can define so-called inner constructors for structs, which override the default constructor.
With Ada it goes a bit further, though, as you can set static and dynamic predicates on types. The aforementioned prime example is one for a dynamic predicate, but static predicates (e.g. bounds) can be used in contract programming and static verification. This way, you can prove things about your program outside of the runtime.
This is exactly what I wondered about: is this predicate checked at compile-time or run-time? Thanks for answering my question before asking ;-)
Is there a list of "awesome-safety-features" for SPARK that are NOT checked at run-time? Because I don't want my poor microcontroller to check for an overflow on every add instruction...
You can prove in SPARK that your code will not throw exceptions at runtime, and then you can disable quite a lot of checks. You can also disable type constraint checking, if you want, and then Ada is as fast as C.
Yep, that is const in Rust, constexpr in C++, comptime in Zig... I think.
I think const is pretty weak in Rust as to the things that it can "prove", whereas if I'm not mistaken (I've only read about it), Zig can do a lot at compile time.
(I think the "theorem prover languages" such as Lean can do a lot more compile time validation.)
From my limited knowledge, Rust < Zig < Lean in how much you can validate before runtime. I wonder where Ada lies.
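For a concrete sense of what Rust's const evaluation can do today (a small sketch, not a measure of its limits): a const fn runs at compile time when used in a const context, and a failing assertion there is a compile error rather than a runtime one.

```rust
// Loops and mutation in const fn have been stable since Rust 1.46.
const fn gcd(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let t = a % b;
        a = b;
        b = t;
    }
    a
}

// Evaluated entirely at compile time.
const G: u64 = gcd(48, 36);

fn main() {
    assert_eq!(G, 12);
    // The same fn is still callable at runtime.
    assert_eq!(gcd(17, 5), 1);
}
```

Zig's comptime and Lean's type-level proofs go considerably further than this, which is the ordering the comment above suggests.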
At least in aviation, there is no publicly known precedent of Rust in safety-critical software subject to certification. But there is strong ambition :)
In the past, I worked as a Software Safety Engineer on embedded graphics and GPGPU drivers (mobile, EVs, etc.). We were using a subset of C/C++ (through a certified fork of Clang) for production code, which was checked by humans and by dedicated ML models trained to assure compliance with various standards. Rust was used only for internal tooling, like a distributed version of Ninja that I designed.
On a quick skim, it's fun to see that "always return DEAD" satisfies all of their requirements. :)