Why is Zig so Cool?
53 points by alurm
Zig's resemblance to interpreted languages is quite striking, particularly its concept of compile-time execution, which unfortunately is not stressed enough in this article.
Definitely not stressed enough! To me that's the coolest part, but everything else listed is also awesome.
It is difficult to overstate the usefulness of compile-time metaprogramming in a systems language. Modern C++ kind of fell into it by accident, though cleaned up nicely by C++20. With Zig it is a first-class intentional feature.
I can't live without it. Generating context-optimal composable code automagically that is thoroughly checked at compile-time and run-time saves an enormous amount of code writing. It greatly increases safety too, which is at least as valuable as the more commonly discussed performance and optimization benefits. More than anything else, the inability to do this in Rust limits my usage of the language.
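For anyone who hasn't seen it, here is a minimal sketch of what "checked at compile time" looks like in Zig: types are ordinary comptime values, so constraints become plain code (tag names and details vary a bit between Zig versions, e.g. .Int/.Float before 0.14):

    const std = @import("std");

    // The switch on @typeInfo runs entirely in the compiler;
    // unsupported types fail the build via @compileError.
    fn max(comptime T: type, a: T, b: T) T {
        switch (@typeInfo(T)) {
            .int, .float => {},
            else => @compileError("max only supports numeric types"),
        }
        return if (a > b) a else b;
    }

    test "comptime-checked generic" {
        try std.testing.expectEqual(@as(i32, 7), max(i32, 3, 7));
    }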
It is possible to go overboard. You are running an interpreter at compile-time. There are many clever examples of unreasonably complex apps being run inside the compiler using C++.
I'm currently building a libc for a portable application platform, to make porting existing applications to it easier, and seeing the C++ runtime need to call over 400 functions from __init_array before execution can even begin is quite something. By contrast, Zig has a fairly light runtime, and every static initialization happens at comptime, which is really nice.
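A tiny illustration of that difference (a sketch, nothing beyond the language itself): the initializer below is evaluated by the compiler and emitted as static data, so no startup code runs for it, whereas the C++ equivalent with a non-constexpr initializer would be registered in __init_array:

    fn buildSquares() [256]u32 {
        var t: [256]u32 = undefined;
        for (&t, 0..) |*e, i| e.* = @intCast(i * i);
        return t;
    }

    // Evaluated at comptime; lands in the binary as read-only data.
    const squares: [256]u32 = buildSquares();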
Still, easily the best parts of Zig are the build system and the toolchain itself. Combining the compiler, standard library, etc. into one tool makes the development experience much better and enables better whole-program optimization. Zig building new tools to unify the current native-toolchain mess is refreshing.
Compile-time execution was a big draw when I once wrote a project in Zig: I used comptime to compute the attack arrays for all the chess pieces. This would be very quick at runtime, but it slowed compilation down a lot. Zig was fairly interesting overall, though; hopefully the comptime interpreter gets faster.
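For readers curious what that looks like, here's a hedged sketch of the idea (not the commenter's actual code): a 64-entry knight-attack bitboard table built entirely at compile time. The @setEvalBranchQuota call hints at the compile-time cost mentioned above:

    const knight_attacks: [64]u64 = comptime blk: {
        // The comptime interpreter needs a larger branch budget for loops like this.
        @setEvalBranchQuota(100_000);
        var table = [_]u64{0} ** 64;
        const deltas = [_][2]i32{
            .{ 1, 2 },  .{ 2, 1 },  .{ -1, 2 },  .{ -2, 1 },
            .{ 1, -2 }, .{ 2, -1 }, .{ -1, -2 }, .{ -2, -1 },
        };
        for (0..64) |sq| {
            const f: i32 = @intCast(sq % 8);
            const r: i32 = @intCast(sq / 8);
            for (deltas) |d| {
                const nf = f + d[0];
                const nr = r + d[1];
                if (nf >= 0 and nf < 8 and nr >= 0 and nr < 8)
                    table[sq] |= @as(u64, 1) << @intCast(nr * 8 + nf);
            }
        }
        break :blk table;
    };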
From looking at the open issues/milestones, the goal appears to be getting compile-time execution to run roughly at the speed of equivalent Python (under CPython). That should be plenty fast enough to build the attack arrays when the time comes.
What advantage does this approach have over generating those arrays with a helper program and using it as static data?
I think the attack tables end up being hundreds of KBs, so that'd be a fairly big source file.
I have hand-written source files bigger than that lol... but even if so, wouldn't that be an argument to generate the tables separately, so generation is faster in the first place and the result can be cached between builds instead of being regenerated so often?
You could generate binary data and include it with @embedFile. If you make the structs that represent the attack moves extern structs, they have a defined in-memory representation, and you can simply cast the bytes. This would be fairly efficient:
    const attack_binary = @embedFile("attack_moves_generated");
    pub const attacks: *const [attack_binary.len / @sizeOf(Attack)]Attack = @ptrCast(attack_binary);
Where attack_moves_generated is built by a helper program and configured as an importable module by the build system.
EDIT: I think the above might break because of alignment; this should work instead:
    const attack_binary align(@alignOf(Attack)) = @embedFile("attack_moves_generated").*;
    pub const attacks: *const [attack_binary.len / @sizeOf(Attack)]Attack = @ptrCast(&attack_binary);
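For completeness, the build wiring could look roughly like this (a sketch, not tested; gen_attacks and the file paths are hypothetical, and the exact std.Build API differs between Zig versions):

    // build.zig excerpt: run a generator at build time and expose its output
    // under the name that @embedFile looks up.
    const gen = b.addExecutable(.{
        .name = "gen_attacks", // hypothetical helper program
        .root_source_file = b.path("tools/gen_attacks.zig"),
        .target = b.graph.host,
    });
    const run_gen = b.addRunArtifact(gen);
    const generated = run_gen.addOutputFileArg("attack_moves_generated");
    exe.root_module.addAnonymousImport("attack_moves_generated", .{
        .root_source_file = generated,
    });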
My personal feeling is that cool projects are what make a language and its ecosystem seem cool.
In this regard, Zig has a lot going for it, with projects like Ghostty and Bun.