oops, cubic macro
30 points by polywolf
Another fun example:
macro_rules! m { ($($tt:tt)*) => { m!($($tt)*, $($tt)*); }; }
m!();
This hangs rustc (it eats unbounded RAM without ever tripping the recursion depth check), but rust-analyzer has to deal with it gracefully, as these kinds of pathologies happen all the time in code that is actively being written.
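To see why this explodes in width rather than depth: each expansion step substitutes the captured fragment twice plus a separating comma, so the fragment's token count roughly follows w(n+1) = 2·w(n) + 1 while the recursion depth grows by only one per step. A minimal sketch of that growth model (an approximation I'm using for illustration; real expansions carry a few extra tokens per step for the `m!` invocation itself):

```rust
// Approximate token-count growth of `m!($($tt)*) => { m!($($tt)*, $($tt)*); }`.
// Starting from `m!()` (zero captured tokens), each step duplicates the
// captured fragment and adds one comma: w(n+1) = 2*w(n) + 1, i.e. w(n) = 2^n - 1.
fn width_after(steps: u32) -> u128 {
    (0..steps).fold(0u128, |w, _| 2 * w + 1)
}

fn main() {
    // Depth grows linearly (one recursion per step), but width doubles each
    // step, so memory is exhausted long before the default recursion limit
    // of 128 would be reached.
    assert_eq!(width_after(0), 0);
    assert_eq!(width_after(4), 15);
    assert_eq!(width_after(100), (1u128 << 100) - 1);
    println!("fragment width after 20 steps: {}", width_after(20)); // 1048575
}
```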
It seems like it's easy enough to deal with stuff like that by adding "fuel", but the problem is then that the amount of "fuel" left makes incremental compilation ineffective! I think, for macro expansion, we ended up limiting both the depth and the width of expansions?
It seems like it's easy enough to deal with stuff like that by adding "fuel", but the problem is then that the amount of "fuel" left makes incremental compilation ineffective!
Yes, that is an annoying problem! I had the same problem with limiting nesting depth in Typst while keeping incremental compilation deterministic and effective.
The solution implemented in Typst uses an opaque type that can only answer the question "am I within X depth". When going one level deeper, we create a new instance of the type that points to the existing opaque instance but asks it "are you within X - 1 depth". And when X becomes negative before the root instance (in what is basically a linked list) is hit, the check fails. This is slightly simplified (e.g. there is some caching to avoid repeated list traversals), but that's the gist of it.
Memoized functions only ever receive the opaque tracked type and when validating a cache hit, the memoization system checks this opaque type against the observed depth checks.
This system has some overhead (more than I would wish for something that basically just counts up ^^), but it's working quite well at least.
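A rough Rust sketch of that scheme (the names and details here are my own invention for illustration, not Typst's actual implementation): each nesting level holds a reference to its parent, and the "am I within X depth" check walks the chain toward the root, spending budget as it goes.

```rust
// Hedged sketch of an opaque depth guard forming a linked list of nesting
// levels, modeled on the scheme described above (identifiers are invented).
struct DepthGuard<'a> {
    parent: Option<&'a DepthGuard<'a>>,
}

impl<'a> DepthGuard<'a> {
    /// The root guard at depth 0.
    fn root() -> Self {
        DepthGuard { parent: None }
    }

    /// Enter one level deeper; the new guard points at the existing one.
    fn nest(&'a self) -> DepthGuard<'a> {
        DepthGuard { parent: Some(self) }
    }

    /// "Am I within `budget` depth?" Walk toward the root, decrementing the
    /// budget; the check fails if the budget runs out before the root is hit.
    /// A memoization layer would record which of these questions were asked
    /// (and their answers) to validate cache hits later.
    fn within(&self, mut budget: usize) -> bool {
        let mut cur = self;
        loop {
            match cur.parent {
                None => return true, // reached the root in time
                Some(p) => {
                    if budget == 0 {
                        return false; // budget exhausted before the root
                    }
                    budget -= 1;
                    cur = p;
                }
            }
        }
    }
}

fn main() {
    let root = DepthGuard::root();
    let a = root.nest();
    let b = a.nest();
    assert!(b.within(2)); // depth 2 is within a budget of 2...
    assert!(!b.within(1)); // ...but not within a budget of 1
}
```

Because memoized functions only see the opaque guard (never a concrete depth number), two call sites at different absolute depths can share a cache entry as long as they would answer the same "within X" questions the same way.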
Author here :) I think this is the same as https://github.com/rust-lang/rust/issues/95698, which I linked to in the post. I need to figure out how rust-analyzer handles it.
FYI, this "fuel" concept is already a concern for type checking. I recently got a first-hand explanation of the architecture of the new trait solver. Rust enforces a recursion limit (which IIRC you can modify, if you enable a nightly feature flag) for the trait solver, and its caches (which are distinct from rustc's query system, btw) track the "fuel" used to evaluate a cached answer. It will correctly detect that you exceed the recursion limit even if it uses cached answers along the way (there is a lot of special handling involved because hitting the recursion limit does not necessarily fail compilation). So your idea isn't entirely unfavorable to incremental compilation...
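That cached-fuel idea can be sketched like this (a toy model I made up to illustrate the principle, not the new trait solver's actual data structures): each cache entry remembers how much fuel its answer consumed, so a later lookup can charge that cost against the caller's remaining budget and still detect overflow through cached paths.

```rust
use std::collections::HashMap;

// Toy model of a fuel-tracking memo cache (invented for illustration).
// Computes a trivially recursive function where key `n` depends on `n - 1`;
// each cache entry stores (answer, fuel consumed), so a cache hit still
// counts the cached subtree's depth against the current budget.
struct Solver {
    cache: HashMap<u64, (u64, u64)>, // key -> (answer, fuel consumed)
    limit: u64,
}

enum Outcome {
    Done(u64),
    Overflow, // recursion/fuel limit exceeded, even via cached answers
}

impl Solver {
    fn solve(&mut self, key: u64, fuel_used: u64) -> Outcome {
        if fuel_used >= self.limit {
            return Outcome::Overflow;
        }
        if let Some(&(answer, consumed)) = self.cache.get(&key) {
            // Replaying cached work must not hide its depth:
            if fuel_used + consumed > self.limit {
                return Outcome::Overflow;
            }
            return Outcome::Done(answer);
        }
        // Recursive step: key n depends on key n - 1, spending one fuel.
        let answer = if key == 0 {
            1
        } else {
            match self.solve(key - 1, fuel_used + 1) {
                Outcome::Done(a) => a + 1,
                Outcome::Overflow => return Outcome::Overflow,
            }
        };
        // Record the subtree's full fuel cost alongside the answer.
        self.cache.insert(key, (answer, key));
        Outcome::Done(answer)
    }
}

fn main() {
    let mut s = Solver { cache: HashMap::new(), limit: 10 };
    // Fits within the limit and populates the cache for keys 0..=8.
    assert!(matches!(s.solve(8, 0), Outcome::Done(9)));
    // A deeper query hits the cache at key 8 after only 4 fresh steps, but
    // the recorded fuel (8) still trips the limit: 4 + 8 > 10.
    assert!(matches!(s.solve(12, 0), Outcome::Overflow));
}
```

Without the `consumed` bookkeeping, the second query would happily return an answer at an effective depth of 12 despite the limit of 10, which is exactly the kind of nondeterminism (answers depending on what happens to be cached) that breaks this sort of scheme.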
On a related note: I have a list of less-than-plausible ideas for optimizing macro expansion further, which I'm trying to avoid thinking about until I can quantitatively measure macro expansion overhead. One of the ideas on there is to basically JIT everything (after a certain number of steps). You could detect recursive macros and inline them into themselves the way LLVM sometimes does with recursive functions. It sounds like so much fun :D