Note that Immediate Mode GUI wasn't conceived by Casey, haha. He's just a proponent of it. The idea of using procedural code to draw pixels on the screen has been around since the dawn of time.
"Using procedural code to draw pixels on the screen" is not immediate-mode UI, it's immediate-mode graphics. The earliest example I've been able to find of the immediate-mode UI API style is the little-known zmw (zero-memory widget) library, which was released publicly about a year before Casey's immediate-mode UI presentation. Of course I don't know when Casey came up with the idea, only when he presented it, but the same ambiguity applies to zmw.
The ZMW macro in the code sample on that landing page drives a local fixed-point iteration loop for layout. Also, the UI pass is executed once per event (with 'draw' as its own event type), rather than the now-conventional IMGUI approach of maximal event coalescing (to the point of losing the time ordering between some event types) and processing all inputs in the same pass as drawing. So there are plenty of specific design differences and trade-offs, but that flexibility in the design space has always been there.
A lot of the early discussions on the Molly Rocket forums were people posting about their experiments with different IMGUI approaches, e.g. single-pass vs multi-pass, how much should be code-driven vs data-driven, different ways of managing persistent view state, and the best ways of integrating with existing retained-mode libraries. Aras P (NeARAZ) from Unity was also a poster there, and that's how Unity's OnGUI system for writing editor UI ended up using the IMGUI approach.