No Graphics API
79 points by ignaloidas
Impressive post, so many details. I could only understand some parts of it, but I think this article will probably become a reference for future graphics APIs.
I think it's fair to say that for most gamers, Vulkan/DX12 hasn't really been a net positive: the PSO compilation-stutter problem affected many popular games, and while Vulkan has been trying to improve, WebGPU is in a tricky spot because it has its roots in the first versions of Vulkan.
Perhaps it was a bad idea to go all in on a low-level API that exposes so many details when the hardware underneath is evolving so fast. Maybe CUDA, as the post suggests in some places, with its more generic compute support, is the right way after all.
Edit: in some ways this reminds me of MIPS, which exposed way too much detail, like branch delay slots. At the beginning that was thought of as a good thing, but as hardware evolved some of it became just unergonomic. RISC-V later learned from some of those mistakes.
Perhaps it was a bad idea to go all in on a low-level API that exposes so many details when the hardware underneath is evolving so fast. Maybe CUDA, as the post suggests in some places, with its more generic compute support, is the right way after all.
The reason for going all-in on a low level API was the fact that the graphics drivers were opaque piles of garbage. The people who wanted performance were killing themselves bypassing the device drivers and basically wanted access directly at the hardware level. To them, DX12 is mostly doing just fine and they couldn't care less about Metal or Vulkan.
The problem with the proposed API from the article is that the vast majority of computers are laptops, and those need a lot of software support for these features. Intel's support for advanced graphics APIs on laptops is laughably bad, and Intel chips are probably the most numerous graphics chips out there. The open source implementation on Linux is Mesa, and even Mesa isn't compliant with this kind of proposed API: even if it supports a "feature", it may have such ridiculously low limits that it is effectively unimplemented.
Finally, at this point, a really huge problem is that the gaming market is now a loss leader for both AMD and NVIDIA. Any software resources spent on gaming are resources that could be earning 10x+ being deployed on ML/AI. Count yourself lucky that the DX12/Vulkan shift occurred so that the gaming studios can still shove pixels around even as the GPU manufacturers starve the consumer graphics driver teams.
How are laptops specifically making any of this harder? If anything, iGPUs basically guarantee that you can access all of "VRAM" with permanent mappings, while with (desktop) dGPUs you need to Make Sure™ Resizable BAR is enabled.
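For the curious, here's roughly what that check looks like in Vulkan (a sketch of mine, not from the article or the thread; assumes you already have a VkPhysicalDevice in hand):

    // Detect whether a large chunk of VRAM is host-mappable, i.e. an iGPU
    // UMA heap or a dGPU with Resizable BAR. Without ReBAR, dGPUs typically
    // expose only a small (~256 MiB) DEVICE_LOCAL + HOST_VISIBLE heap.
    #include <stdbool.h>
    #include <vulkan/vulkan.h>

    bool has_mappable_vram(VkPhysicalDevice phys) {
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(phys, &mem);
        for (uint32_t i = 0; i < mem.memoryTypeCount; i++) {
            VkMemoryPropertyFlags f = mem.memoryTypes[i].propertyFlags;
            if ((f & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) &&
                (f & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)) {
                VkDeviceSize heap =
                    mem.memoryHeaps[mem.memoryTypes[i].heapIndex].size;
                if (heap > 256ull * 1024 * 1024)
                    return true;  // bigger than the classic 256 MiB BAR window
            }
        }
        return false;
    }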
even Mesa isn't compliant with this kind of proposed API: even if it supports a "feature", it may have such ridiculously low limits that it is effectively unimplemented
What are you talking about?! The Vulkan extensions mentioned don't even really have numeric limits that could be set absurdly low.
VK_EXT_descriptor_buffer
This 100% does have garbagey low limits, and it's exactly one of the extensions I was thinking about.
    % vulkaninfo ...
    maxDescriptorBufferBindings = 3
    maxResourceDescriptorBufferBindings = 1
    maxSamplerDescriptorBufferBindings = 1
This is from a ThinkPad X1 Gen 11--not exactly an old or low end machine. Go take it up with Mesa over their implementation on Intel.
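If you want to see those numbers without vulkaninfo, this is roughly how they are queried (my sketch, assuming the device advertises VK_EXT_descriptor_buffer):

    #include <stdio.h>
    #include <vulkan/vulkan.h>

    // Chain the extension's properties struct into vkGetPhysicalDeviceProperties2.
    void print_descriptor_buffer_limits(VkPhysicalDevice phys) {
        VkPhysicalDeviceDescriptorBufferPropertiesEXT db = {
            .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DESCRIPTOR_BUFFER_PROPERTIES_EXT,
        };
        VkPhysicalDeviceProperties2 props = {
            .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2,
            .pNext = &db,
        };
        vkGetPhysicalDeviceProperties2(phys, &props);
        printf("maxDescriptorBufferBindings = %u\n",
               db.maxDescriptorBufferBindings);
        printf("maxResourceDescriptorBufferBindings = %u\n",
               db.maxResourceDescriptorBufferBindings);
        printf("maxSamplerDescriptorBufferBindings = %u\n",
               db.maxSamplerDescriptorBufferBindings);
    }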
VK_EXT_host_image_copy
Flagged as not implemented even on modern AMD drivers because they don't support extended depths. At least this one is likely to get fixed soon.
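For reference, the feature bit (not the extension string) is what tells you whether it actually works; a minimal check, assuming headers new enough to know about the extension:

    #include <stdbool.h>
    #include <vulkan/vulkan.h>

    bool host_image_copy_usable(VkPhysicalDevice phys) {
        VkPhysicalDeviceHostImageCopyFeaturesEXT hic = {
            .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_HOST_IMAGE_COPY_FEATURES_EXT,
        };
        VkPhysicalDeviceFeatures2 feats = {
            .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
            .pNext = &hic,
        };
        vkGetPhysicalDeviceFeatures2(phys, &feats);
        // If this is VK_FALSE, vkCopyMemoryToImageEXT and friends are off the table.
        return hic.hostImageCopy == VK_TRUE;
    }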
VK_KHR_unified_image_layouts
Too little, too late. This extension effectively acknowledges that VkImages are useless. Everybody operates on buffers because you can put them in whatever form you need without the driver getting in your way. Oh, and you still can't take the address of a VkImage.
Everybody would have been better off with an extension that gives us the moral equivalent of replacing vkAcquireNextImage2KHR with a vkAcquireNextBuffer2KHR, so that we can operate directly on the buffer instead of farting around with a VkImage.
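For contrast, the flow we actually have today (a sketch with error handling elided; vkAcquireNextBuffer2KHR above is the parent's hypothetical, not a real entry point):

    #include <vulkan/vulkan.h>

    // Acquisition hands you an index into an array of VkImages obtained from
    // vkGetSwapchainImagesKHR -- never a buffer or a device address.
    uint32_t acquire(VkDevice dev, VkSwapchainKHR swapchain, VkSemaphore sem) {
        VkAcquireNextImageInfoKHR info = {
            .sType = VK_STRUCTURE_TYPE_ACQUIRE_NEXT_IMAGE_INFO_KHR,
            .swapchain = swapchain,
            .timeout = UINT64_MAX,
            .semaphore = sem,
            .deviceMask = 1,
        };
        uint32_t image_index = 0;
        vkAcquireNextImage2KHR(dev, &info, &image_index);
        return image_index;
    }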
Good article. When I wrote a prototype for a Vulkan render graph a year ago, I did a lot of work to follow the Vulkan API with its resource management: dependency analysis, ahead-of-time initialization of as many things as possible. Only to realize: it really does not matter. Just spend a little CPU time per frame, abstract everything at the level of the API proposed at the end, and be happy. And hope there will be a better shader language than GLSL soon.
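Something like this is all that "little CPU time per frame" amounts to (a sketch with my own names; the barriers are deliberately conservative):

    #include <stddef.h>
    #include <vulkan/vulkan.h>

    typedef struct {
        VkImage image;
        VkImageLayout layout;  // last known layout, updated on every transition
    } TrackedImage;

    // Instead of computing a minimal dependency graph ahead of time, just
    // remember each image's last layout and emit a heavyweight barrier on use.
    void use_image(VkCommandBuffer cmd, TrackedImage *t, VkImageLayout wanted) {
        if (t->layout == wanted)
            return;
        VkImageMemoryBarrier b = {
            .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
            .srcAccessMask = VK_ACCESS_MEMORY_WRITE_BIT,
            .dstAccessMask = VK_ACCESS_MEMORY_READ_BIT | VK_ACCESS_MEMORY_WRITE_BIT,
            .oldLayout = t->layout,
            .newLayout = wanted,
            .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
            .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
            .image = t->image,
            .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
        };
        // ALL_COMMANDS on both sides costs some GPU overlap but removes a
        // whole class of synchronization bugs.
        vkCmdPipelineBarrier(cmd,
                             VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
                             VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
                             0, 0, NULL, 0, NULL, 1, &b);
        t->layout = wanted;
    }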
I'm more of an XDrawRectangle kind of person when it comes to graphics, and if I want to get really fancy with all these newfangled capabilities, I'll sometimes break out the glBegin/glEnd magic.
Then I read a thing called "modern OpenGL" that talks about the new stuff in version 3.3. Then I see a post that says "ACTUALLY modern OpenGL, like 4.2, not the ancient stuff the popular 'modern OpenGL' websites talk about", and I'm just like, golly. Meanwhile, I see some people saying OpenGL is yesterday's news and all the kids are now emigrating to Vulkan. Most illogical.
And now this post, saying Vulkan is pretty much obsolete and actually modern apis are nothing like the modern apis that replaced the modern apis, and certainly nothing like those other modern apis. Fascinating.
Well, golly, most of this post was way over my head since I obviously don't keep up with these things (when it mentioned tiles in the mobile GPUs I was like "oooo, like the NES, I know this!", but yeah, I don't know this), but I am getting the impression that we should just stop calling graphics APIs "modern"; the label seems to have an awfully short shelf life.
As the post discusses, Vulkan 1.0 was released in 2016. It's certainly newer than XDrawRectangle, but also 10 years is a pretty long time.
I'm about halfway through and learning a lot. My personal experience is limited to OpenGL 4 with bindless textures, so it's cool to get a postmortem of both DX12 and Vulkan at the same time. Also: the obligatory link to the 1968 "On the Design of Display Processors" paper.