
> Here's an example of Bevy WebGL vs Bevy WebGPU

I think a better comparison would be more representative of a real game scene, because modern graphics APIs are meant to optimize typical rendering loops and might even add more overhead to trivial test cases like bunnymark.

That said though, they're already comparable, which seems great considering how little performance optimization WebGPU has received relative to WebGL (at the browser level). There are also some performance optimizations at the wasm binding level that haven't made it into Bevy yet and might be noticeable for trivial benchmarks, e.g., https://github.com/rustwasm/wasm-bindgen/issues/3468 (this applies much more to WebGPU than WebGL).

> They're 10k triangles and they're not overlapping... There are no textures per se. No passes except the main one, with a 1080p render texture. No microtriangles. And I bet the shader is less than 0.25 ALU.

I don't know your exact test case, so I can't say for sure, but if there are writes happening per draw call or something then you might have problems like this. Either way, your graphics driver should be receiving roughly the same commands as it would when you use Vulkan or DX12 natively, or WebGL, so there might be something else going on if the performance is a lot worse than you'd expect.

There is some extra API call (draw, upload, pipeline switch, etc.) overhead because your browser executes graphics commands in a separate rendering process, so this might have a noticeable performance effect for large draw call counts. Batching would help a lot with that whether you're using WebGL or WebGPU.
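
For example, here's a rough sketch of what that batching might look like with the JS WebGPU API (TypeScript, types from @webgpu/types; the pipeline, bind group, and buffer layout are assumptions for illustration, and the vertex shader is assumed to read per-sprite data from the storage buffer via @builtin(instance_index)):

    // One upload and one instanced draw per frame, instead of one
    // writeBuffer + draw per sprite, so only a handful of calls cross
    // the content-process -> GPU-process boundary regardless of count.
    function drawSprites(
      device: GPUDevice,
      target: GPUTextureView,
      pipeline: GPURenderPipeline,
      bindGroup: GPUBindGroup,      // binds instanceBuffer as a storage buffer
      instanceBuffer: GPUBuffer,
      instanceData: Float32Array,   // packed per-sprite data for this frame
      spriteCount: number,
    ) {
      device.queue.writeBuffer(instanceBuffer, 0, instanceData);

      const encoder = device.createCommandEncoder();
      const pass = encoder.beginRenderPass({
        colorAttachments: [{ view: target, loadOp: "clear", storeOp: "store" }],
      });
      pass.setPipeline(pipeline);
      pass.setBindGroup(0, bindGroup);
      pass.draw(6, spriteCount);    // 6 vertices = one quad, spriteCount instances
      pass.end();
      device.queue.submit([encoder.finish()]);
    }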



> I think a better comparison would be more representative of a real game scene, because modern graphics APIs are meant to optimize typical rendering loops and might even add more overhead to trivial test cases like bunnymark.

I know, but that's the only instance I could find of the same project compiled for both WebGL and WebGPU.

> Either way, your graphics driver should be receiving roughly the same commands as it would when you use Vulkan or DX12 natively, or WebGL, so there might be something else going

Yep, I know. I benchmarked my program with Nsight and the calls are indeed native, as you'd expect. I forced the DirectX 12 backend because the Vulkan and OpenGL ones are WAYYYY worse; they struggle even with 1000 triangles.

> That said though, they're already comparable, which seems great considering how little performance optimization WebGPU has received relative to WebGL (at the browser level).

I agree. But the whole internet is marketing WebGPU as the faster thing right now, not in the future once it's optimized. The same happened with Vulkan but in reality it's a shitshow on mobile. :(

> There is some extra API call (draw, upload, pipeline switch, etc.) overhead because your browser executes graphics commands in a separate rendering process, so this might have a noticeable performance effect for large draw call counts. Batching would help a lot with that whether you're using WebGL or WebGPU.

Aha. That's kinda my point, though. It's "Slow" because it has more overhead; therefore, by default, I get less performance with more usage than I would with WebGL. Except this overhead seems to be in native WebGPU as well, not only in browsers. That's why I consider it way slower than, say, ANGLE, or a full game engine.

So, the problem after all is that by using WebGPU, I'm forced to optimize it to a point where I get less quality, more complexity and more GPU usage than if I were to use something else, due to the overhead itself. And chances are that the overhead is caused by the API itself being slow for some reason. In the future, that may change. But at the moment I ain't using it.


> It's "Slow" because it has more overhead, therefore, by default, I get less performance with more usage than I would with WebGL.

It really depends on how you're using it. If you're writing rendering code as if it's OpenGL (e.g., buffer writes between draw calls), then the WebGPU performance might be comparable to WebGL or even slightly worse. If you render in a way that takes advantage of how modern graphics APIs are structured (or OpenGL AZDO-style, if that's more familiar), then it should perform better than WebGL for typical use cases.
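
To make that concrete, one example of leaning on the API's structure (a sketch, not from the thread; the pipeline, bind groups, texture format, and object list are assumed to already exist): WebGPU lets you pre-record static draw work into a render bundle once and replay it every frame, so the per-draw JS and IPC overhead isn't paid repeatedly.

    // Record the per-object setPipeline/setBindGroup/draw sequence once...
    function recordScene(device: GPUDevice, pipeline: GPURenderPipeline,
                         objects: { bindGroup: GPUBindGroup; vertexCount: number }[],
                         format: GPUTextureFormat): GPURenderBundle {
      const enc = device.createRenderBundleEncoder({ colorFormats: [format] });
      enc.setPipeline(pipeline);
      for (const obj of objects) {
        enc.setBindGroup(0, obj.bindGroup);
        enc.draw(obj.vertexCount);
      }
      return enc.finish();
    }

    // ...then replaying it each frame is a single call inside the pass.
    // The pass's attachment format must match the bundle's colorFormats.
    function drawFrame(device: GPUDevice, target: GPUTextureView, bundle: GPURenderBundle) {
      const encoder = device.createCommandEncoder();
      const pass = encoder.beginRenderPass({
        colorAttachments: [{ view: target, loadOp: "clear", storeOp: "store" }],
      });
      pass.executeBundles([bundle]);
      pass.end();
      device.queue.submit([encoder.finish()]);
    }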


The problem is that it's gonna be hard to use WebGPU in such cases, because when you go that "high" you usually require bindless resources, mesh shaders, ray tracing, etc., and that would mean you're a game company, so you'd end up using platform-native APIs instead.

Meanwhile, for web, most web games are... uhhh, web games? Mobile-like? So you usually aim for the best performance, where every shader ALU, draw call, vertex, and bit of driver overhead counts.

That said, I agree with your take. Things such as this (https://voxelchain.app/previewer/RayTracing.html) would probably run way worse in WebGL. So I guess it's just a matter of what happens in the future, and WebGPU is getting ready for that! I hope that in 10 years I can have at least PBR on mobiles without them burning.


Mobile is where WebGPU has the most extreme performance difference compared to WebGL / WebGL2.

I'm not convinced by any of these arguments about "knowing how to program in WebGPU". Graphics 101 benchmarks are the entire point of a GPU. Textures, 32-bit data buffers, vertices: it's all the same computational fundamentals and literally the same hardware.


> I'm not convinced by any of these arguments about "knowing how to program in WebGPU". Graphics 101 benchmarks are the entire point of a GPU.

You're totally right that it's the same hardware, but idiomatic use of the API can still affect performance pretty drastically.

Historically, OpenGL and DX11 drivers would try to detect certain patterns and fast-path them. Modern graphics APIs (WebGPU, Vulkan, DX12, Metal) make these concepts explicit to give developers finer-grained control without needing a lot of the fast-path heuristics. The downside is that it's easy to write a renderer targeting a modern graphics API that ends up being slower than the equivalent OpenGL/DX11 code, because it's up to the developer to make sure they're on the fast path, instead of relying on driver shenanigans. This was the experience with many engines that ported from OpenGL to Vulkan or DX11 to DX12: performance was roughly the same or worse until they changed their architecture to better align with the new APIs.
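
As a small illustration of the "explicit" point (a sketch with the JS WebGPU API; the shader module and entry point names are placeholders): all the state an OpenGL driver would have to guess about and recompile behind your back is baked into an immutable pipeline object, so the intended fast path is to create it once at load time and only bind it at draw time.

    // Created once, ideally at startup. createRenderPipelineAsync avoids a
    // shader-compilation hitch on first use; recreating pipelines per frame
    // or per draw is the classic way to end up slower than OpenGL/DX11.
    async function buildPipeline(device: GPUDevice, shader: GPUShaderModule,
                                 format: GPUTextureFormat): Promise<GPURenderPipeline> {
      return device.createRenderPipelineAsync({
        layout: "auto",
        vertex: { module: shader, entryPoint: "vs_main" },
        fragment: { module: shader, entryPoint: "fs_main", targets: [{ format }] },
        primitive: { topology: "triangle-list" },
      });
    }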

Simple graphics benchmarks aren't a great indicator for relative performance of graphics APIs for real use cases. As an extreme example, rendering "hello triangle" for Vulkan vs. OpenGL isn't representative of a real use case, but I've seen plenty of people measure this.


Mostly because their drivers suck, and don't get updates.

Android 10 made Vulkan required, because between Android 7 and 10, most vendors didn't care, given its optional status.

Android 15 is moving to OpenGL on top of Vulkan, because yet again, most vendors don't care.

The only ones that care are Google with their Pixel phones (duh), and Samsung on their flagship phones.

There is also the issue that, since Vulkan is NDK-only and has no managed bindings available, only game engine developers care about it on Android.

Everyone else would still have better luck targeting OpenGL ES than Vulkan, given the APIs and driver quality, so it isn't a surprise that Google is now trying to push for a WebGPU subset on top of OpenGL ES.



