It's neat that they implemented in javascript a way to cross compile javascript to a shader. You're right that the power draw of a GPU isn't trivial, but the watts per megaflop are lower than on a CPU for well-crafted, highly parallel compute applications, even on mobile devices. I don't imagine using GPU.js on a phone, though; my personal assumption is that GPU.js is mostly meant for developers playing around on desktops right now. The benchmark specs on the GPU.js page describe a desktop system.

Blame Apple. They have the worst graphics stack when it comes to stuff like this.

Most GPUs are not pre-emptable (maybe that's changing?). Because they are not pre-emptable, if you give them something to do they will do it until finished. There is no interrupting them, saving state, or switching to something else like with a CPU. Microsoft realized this and built a timeout into the OS: if the GPU is given something to do and doesn't come back in a few seconds, the OS resets it (like shutting the power off and turning it back on, or rather sending a PCI reset). You need to be aware of this in Windows programming if you have data you need on the GPU.
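Since a non-preemptable kernel that outruns the watchdog gets the whole device reset, the standard mitigation is to split long work into many short submissions. A minimal sketch of that idea in plain JavaScript: the generator stands in for a queue of short kernel launches, and `runChunked` and the chunk size are made up for illustration, not GPU.js or Windows API.

```javascript
// Sketch: split a long computation into short steps so no single
// submission can run past a driver watchdog. Each `yield` is where a
// real driver could preempt or interleave other work.
function* mapInChunks(data, fn, chunk = 10000) {
  const out = new Array(data.length);
  for (let start = 0; start < data.length; start += chunk) {
    const end = Math.min(start + chunk, data.length);
    for (let i = start; i < end; i++) out[i] = fn(data[i]);
    yield end; // progress marker; control returns to the scheduler here
  }
  return out;
}

// Drain the stepper to completion (a scheduler could do other work
// between steps instead of looping tightly like this).
function runChunked(data, fn, chunk) {
  const g = mapInChunks(data, fn, chunk);
  let r = g.next();
  while (!r.done) r = g.next();
  return r.value;
}
```

The point is structural: no single step holds the device long enough to trip the timeout, so a reset can't wipe out the whole computation.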
The cool part about GPU.js is the GPU access with transparent CPU fallback - you get to write GPU-CPU agnostic code in javascript.
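That fallback pattern is easy to sketch. This is not GPU.js's actual internals, just the shape of the idea: the caller writes one per-element kernel body, and a backend is chosen at creation time (`gpuAvailable` here is a hypothetical stand-in for real WebGL feature detection).

```javascript
// Sketch of transparent CPU fallback: one kernel body, two backends.
// `gpuAvailable` is a hypothetical flag standing in for probing for a
// usable WebGL context.
function createKernel(kernelFn, { gpuAvailable = false } = {}) {
  if (gpuAvailable) {
    // GPU.js would compile kernelFn to a shader here; omitted in this sketch.
    throw new Error("GPU backend not implemented in this sketch");
  }
  // CPU fallback: run the same kernel body once per output index.
  return function (input) {
    const out = new Float32Array(input.length);
    for (let i = 0; i < input.length; i++) out[i] = kernelFn(input[i], i);
    return out;
  };
}

// Caller code stays backend-agnostic either way:
const double = createKernel(x => x * 2);
```

The caller never learns which backend ran, which is the whole appeal.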
There are several use cases listed here in the comments and on the GPU.js site, e.g., ray tracing and matrix multiplication. A really obvious one for HN would be neural network training and/or inference, or any other optimization technique. It's also useful for physics sims, video editing, and image processing. That's not to mention that it's just fun, in the spirit of hacking, to figure out how to use your systems in weird ways.

Most people here would agree frameworks and ads and analytics are crazy.
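Matrix multiplication is the canonical GPU example from that list because every output cell is independent of the others. In plain JavaScript (not the GPU.js API), the kernel-friendly structure looks like this; the inner body is exactly what would become one GPU thread:

```javascript
// Multiply an n×k matrix A by a k×m matrix B.
// Each output cell (row, col) depends only on one row of A and one
// column of B, so all n*m cells can be computed in parallel on a GPU.
function matMul(a, b) {
  const n = a.length, m = b[0].length, k = b.length;
  const out = Array.from({ length: n }, () => new Array(m));
  for (let row = 0; row < n; row++) {
    for (let col = 0; col < m; col++) {
      // This inner body is the part that maps to one GPU thread.
      let sum = 0;
      for (let i = 0; i < k; i++) sum += a[row][i] * b[i][col];
      out[row][col] = sum;
    }
  }
  return out;
}
```

Neural network inference is mostly repeated applications of exactly this kernel, which is why it shows up in every GPU compute demo.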
I guess there's two different topics in there.

First, I agree completely that the web has insane amounts of bloat relative to the size of the content you request.

Second: you're right, but it's not by design; it's just a little buggy and a lot complicated. There are still easy ways to lock a browser up with CPU code too. It's harder to crash the OS, but it does happen. There also shouldn't be ways to crash the internet, or a large-scale redundant network service like the ones Google & Amazon have set up, with a single microservice or database corruption or flaky cache or router, yet cascading failures are happening all the time and getting long post-analysis write-ups featured on HN, even. The main thing that's "wrong" is complexity, but it's also here to stay; there's no going back, there's just tightening up the sandboxes and hardening the APIs. GPU resource management is still a bit more raw than what the OS & CPU have, but it's steadily improving every year. And aside from crashing sometimes, browsers have become very careful about GPU sandboxing for privacy. In short, it has only gotten better over time, and it won't be long before a webpage really can't crash the OS just by using the GPU.