All eyes are of course on AI, but with 192GB of VRAM I wonder if this or something like it could be good enough for high end production rendering. Pixar and co still use CPU clusters for all of their final frame rendering, even though the task is ostensibly a better fit for GPUs, mainly because their memory demands have usually been so far ahead of what even the biggest GPUs could offer.
Much like with AI, Nvidia has the software side of GPU production rendering locked down tight though so that's just as much of an uphill battle for AMD.
One missed opportunity from the game streaming bubble would be a 20-or-so player game where one big machine draws everything for everybody and streams it.
It would immediately prevent several classes of cheating. No more wallhacks or ESP.
Ironically the main type that'd still exist would be the vision-based external AI-powered target-highlighting and aim/fire assist.
The display is analysed and overlaid with helpful info (like enemies highlighted) and/or inputs are assisted (snap to visible enemies, and/or automatically pull trigger.)
Stuff like this is still of interest to me. There are some really compelling game ideas that only become possible once you look into modern HPC platforms and streaming.
My son and I have wargamed it a bit. The trouble is that open world and other complex single-player games rely on a huge box of tricks for conserving RAM, and those tricks compete with simply having a huge amount of RAM. It's not clear the big SMP machine with a huge GPU really comes out ahead in terms of creating a revolution in gaming.
In the case of Stadia, however, failing to develop this was like a sports team not playing any home games. One way of thinking about the current crisis of the games industry and VR is that building 3D worlds is too expensive, and a major part of that expense is all the shoehorning tricks the industry depends on. Better hardware for games could be about lowering development cost rather than making fancier graphics, but that tends to be a non-starter with companies whose core competence is getting 1,000 highly paid developers to struggle with difficult tools; the idea that you could do the same with 10 ordinary developers is threatening to them.
I am thinking beyond the scale of any given machine and traditional game engine architectures.
I am thinking of an entire datacenter purpose-built to host a single game world, with edge locations handling the last mile of client-side prediction, viewport rendering, streaming and batching of input events.
We already have a lot of the conceptual architecture figured out in places like the NYSE and CBOE: processing hundreds of millions of events in less than a second on a single CPU core, against one synchronous view of some world. We can do this with insane reliability and precision day after day. Many of the technology requirements that emerge from the single-instance WoW path approximate what we have already accomplished in other domains.
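The exchange-style pattern described above can be sketched very roughly: a single thread drains an ordered queue of input events and applies each one to a single authoritative world state, so ordering the queue is the only synchronization needed. This is a toy illustration, not any exchange's actual engine; the `World` class and event shapes are made up for the example.

```python
from collections import deque

class World:
    def __init__(self):
        self.players = {}  # player_id -> (x, y) position

    def apply(self, event):
        # Deterministic state transition: replaying the same event log
        # always reproduces the same world.
        kind, player_id, payload = event
        if kind == "join":
            self.players[player_id] = payload        # spawn position
        elif kind == "move":
            x, y = self.players[player_id]
            dx, dy = payload
            self.players[player_id] = (x + dx, y + dy)

def run(world, events):
    # Single core, no locks: the queue's order *is* the synchronization.
    queue = deque(events)
    while queue:
        world.apply(queue.popleft())
    return world

world = run(World(), [
    ("join", 1, (0, 0)),
    ("move", 1, (3, 4)),
])
print(world.players[1])  # (3, 4)
```

The real systems add batching, replication, and sub-microsecond plumbing around this core, but the single-writer loop over one synchronous state is the essential shape.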
EVE online is more or less the closest to this so far, so it may be worth learning lessons from them (though I wouldn't suggest copying their approach: their stackless python behemoth codebase appears to contain many a horror). It's certainly a hard problem though, especially when you have a concentration of large numbers of players (which is inevitable when you create such a game world).
The question, though, is how you build something that complex without it becoming a horror, and whether stackless Python is really the culprit versus anything else they could have built it in.
I'd imagine ray tracing is a bit easier to parallelize over lots of older cards. The computations aren't as heavily linked and are more fault tolerant, so I doubt anyone is paying H100-style premiums.
The computations are easily parallelized, sure, but the data feeding those computations isn't easily partitioned. Every parallel render node needs as much memory as a lone render node would, and GPUs typically have nowhere near enough for the highest of high end productions. Last I heard they were putting around 128GB to 256GB of RAM in their machines and that was a few years ago.
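The point above can be made concrete with a toy example: renders parallelize cleanly in image space (each tile of pixels is independent), but a ray from any tile can hit any object, so every worker still needs the entire scene resident. The scene and the trivial "renderer" here are purely illustrative, not how a production ray tracer actually works.

```python
# Every worker must hold all of this, no matter which tile it owns.
SCENE = [
    {"id": "sphere", "x": 2},
    {"id": "plane",  "x": 7},
]

def render_tile(scene, x_range):
    # Stand-in for tracing rays: a pixel "hits" whichever object
    # shares its x coordinate. Note the full scene is an argument --
    # we cannot hand a worker only "its" slice of the geometry,
    # because we don't know in advance which objects its rays touch.
    hits = {}
    for x in x_range:
        for obj in scene:
            if obj["x"] == x:
                hits[x] = obj["id"]
    return hits

# Two "workers", each owning one tile -- each passed the full scene.
left  = render_tile(SCENE, range(0, 5))   # {2: 'sphere'}
right = render_tile(SCENE, range(5, 10))  # {7: 'plane'}
print(left, right)
```

Doubling the worker count halves the pixels per worker but leaves each worker's memory footprint at the full scene size, which is why per-node RAM, not compute, is the binding constraint.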
Pixar is paying a massive premium; they are probably using an order of magnitude or two more CPUs than they would if they could use GPUs. Using a hundred CPUs in place of a single H100 is a greater-than-H100-style premium.
They currently only use the GPU mode for quick iteration on relatively small slices of data though, and then switch back to CPU mode for the big renders.
It's probably implemented way differently, but I worry about the driver suitability. Gaming benchmarks at least perform substantially worse on AI accelerators than even many generations old GPUs, I wonder if this extends to custom graphics code too.