
A source with knowledge of the matter gave us some early information about an "unspoken API," which we strongly infer is DirectX 12.

Darryl Linington from Notebookcheck.net writes, "The backlash around Nvidia’s AI push and DLSS 5 has opened a broader question in game development. Beyond performance and image quality, veteran artists are now weighing what AI-driven rendering means for authorship and visual control. If a system can add or reinterpret detail after the fact, the issue is no longer just technical. It becomes a question of how much of the final image still belongs to the people who built it."
The latest GeForce driver brings DLSS 4.5 Multi Frame Generation 5x and 6x, alongside Dynamic Multi Frame Generation, to RTX 50-series GPUs. The new modes raise the number of interpolated frames to 4 and 5 (inserted between every two rendered frames), further reducing reliance on the CPU.
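The ratios described above can be sketched with simple arithmetic. This is a rough back-of-the-envelope illustration, not anything from the driver itself: it assumes an Nx mode shows one rendered frame followed by N-1 generated frames, and ignores display refresh caps and pacing.

```python
# Back-of-the-envelope math for Multi Frame Generation output rates.
# In an Nx mode, each rendered frame is accompanied by (N - 1) generated
# frames, so the displayed rate is roughly rendered fps x N
# (capped in practice by the display's refresh rate).

def mfg_output_fps(rendered_fps: float, factor: int) -> float:
    """Approximate displayed frame rate for an Nx frame-generation mode."""
    return rendered_fps * factor

print(mfg_output_fps(60, 5))  # 5x mode: 60 rendered -> ~300 displayed
print(mfg_output_fps(60, 6))  # 6x mode: 60 rendered -> ~360 displayed
```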
Big corp bowing down to another big corp is nothing more than them helping each other. But try it in any games and it doesn't work.
I don't mind frame gen but only use it if I'm already >70fps without it. It is kinda nice but if I see any visual artifacts I will turn it off. Whenever I'm playing games on my 120Hz LG C3 I will almost never use it because frame rates >120fps look really bad. I think spatial super sampling is a far more interesting and beneficial tech than frame gen. Boosting 30fps to 60fps with framegen is just garbage.
TVs were doing this 15 years ago with their telenovela effect... Idk how anyone can play with this on.
There is definitely input lag there and artifacts.
Frame gen just has too much latency and too many visual glitches for me; I don't think I can ever use it for most games. I'd compare with it on and off, and it's a world of difference in the feel. I need the very least input lag in my gaming. Companies should rely on actual optimization. As for potato hardware, I suppose it could have its use.
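The latency complaint above has a simple structural explanation: an interpolated frame between frames N and N+1 cannot be shown until frame N+1 has finished rendering, so display of frame N is delayed by at least one render interval. A minimal sketch of that lower bound, assuming pure interpolation with no other pipeline delays:

```python
# Rough lower bound on the extra latency frame interpolation adds:
# the interpolator must wait for the NEXT rendered frame before it can
# show anything in between, so each frame is held back by at least one
# full render interval. Real pipelines add further queueing on top.

def added_latency_ms(rendered_fps: float) -> float:
    """Minimum extra display latency (ms) from waiting for the next rendered frame."""
    return 1000.0 / rendered_fps

print(f"{added_latency_ms(30):.1f} ms")   # 30 fps base: at least ~33.3 ms extra
print(f"{added_latency_ms(120):.1f} ms")  # 120 fps base: at least ~8.3 ms extra
```

This is why boosting a low base frame rate (e.g. 30 fps) feels so much worse than boosting an already-high one: the mandatory wait shrinks as the rendered frame rate rises.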

WTMG's Jordan Hawes: "With the advent of NVIDIA's DLSS 5 tools, and the whole debacle surrounding AI usage in AAA gaming, is this new push an opportunity for smaller studios to showcase they are the ones vouching for artistic integrity in the gaming industry?"
They already are. Indie studios are the only developers that constantly strive to publish innovative and experimental experiences. There has been little to no art in AAA gaming, with just a few exceptions.
Indie studios have been showcasing their creative superiority and bravery over AAA studios and releases for a while now.
Personally, I have zero interest in AI slop in any of my entertainment, so regardless of what Sony, Ubisoft, MS, EA, etc. believe the future is, I'm just not gonna touch any of that stuff.
One more thing in a long list of things that already give indie games an advantage.
In reality, a dev having a simplistic tech stack does not really impact the end-user experience. What matters is whether the game is good and worth playing. In other words, some cooks may care whether 2 or 3 eggs were used to make a cake, but the person eating it doesn't. And in the case of DLSS 5, the chef is solely responsible for the recipe and how it's mixed together.
this is nice to know
Should be marked as a rumour really.
-The report is about an "unspoken API" that they assume is DirectX 12
-Even if the API allows it, nVidia have previously taken steps to prevent people using AMD and nVidia cards in conjunction (it was possible to use an AMD primary card with an nVidia card as a dedicated PhysX processor, until nVidia put a stop to it).
This is great news. My only concern about this is how well multiple GPUs can run together in synchronization.
In the words of Hannah Montana, "It's the best of both worlds." Haha, jokes aside, if this rumour is true then Mantle should just go away forever. Being able to mix two cards is revolutionary. You could have one card that is good at heavy shadows and one that is good at dynamic lighting and textures, and get the best of both worlds.
I have never tried anything like this and have no idea what the benefits would be. I understand that there is more GPU power, but how do you combine that power in the same app, and then send the result to the same output? How does the app know to split the work?
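On the "how does the app know to split the work" question: under DirectX 12's explicit multi-adapter model, the application itself decides how to divide frames between GPUs rather than the driver. A toy Python sketch of one common scheme, alternate-frame rendering (AFR), with every name here hypothetical and the real GPU submission and cross-adapter copies omitted:

```python
# Toy sketch of alternate-frame rendering (AFR): the application assigns
# each frame to a GPU round-robin, then presents results in order on one
# output. A real DirectX 12 explicit multi-adapter app would enumerate
# the physical adapters, record command lists per adapter, and copy each
# finished frame to the GPU that drives the display.

def assign_frames_afr(frame_ids, gpu_count=2):
    """Map each frame index to a GPU index, round-robin."""
    return {frame: frame % gpu_count for frame in frame_ids}

schedule = assign_frames_afr(range(6))
print(schedule)  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
```

The synchronization concern raised above lives exactly in this layer: the app must pace the two queues so frames still arrive at the display in order.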