Game creation is one thing, but AMD's CEO believes that AI is going to be increasingly used by developers to get games onto your screen without necessarily rendering everything.
Techniques to only render what is on screen have been a thing for decades.
This is kind of the opposite of that idea though. This is saying that not everything put on the screen needs to be computed by the game engine; some of the content can be inferred from a predictive model. What remains to be seen is whether that requires less computing power from the GPU.
Yes, but with DLSS we’re adding ML models to the mix, each of which has been trained on a different aspect:
Interpolating between frames
For instance, normally you might get 30 FPS, but the ML model has an idea of what everything between those frames should look like (based on what it has been trained on), so it can insert additional frames to boost your framerate to 60 FPS or more (a naive sketch of the idea appears after this list).
Upscaling (making the picture larger) - the GPU can render at a smaller resolution, which makes its job easier, while the ML model here has been trained to enlarge the image and fill in the missing pixels so that everything still looks good (see the second sketch below).
Optical Flow -
This ML model has been trained on motion: which objects/pixels move where, so that frame generation can be predicted more accurately (see the last sketch below).
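To make the frame-interpolation idea concrete, here's a rough Python sketch. It uses a naive linear blend in place of the trained network, so treat it purely as an illustration of the concept; the function names and the 30-to-60 FPS framing are my own assumptions, not how DLSS actually works:

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, t=0.5):
    """Synthesize an in-between frame by blending two rendered frames.

    A real frame-generation model replaces this naive linear blend with a
    trained network guided by motion vectors; this is only a sketch.
    """
    blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return blended.clip(0.0, 255.0).astype(np.uint8)

def double_framerate(frames):
    """Turn ~30 rendered FPS into ~60 displayed FPS: emit each real frame
    followed by one synthesized frame."""
    for frame_a, frame_b in zip(frames, frames[1:]):
        yield frame_a
        yield interpolate_frame(frame_a, frame_b)
    if frames:
        yield frames[-1]
```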
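For the upscaling step, here's a similarly hedged sketch: classical bicubic resizing via OpenCV stands in for DLSS's trained upscaler (which infers missing detail rather than just interpolating), and `render_frame` is a hypothetical callback for the game's renderer:

```python
import cv2
import numpy as np

def render_and_upscale(render_frame, internal_res=(1280, 720), output_res=(2560, 1440)):
    """Render at a lower internal resolution, then upscale to the output size.

    cv2.resize with bicubic interpolation is a classical stand-in for the
    trained upscaling network; DLSS instead infers the missing detail with
    an ML model fed by motion vectors and previous frames.
    """
    low_res = render_frame(internal_res)  # hypothetical renderer callback
    return cv2.resize(low_res, output_res, interpolation=cv2.INTER_CUBIC)

# Example with a synthetic "rendered" frame standing in for the game engine:
frame = render_and_upscale(
    lambda res: np.random.randint(0, 256, (res[1], res[0], 3), np.uint8))
print(frame.shape)  # (1440, 2560, 3)
```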
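And for optical flow, a conceptual stand-in using OpenCV's classical Farneback estimator. DLSS 3 actually uses dedicated optical-flow hardware plus the engine's motion vectors, so this only illustrates the general idea of flow-guided frame prediction:

```python
import cv2
import numpy as np

def farneback_flow(src_gray, dst_gray):
    """Classical dense optical flow (Farneback). Per OpenCV's convention,
    src[y, x] ~= dst[y + flow[y, x, 1], x + flow[y, x, 0]].
    """
    return cv2.calcOpticalFlowFarneback(
        src_gray, dst_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

def warp(frame, flow):
    """Sample `frame` along `flow` (backward warping)."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    return cv2.remap(frame, grid_x + flow[..., 0], grid_y + flow[..., 1],
                     cv2.INTER_LINEAR)

# To predict frame B from frame A: estimate flow from B's coordinates back
# into A, then pull pixels of frame A along that flow:
#   flow = farneback_flow(gray_b, gray_a)
#   predicted_b = warp(frame_a, flow)
```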
Not only that, but Nvidia can ship updated ML models trained on specific game titles through its driver updates.
While each of these could be accomplished with older techniques, I think the results we’re already seeing speak for themselves.
Edit: added some sources below and fixed up optical flow description.
https://www.digitaltrends.com/computing/everything-you-need-to-know-about-nvidias-rtx-dlss-technology/
https://www.youtube.com/watch?v=pSiczcJgY1s
Yes, a new approach to the same concept.
No, rendering at a smaller resolution and upscaling is not the same concept as only rendering what will end up in frame.
It has, yes; however, the techniques Carmack used in Doom’s engine probably don’t have much of an impact on something like Cyberpunk 2077.
The exact techniques, maybe not. But the fundamental approach of only rendering what you see has carried through since then; a rough sketch of the idea is below.
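As a minimal, assumption-laden sketch of "only render what you see": a crude field-of-view test that culls objects the camera can't see before they are ever drawn. Real engines use full frustum planes, occlusion queries, and, in Doom's case, BSP trees; every name and parameter here is illustrative:

```python
import numpy as np

def in_view(camera_pos, camera_forward, obj_pos, fov_deg=90.0, far=1000.0):
    """Keep an object only if it sits within the camera's field of view
    and draw distance; everything else is skipped before rendering."""
    to_obj = np.asarray(obj_pos, float) - np.asarray(camera_pos, float)
    dist = np.linalg.norm(to_obj)
    if dist > far:
        return False          # beyond the draw distance: culled
    if dist == 0.0:
        return True           # coincident with the camera: trivially visible
    cos_angle = float(np.dot(to_obj / dist, camera_forward))
    return cos_angle >= np.cos(np.radians(fov_deg / 2.0))

# Only the object in front of the camera survives the cull:
visible = [o for o in [(0, 0, 10), (0, 0, -10)]
           if in_view((0, 0, 0), np.array([0.0, 0.0, 1.0]), o)]
print(visible)  # [(0, 0, 10)]
```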
Right, so what is the point in bringing it up?
“Sony just released a new 150 megapixel mirrorless digital camera!”
“Cameras have been a thing since the 1800s…”