Last week at Develop, I heard Doug Binks from Intel deliver a talk on dynamic resolution rendering. It’s a talk that I heard was well received at GDC 2011 in San Francisco in February, but I didn’t attend that conference so this was my first chance to hear it. As promised in my post yesterday accompanying the short video interview I did with Doug, here’s a more in-depth look at the concept.
The basic idea behind the presentation was simple. Instead of rendering a game at the same resolution throughout, we can change the resolution dynamically to strike a balance between performance and quality. When the player is moving quickly through a 3D scene, for example, the resolution can be dropped significantly without the player noticing. When the player is standing still, admiring the surroundings, that might be a good time to increase the rendering quality at the expense of rendering speed.
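In practice, the trick is usually to allocate the render target once at maximum size and draw each frame into a sub-rectangle of it, which is then stretched to the back buffer when the frame is presented. Here's a minimal sketch of how that per-frame viewport might be computed; the function name, the alignment parameter, and the clamping range are my own illustrative assumptions, not details from the talk.

```python
def dynamic_viewport(max_w, max_h, scale, align=2):
    """Compute the sub-rectangle of a fixed-size render target to draw into.

    The render target is allocated once at (max_w, max_h); each frame the
    scene is rendered into the top-left (w, h) region, and only that region
    is stretched to the back buffer at present time. `align` keeps the
    dimensions even, which is an assumption of this sketch rather than
    anything stated in the presentation.
    """
    scale = max(0.1, min(scale, 1.0))       # clamp to a sane range
    w = int(max_w * scale) // align * align
    h = int(max_h * scale) // align * align
    return max(align, w), max(align, h)
```

Because the target itself never changes size, dropping or raising the resolution between frames costs nothing beyond the change in pixels shaded.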
On the PC platform, the first thing gamers often have to do is to select the screen resolution. “I don’t believe that’s necessary any more,” said Binks. There are four reasons to use dynamic resolution rendering:
- Firstly, to make sure the user interface is as clear as possible. The text needs to be rendered at the resolution of the screen. Sometimes you might want to step down the 3D scene quality so the user can focus on the user interface in the foreground, especially where the interface is extremely busy, as in World of Warcraft.
- Secondly, you can use dynamic resolution rendering to hit your performance objective. Games are becoming increasingly pixel-bound, said Binks, with games spending up to 30% of their time in post-processing. By decreasing resolution, performance can be improved.
- Thirdly, quality is a key motivation. You can’t guarantee what hardware a PC game will run on, which means the quality can be unpredictable. Using dynamic resolution rendering, you can make adjustments at runtime to ensure the gameplay is fluid and smooth. When performance is sufficient, you can use anti-aliasing and super-sampling to improve quality.
- Finally, as mobile gaming becomes predominant, it’s important to optimise power consumption, especially when running off the battery. Performance settings on the hardware might throttle the processor frequency, which means games need to be able to adapt to that if they are to remain playable.
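Of these motivations, the performance objective is the easiest to sketch in code: measure how long the last frame took, and nudge the resolution scale down when you miss the target and back up when there's headroom. The following is a toy feedback loop under my own assumptions; the target, step size, headroom, and clamping values are illustrative, not figures from the talk.

```python
def update_scale(scale, frame_ms, target_ms=16.7, step=0.05,
                 lo=0.5, hi=1.0, headroom=0.9):
    """One step of a simple resolution-scale feedback loop.

    If the last frame overshot the target time, reduce the scale; if it
    came in comfortably under (inside the headroom margin), raise it.
    The dead zone between the two avoids oscillating every frame.
    """
    if frame_ms > target_ms:
        scale -= step
    elif frame_ms < target_ms * headroom:
        scale += step
    return max(lo, min(hi, scale))
```

A real implementation would average frame times over a window and change resolution less eagerly, but the shape of the loop is the same.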
To show the concept in action, Intel has created a demonstration, which you can now download here. The demo scene is quite complex, with over a million polygons in most of the scenes. Binks said that games with lots of post-processing and expensive particle physics will probably see better results than the demo does. As we saw at Develop, though, the demo provides a nice playground for experimenting with the concepts, including the effects of motion blur, temporal antialiasing (where odd and even frames are offset slightly to increase the number of pixels available), and supersampling (where the render target is larger than the screen and the resolution is scaled to match the performance criteria).
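The temporal antialiasing scheme described above can be sketched very simply: offset alternate frames by half a pixel, then blend the current frame with the previous one so that each output pixel effectively sees twice as many sample positions. A toy illustration, with the function names and the 50/50 blend weight being my own assumptions rather than details from the demo:

```python
def taa_jitter(frame_index):
    """Sub-pixel offset for this frame: even frames render unjittered,
    odd frames are shifted by half a pixel in x and y."""
    return (0.0, 0.0) if frame_index % 2 == 0 else (0.5, 0.5)

def taa_resolve(prev_pixel, cur_pixel, blend=0.5):
    """Blend the previous frame's pixel with the current one. With
    alternating half-pixel jitter, averaging the two frames doubles the
    number of distinct sample positions contributing to each pixel."""
    return prev_pixel * (1.0 - blend) + cur_pixel * blend
```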
It’s also possible to use more complex resolution schemes: keeping a laser beam pin-sharp, for example, but dropping the resolution dynamically when the scene explodes and the fill rate spikes, so that the desired performance is maintained.
Dynamic resolution rendering can sometimes introduce on-screen artifacts. A small change in the frame rate can lead to visible shifts in static parts of the image; this was particularly apparent along the edges of buildings set against the sky, which seemed to shimmer slightly. That said, these things are always more pronounced in the artificial environment of a demonstration and will rarely be distracting in gameplay, in my view. It’s worth keeping an eye out, though, to make sure any glitches are there by choice and not by accident.
Binks believes that dynamic resolution rendering is the natural solution to the problem of having PCs with such varied specifications out there. The player can have a better experience and optimal performance, and could perhaps even choose the frame rate they want in the game.
What do you think about this idea? Will it become the default in game designs, or are there significant barriers to adoption? You can read the slides from the presentation here, find Doug Binks here, and are always welcome to leave me a comment below.