Death Stranding PC: how next-gen AI upscaling beats native 4K

The concept of native resolution is becoming less and less relevant in the modern era of games; instead, image reconstruction techniques are coming to the fore. The idea is remarkably simple: in the age of the 4K display, why expend so much GPU power painting 8.3m pixels per frame when that processing power can be directed at fewer, higher quality pixels, interpolated up to an ultra HD output? Numerous techniques have been trialled, but Kojima Productions’ Death Stranding is an interesting example. On PS4 Pro, it features one of the best checkerboarding implementations we’ve seen. Meanwhile, on PC, we see a ‘next-gen’ image reconstruction technique – Nvidia’s DLSS – which delivers image quality better than native resolution rendering.

Checkerboard rendering as found in Death Stranding is non-standard and is the result of months of intensive work by Guerrilla Games during the production of Horizon Zero Dawn. Curiously, it does not use PS4 Pro’s bespoke checkerboarding hardware. Base resolution is 1920×2160 in a checkerboard configuration, with ‘missing’ pixels interpolated from the previous frame. Importantly, Decima does not sample a pixel from its centre, but from its corners over two frames. By combining these results over time in a specialised resolve similar to the game’s TAA, plus a unique FXAA pass, a 4K pixel grid is resolved and the perception of much higher resolution is achieved. According to presentations from Guerrilla, of the engine’s 33.3ms per-frame render budget, 1.8ms is spent on the checkerboard resolve.
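To make the basic idea concrete, here is a deliberately naive sketch of a checkerboard interleave in Python with NumPy. It is not Decima's resolve – Guerrilla's version samples pixel corners, reprojects along motion vectors and blends in a TAA-like pass – but it shows the core trick: each frame renders only half the pixels of the full grid in an alternating chess-board pattern, and the gaps are filled from the previous frame.

```python
import numpy as np

def checkerboard_resolve(curr, prev, frame_index):
    """Naive checkerboard interleave (illustrative only, not Decima's resolve).

    curr / prev: full-resolution 2D buffers where each frame has only
    rendered the pixels on 'its' half of a chess-board pattern; the
    pattern flips every frame, so between two frames every pixel of the
    full grid has been shaded once.
    """
    h, w = curr.shape
    yy, xx = np.indices((h, w))
    # Pixels where (x + y + frame) is even were rendered this frame;
    # the rest are taken from the previous frame's complementary half.
    rendered_this_frame = (xx + yy + frame_index) % 2 == 0
    return np.where(rendered_this_frame, curr, prev)
```

A real implementation would reproject `prev` along per-pixel motion vectors before the blend, rather than reading it in place; this sketch assumes a static image purely to keep the interleave visible.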

Although it upsamples from a much lower resolution, DLSS works differently. There is no checkerboard, no pixel-sized holes to fill. Rather, it works more like accumulation temporal anti-aliasing, where multiple frames from the past are queued up and information from those frames is used to smooth lines and add detail into an image – but instead of adding detail at the same resolution as TAA does, it generates a much higher output resolution. Alongside those past frames, per-object and per-pixel motion vectors are integral to DLSS working properly. How all of this information is combined into the upscaled image is decided by an AI model running on the GPU, accelerated by the tensor cores in an RTX GPU. So while DLSS has fewer base pixels to work from, it has access to a vast amount of compute power to help the reconstruction process.
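The accumulation idea can be sketched in a few lines. This toy Python/NumPy example upsamples the current low-resolution frame, reprojects the output-resolution history buffer along motion vectors, and blends the two – the fixed blend weight here stands in for the learned, per-pixel decisions DLSS's neural network makes, which are proprietary. All function and parameter names are our own for illustration.

```python
import numpy as np

def temporal_accumulate(low_res, history, motion, alpha=0.1):
    """Toy temporal accumulation sketch (not DLSS's actual algorithm).

    low_res: (h, w) current frame at render resolution.
    history: (H, W) accumulated buffer at output resolution.
    motion:  (H, W, 2) per-pixel motion vectors in output-resolution
             pixels, pointing from this frame back to the previous one.
    """
    H, W = history.shape
    h, w = low_res.shape
    yy, xx = np.indices((H, W))

    # Upsample the new frame (nearest neighbour stands in for the
    # jittered sample placement a real resolver would use).
    up = low_res[(yy * h) // H, (xx * w) // W]

    # Reproject history along the motion vectors (nearest, clamped).
    py = np.clip((yy + motion[..., 1]).round().astype(int), 0, H - 1)
    px = np.clip((xx + motion[..., 0]).round().astype(int), 0, W - 1)
    reprojected = history[py, px]

    # Blend mostly history with a little new data: detail accumulates
    # over many frames. DLSS replaces this fixed blend (and the crude
    # reprojection above) with a trained model on the tensor cores.
    return (1 - alpha) * reprojected + alpha * up
```

Run every frame, feeding the result back in as the next frame's `history`, this converges on an image with far more effective samples per pixel than any single low-resolution frame contains – which is why the motion vectors the article mentions are essential: without them, the history cannot be aligned under camera or object movement.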
