NVIDIA DLAA Review: games look better on your PC

One of NVIDIA’s bets with its graphics cards has been to aim them beyond the video game market. If the push into general-purpose computing paid off a few years ago and ended up benefiting PC gaming, today it is the world of AI that is bringing benefits such as DLSS and, especially, its variant DLAA.

When generating a 3D scene in real time, our PC’s graphics card projects the scene onto a screen made up of tiny blocks called pixels. The consequence is that diagonal lines are drawn as stair-stepped blocks, producing the familiar jagged-edge, or “saw tooth,” effect.
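The staircase effect is easy to reproduce. The following minimal sketch (all names are illustrative, not from any real renderer) rasterizes an ideal diagonal line onto a small character grid; because each pixel is either fully on or off, the continuous line becomes discrete steps:

```python
# Minimal sketch: rasterizing a diagonal line onto a pixel grid.
# Each pixel is either fully on or off, so the line becomes a staircase.
WIDTH, HEIGHT = 12, 6

grid = [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]

# Ideal line: y = 0.5 * x. Sampling one value per pixel column
# snaps the continuous line to discrete rows -- the "saw teeth".
for x in range(WIDTH):
    y = int(0.5 * x + 0.5)  # nearest-pixel snap
    if y < HEIGHT:
        grid[HEIGHT - 1 - y][x] = "#"

for row in grid:
    print("".join(row))
```

Printed out, the `#` cells form the ladder-block pattern described above: pairs of columns share the same row, then jump up one pixel.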

The easiest way to solve it is to increase the rendering resolution of the image, but that demands a lot of GPU power, which is why anti-aliasing methods have been used since the dawn of real-time graphics to reduce this highly annoying artifact. Games today still rely on this kind of technique to deal with the problem.
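The brute-force approach mentioned above, rendering at a higher resolution and averaging down (supersampling), can be sketched like this. The `on_line` helper and the grid sizes are assumptions for illustration only:

```python
# Minimal sketch of supersampling: take several sub-samples inside each
# pixel and average their coverage into an intermediate grey level.
WIDTH, HEIGHT, SAMPLES = 12, 6, 4  # 4x4 sub-samples per pixel

def on_line(px, py):
    # "Inside" test for an ideal line y = 0.5 * x with some thickness.
    return abs(py - 0.5 * px) < 0.5

shades = " .:#"  # coverage 0..1 mapped to four grey levels
rows = []
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        hits = 0
        for sy in range(SAMPLES):
            for sx in range(SAMPLES):
                hits += on_line(x + (sx + 0.5) / SAMPLES,
                                y + (sy + 0.5) / SAMPLES)
        coverage = hits / SAMPLES ** 2
        row += shades[min(3, int(coverage * 4))]
    rows.append(row)

for row in reversed(rows):  # print with y increasing upwards
    print(row)
```

Pixels only partially crossed by the line now get intermediate shades instead of a hard on/off edge, which is exactly why the staircase softens, and also why it costs 16 samples per pixel here instead of one.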

Temporal Anti-Aliasing

In order to understand DLAA, we must first understand how Temporal Anti-Aliasing, or TAA, works, since DLAA evolves from it. And how does it work? In a way similar to texture interpolation, but only on the edges that suffer from the jagged problem: the algorithm takes the color values of nearby pixels and builds a transition that makes the affected edge look smoother, so the saw teeth disappear, at least apparently.
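The spatial side of that blending can be sketched with a toy one-dimensional "scanline" crossing a hard edge; the helper below is a simple neighborhood average, an assumption standing in for the more elaborate filters real TAA implementations use:

```python
# Minimal sketch: soften a hard edge by blending each pixel with its
# neighbors, creating the transition values that hide the "saw tooth".
def smooth_edge(row):
    """row: pixel intensities along a scanline crossing an edge."""
    out = []
    for i, p in enumerate(row):
        left = row[i - 1] if i > 0 else p
        right = row[i + 1] if i < len(row) - 1 else p
        out.append((left + p + right) / 3.0)
    return out

# A hard 0 -> 1 edge gains intermediate values after blending.
print(smooth_edge([0.0, 0.0, 1.0, 1.0]))
```

The abrupt 0-to-1 jump becomes a ramp through intermediate intensities, which is the "transition" described above.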

The problem is that doing this with the information of the current frame alone is not very precise, which is why information from the previous frame is also used. For that, a temporal buffer is kept: each object on screen is given an ID that helps the GPU track its speed and movement, so that data from previous frames can be reprojected and the anti-aliasing performed more accurately.
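The temporal side, reprojecting the previous frame and blending it with the current one, can be sketched as follows. The one-dimensional frame, the per-pixel `motion` list (playing the role of the velocity buffer) and the blend weight are illustrative assumptions:

```python
# Minimal sketch of TAA temporal accumulation over a 1-D "image".
ALPHA = 0.1  # weight of the new frame; the rest comes from history

def taa_resolve(current, history, motion):
    """current, history: lists of pixel values; motion[x]: how many
    pixels that point moved since the previous frame."""
    out = []
    for x, cur in enumerate(current):
        # Reproject: fetch where this pixel was in the previous frame.
        src = x - motion[x]
        if 0 <= src < len(history):
            prev = history[src]
            out.append(ALPHA * cur + (1.0 - ALPHA) * prev)
        else:
            out.append(cur)  # no valid history: fall back to current
    return out

# Usage: a flickering edge pixel is pulled toward its history value.
print(taa_resolve([0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 0.5, 1.0], [0, 0, 0, 0]))
```

Because most of each output pixel comes from accumulated history, edge pixels converge to a stable in-between value across frames instead of flickering, which is what makes TAA both cheap and effective.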

So Temporal Anti-Aliasing is, so far, the most efficient way to avoid saw teeth, but NVIDIA wanted to give it a twist with DLAA.

What is NVIDIA DLAA?

As the name suggests, DLAA is anti-aliasing through deep learning, making use of the capabilities for these algorithms provided by the Tensor Cores of the RTX 2000 and RTX 3000 series gaming graphics cards.

The first advantage is the ability to recognize which pixels have changed from one frame to the next, so that the GPU wastes less time running an algorithm equivalent to TAA. This translates into fewer milliseconds to generate a frame of the same quality, and therefore a higher FPS rate in games. In this it resembles DLSS, although it has its differences, as we will see later.

But the biggest advantage of DLAA is that, being a deep learning algorithm, it can be trained to reproduce the nuances of images with higher-quality anti-aliasing. Where DLSS trains its algorithm on higher-resolution images, DLAA trains on high-quality jaggy-removal results, which the network learns to observe and then apply at a fraction of the power that would otherwise be needed.

DLAA derives from DLSS, but it is not the same

The big difference between DLSS and DLAA is that the latter is not designed to generate higher-resolution images: it keeps the resolution of the original sample and focuses on improving its image quality. For now DLAA has been applied in very few games and is still very green, but not every game needs a resolution boost, and for many users image quality is preferable to resolution.

The question here would be: what do you prefer, more pixels or more “beautiful” pixels? Many games use image post-processing techniques, which take the final buffer before it is sent to the monitor and apply a series of filters and graphical effects. DLAA can learn from the existence of these and apply what it learns to improve the appearance of the final image we see on the monitor.
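The post-processing idea described above, a chain of filters applied to the final buffer, can be sketched like this. The two filters (a simple tone-map and a vignette on a one-dimensional buffer) and their names are illustrative assumptions, not any real game's pipeline:

```python
# Minimal sketch of a post-processing chain applied to the final
# frame buffer just before it is sent to the monitor.
def tone_map(pixels):
    # Simple Reinhard-style curve: compress bright values into 0..1.
    return [p / (1.0 + p) for p in pixels]

def vignette(pixels):
    # Darken pixels toward the edges of this (1-D, toy) buffer.
    n = len(pixels)
    return [p * (1.0 - 0.5 * abs(2 * i / (n - 1) - 1))
            for i, p in enumerate(pixels)]

POST_CHAIN = [tone_map, vignette]

def post_process(frame):
    for effect in POST_CHAIN:
        frame = effect(frame)
    return frame

print(post_process([0.0, 1.0, 3.0, 1.0, 0.0]))
```

Each filter consumes the previous filter's output, which is why these passes are cheap to chain; an anti-aliasing pass like DLAA slots into the same stage of the pipeline.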

Today’s post-processing effects in games run as compute shaders, but deep learning algorithms have long been used for similar tasks in graphic design and video editing programs. Anti-aliasing is itself a post-processing effect, so it is not surprising that NVIDIA has developed this technique.

DLAA requires training

Being a deep learning algorithm, the system has to learn a series of visual patterns from each game in order to perform inference and apply DLAA correctly. Let’s not forget that each video game has its own visual style, and applying the same inference model to every game could cause visual problems bigger than the ones it solves.

However, most games share a number of common visual problems that DLAA could solve by learning to locate and fix them. In that case, the algorithm would not learn to copy a game’s visual style, but to correct the errors inherited from the use of certain graphic techniques, and this is one of the advantages of the training.

The second advantage is the enormous computing power of the Tensor Cores, almost an order of magnitude above that of the SIMD ALUs (the CUDA cores), so these algorithms run very fast, and, as we said before, the idea is to achieve the highest image quality and frame rate at the same time.

by Abdullah Sam
