The amount of video memory (VRAM) on a graphics card has become a hot topic, even if you only skim the tech headlines on occasion. The newest games can demand more memory than 8 GB provides, especially when played at high resolutions with the graphics quality set to high or maximum. AMD has also been emphasising how much more VRAM its cards carry compared to Nvidia's, whose most recent models have drawn criticism for shipping with just 12 GB.
So, are games really using that much memory and, if so, what exactly about their internal workings makes it necessary? Let's look under the hood of modern 3D rendering to see what goes on inside these graphics processors.
Games, graphics, and gigabytes
Before we dig into graphics, games, and VRAM, it's worth quickly reviewing the fundamentals of how 3D rendering is normally done. If any of the terms are unfamiliar, feel free to read over the previous articles we've written on this subject.
All of this can be put into perspective by saying that 3D graphics is 'only' a lot of arithmetic and data management. For the arithmetic to run quickly, the data must be kept as close to the graphics processor as possible. Cache, the name for the small pools of high-speed memory built into every GPU, is only large enough to hold the data needed for the calculations currently being performed.
There is simply too much data to keep it all in cache, so the rest is held in video memory, or VRAM. This is similar to normal system memory but tailored for graphics workloads. Ten years ago, the most expensive desktop graphics cards carried 6 GB of VRAM on their circuit boards, while the bulk of GPUs made do with 2 or 3 GB.
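To get a feel for why a few dozen megabytes of cache can't come close to holding a frame's working set, here is a rough back-of-the-envelope calculation. The buffer counts and formats are assumptions for the sake of the example, not measurements from any particular game:

```python
# Rough, illustrative estimate of how quickly per-frame data outgrows GPU
# cache. Buffer counts and pixel formats here are assumptions, not figures
# captured from a real game.

def buffer_size_mb(width, height, bytes_per_pixel):
    """Size of a single full-screen render target in megabytes."""
    return width * height * bytes_per_pixel / (1024 ** 2)

# A typical modern renderer keeps several full-screen buffers alive at
# once: colour, normals, depth, motion vectors, and so on. We assume five
# targets at 4 bytes per pixel each.
targets = 5
at_1080p = targets * buffer_size_mb(1920, 1080, 4)
at_4k    = targets * buffer_size_mb(3840, 2160, 4)

print(f"1080p render target set: {at_1080p:.0f} MB")
print(f"4K render target set:    {at_4k:.0f} MB")
# Even a generous 64 MB of GPU cache can't hold one of these sets, let
# alone the gigabytes of textures and geometry behind them - hence VRAM.
```

Even this simplified tally, which ignores textures and meshes entirely, lands well beyond any GPU's on-chip cache at 4K.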
The devil is in the detail
To precisely quantify the amount of graphics card memory a game uses, we turned to Microsoft's PIX, a DirectX debugging tool that gathers data and displays it for analysis. We were only interested in recording memory metrics, but PIX can also capture a single frame and dissect every command sent to the GPU, showing how long each takes to process and what resources it requires.
The majority of VRAM monitoring programmes only report the amount of local GPU memory that the game, and in turn the GPU's drivers, have allocated. PIX, by contrast, tracks three separate figures: Local Budget, Local Usage, and Local Resident. The first is how much video memory a Direct3D application is permitted to allocate, a value the operating system and drivers adjust continually.
Local Resident measures the amount of VRAM consumed by so-called resident objects, but Local Usage is the value that interests us most: it records how much video memory the game is actually attempting to utilise. Games must stay under the Local Budget limit to avoid a variety of issues, the most frequent being a brief programme stall until there is adequate budget once again.
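The relationship between those counters can be sketched as a simple per-frame check. The numbers and the helper function below are hypothetical; a real application would query these values through DXGI's QueryVideoMemoryInfo rather than hard-coding them:

```python
# A minimal sketch of the budget check implied by PIX's counters. The
# figures below are invented for illustration; real applications obtain
# them from the OS (e.g. via DXGI's QueryVideoMemoryInfo) every frame.

def over_budget(local_usage_mb, local_budget_mb):
    """True when the app is trying to use more VRAM than the OS allows."""
    return local_usage_mb > local_budget_mb

# The OS and drivers revise the budget continually, for instance when
# another application claims VRAM, so a game must re-check each frame.
budget = 7200   # Local Budget: what the OS says we may allocate (MB)
usage  = 6900   # Local Usage: what the game is attempting to use (MB)

if over_budget(usage, budget):
    print("Over budget: expect a stall until the driver frees space")
else:
    print(f"Within budget, {budget - usage} MB of headroom")
```

When the check fails, a game's realistic options are to stall, evict resources, or drop quality, which is exactly the behaviour described above.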
Better lighting demands even more memory
Like many of the video games evaluated for this article, Hogwarts Legacy gives players the option to employ ray tracing to determine the final appearance of reflections, shadows, and ambient occlusion, the last of which is a fundamental component of global illumination that ignores individual light sources.
In contrast, the most recent version of Cyberpunk 2077 now offers the option to render the bulk of its lighting with path tracing, combined with a technique called spatiotemporal reservoir resampling (ReSTIR).
All of these rendering tricks come at a cost, and in more ways than one. To determine exactly what the rays are intersecting, ray tracing generates enormous quantities of extra data in the form of bounding volume hierarchies (BVHs), along with still more buffers.
Using 4K and the same graphics settings as before, but this time with every ray tracing option enabled or set to its highest level, the VRAM loads change significantly.
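A rough tally hints at where that extra load comes from: each ray-traced effect typically writes into its own full-resolution buffer, and denoisers usually keep history copies on top. The list of buffers and their formats below are assumptions for the example, not values captured from a game:

```python
# Illustrative tally of the additional full-resolution buffers that
# ray-traced effects can add at 4K. The effects, formats, and the x2
# history factor are assumptions, not data captured from a real title.

WIDTH, HEIGHT = 3840, 2160

# Hypothetical per-effect buffers: (name, bytes per pixel).
rt_buffers = [
    ("reflections",         8),   # e.g. half-float RGBA
    ("shadows",             2),
    ("ambient occlusion",   2),
    ("global illumination", 8),
]

total_mb = 0.0
for name, bpp in rt_buffers:
    # Assume each denoiser keeps a history copy, doubling the footprint.
    mb = WIDTH * HEIGHT * bpp * 2 / (1024 ** 2)
    total_mb += mb
    print(f"{name:>19}: {mb:6.1f} MB")
print(f"{'total':>19}: {total_mb:6.1f} MB on top of the base render")
```

Several hundred megabytes of extra buffers, stacked on top of the BVH data and the ordinary render targets, goes a long way towards explaining the jump in VRAM use once every ray tracing option is switched on.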