Today we’d like to introduce a new graphical feature coming to our games in the 1.38 update. This article is quite technical; we asked our programmers to help with the explanation, which is rather involved. Still, we thought at least some readers of our blog might be interested to learn how much work and research by our programmers goes into what happens inside the game engine. Beyond the technical details, we felt that providing context and explaining the performance trade-offs could be helpful for players of all levels. The explanation comes from Yaroslav, known as Cim, one of our brave and experienced programmers working on graphics improvements.
The TL;DR: Screen Space Ambient Occlusion is a new, reasonably high-performance technique that enhances the rendering of the game world. You don’t have to use it if its performance cost is not to your taste, but if you can afford a few frames per second for better depth perception and contact shadows, you may well like it. The effect can be subtle, working at a subconscious level, but once you get used to it there’s no going back. This is another step in our ongoing plan to improve lighting and shading, and it will be followed by new HDR light processing and the introduction of more normal-mapped surfaces in coming updates.
The method has its limitations and flaws, but it has been used in various AAA games over the past few years. Even with its imperfections, the technique genuinely helps the human visual system comprehend the scene, so we believe adding it to our truck sim technology will prove beneficial. We will likely add further shadow-computation methods in the future to complement or improve it.
Thanks to the vocal support of our fans, there is constant pressure to improve the overall look of our games. At the same time, there is always demand to make the games run faster. Alongside these competing requests from our users, our art department is constantly looking for new graphics tools to make the game richer and more enjoyable. Whenever we introduce a new graphical feature, we make sure to do it in a way that does not hurt performance for players on older hardware; we do not want to make the game unplayable for our current customers. SSAO can be disabled entirely, and several settings are available to trade its performance against quality.
Having our programmers work on the new SSAO/HBAO technology also required adjustments to our art-creation process. The art department reviewed every 3D model used in our games, and wherever artists had previously baked artificial shadows or darkening into a model, those models were altered. For some of the more complex models, the triangle count could even be reduced, improving rendering speed. In a way, we traded some of the manual work needed to make individual 3D models look stunning for an algorithmic rendering process that computes a shadow representation of the whole scene, helping to "root" objects like lamps, buildings, and vegetation in the landscape. So what exactly is SSAO, and how does it work?
Before we start, note that SSAO is an abbreviation for Screen Space Ambient Occlusion. The term covers a whole family of ambient occlusion (AO) methods and their variants that work in screen space (meaning they obtain all the information they need at computation time from data already rendered to the screen or stored in memory buffers). There is SSAO proper (the 2007 Crytek technique that gave the family its general name), MSSAO, HBAO, HDAO, GTAO, and many more, each employing different approaches with their own pros and cons. Our implementation is based on a horizon-based method known as GTAO, presented by Activision in a 2016 paper.
The ambient occlusion (AO) part of the term refers to the fact that we determine how much of the incoming ambient light (mostly sky light, though the computed occlusion can sometimes be applied to other light sources) is blocked at a particular point in the game world. Imagine you are standing on a flat plane and can see the entire sky above you: the occlusion is zero, and the sky lights the ground fully. Now imagine you are at the bottom of a well: you can see only a tiny patch of sky, so the sky is almost 100% occluded and contributes very little light, which is why it is quite dark down there. The amount of ambient occlusion at each point feeds into the lighting calculations and produces shadowing in holes, creases, and other "difficult" places.
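The well example can be put into numbers. This is a minimal Python sketch (not engine code): if the visible sky is approximated as an upward cone with a given half-angle, the occlusion is just the fraction of the hemisphere's solid angle that the cone does not cover. The function names are ours, for illustration only.

```python
import math

def sky_visibility(cone_half_angle_deg):
    """Fraction of the hemisphere's solid angle visible through an
    upward-facing cone with the given half-angle (90 deg = open plane)."""
    theta = math.radians(cone_half_angle_deg)
    # Solid angle of a cone: 2*pi*(1 - cos(theta)); hemisphere: 2*pi.
    return 1.0 - math.cos(theta)

def ambient_occlusion(cone_half_angle_deg):
    """Occlusion is simply the invisible fraction of the hemisphere."""
    return 1.0 - sky_visibility(cone_half_angle_deg)

print(ambient_occlusion(90.0))             # flat plane: 0.0, fully lit
print(round(ambient_occlusion(10.0), 3))   # narrow well: 0.985, nearly dark
```

The two printed values match the two scenarios in the text: an open plane has zero occlusion, while a deep well showing only a 10° cone of sky is about 98% occluded.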
Calculating occlusion accurately and in high detail takes a significant amount of time and resources. You would have to shoot rays from each point in all directions, check whether each ray reaches the sky or hits something, and combine the results. The more rays you shoot, the more accurate the result, but the more computation it requires. This can be done offline, when the game's maps are baked by the developer, and some engines and games use exactly that approach. However, it means you can bake ambient occlusion data only for static geometry, since vehicles and animated objects are not there at bake time.
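The offline ray-shooting idea described above can be sketched as a simple Monte Carlo estimate. This is a toy stand-in, not our baking pipeline: the `blocked` predicate, the uniform (rather than cosine-weighted) elevation sampling, and the `in_well` occluder are all simplifying assumptions of ours.

```python
import math
import random

def baked_ao(point, blocked, rays=256, seed=1):
    """Monte Carlo ambient occlusion at `point`: shoot random rays over
    the upper hemisphere and count how many hit an occluder."""
    rng = random.Random(seed)  # fixed seed keeps the bake deterministic
    hits = 0
    for _ in range(rays):
        azimuth = rng.uniform(0.0, 2.0 * math.pi)
        elevation = rng.uniform(0.0, 0.5 * math.pi)
        if blocked(point, azimuth, elevation):
            hits += 1
    return hits / rays

# Hypothetical occluder: a well whose walls block every ray that is
# not steeper than 80 degrees above the horizon.
def in_well(point, azimuth, elevation):
    return elevation < math.radians(80)

print(baked_ao((0.0, 0.0), lambda p, a, e: False))  # open plane: 0.0
print(baked_ao((0.0, 0.0), in_well))                # well: close to 0.89
```

More rays give a more accurate estimate at a higher cost, which is exactly the trade-off the text describes; that cost is acceptable offline but not per frame.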
Instead of baking static data (which would also take considerable time and storage given the size of our world map), we want to compute it on the fly, in real time. That way the computation can take into account trucks, opening bridges, animated objects, and so on. There is one catch, however: in this approach we can only use data visible on the game's screen (remember, "screen space"), so when a part of the game world falls outside the visible frame, it cannot contribute to the occlusion calculation. This limitation can cause various artifacts, such as occlusion on a wall disappearing when the object casting it moves just past the edge of the screen: it becomes invisible to you, but also to the algorithm, so it stops contributing to the occlusion.
Now we know what we want to estimate (ambient occlusion) and what data is available (what is rendered on screen). What do we have to do? For every pixel on your screen (the 2 million pixels of Full HD resolution, multiplied four times (!) at 400% scaling), our shader code has to examine the z-buffer values of surrounding pixels to figure out the shape of the space around it. We can afford only a limited number of "taps", since the performance cost grows quickly with the number of taps, and that puts a strain on the 3D accelerator. The limited tap count in turn limits the precision of the ambient occlusion (and in certain situations can cause banding or inaccuracy). Imagine you want to evaluate the surroundings along a two-meter line and you are willing to spend 8 taps on it: the samples end up spaced every 25 centimeters. Anything smaller than that goes unnoticed, unless you are lucky enough to hit it (or unlucky, since you might miss it in the next frame, meaning the occlusion could suddenly change between frames and cause flickering). And the farther your algorithm tries to look, the larger this error becomes. So you have to restrict the area examined for each pixel, limiting the distance the AO "sees", which means the technique is not well suited to computing occlusion over large areas such as under bridge arches.
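The arithmetic behind this budget is easy to check. A small sketch with illustrative parameters of our own choosing (real implementations interleave and reuse samples, so the true numbers differ):

```python
def tap_spacing(radius_m, taps):
    """Distance between consecutive AO samples along one trace line."""
    return radius_m / taps

def ao_taps_per_frame(width, height, scale, directions, taps_per_direction):
    """Total z-buffer taps a naive AO pass would perform each frame."""
    pixels = width * height * scale * scale
    return pixels * directions * taps_per_direction

# Two-meter search radius with 8 taps: one sample every 25 cm.
print(tap_spacing(2.0, 8))                        # 0.25

# Full HD at 400% scaling (2x in each dimension), 2 directions, 8 taps:
print(ao_taps_per_frame(1920, 1080, 2, 2, 8))     # 132710400
```

Over 130 million z-buffer reads per frame for even this modest configuration shows why the tap count has to be kept so low, and why features smaller than the sample spacing get missed.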
The approach we chose is horizon-based. That means we do not probe the surroundings by shooting rays through the 3D world; instead, for each pixel we examine the hemisphere above it and estimate how far it is open before being blocked, using the z-buffer as our proxy for the scene geometry. The hemisphere is approximated by several traces along a line that rotates around the pixel. If a trace can follow the whole hemisphere unobstructed, there is no occlusion; if the algorithm finds a z-buffer value that would block incoming light, it raises the measured occlusion accordingly. The algorithm is tuned for efficiency, and one drawback is that it stops exploring once it hits something, even something possibly thin. This can lead to an "over-occlusion" artifact, where a small object, say a road sign, casts a noticeable occlusion onto the wall behind it. The alternative would be to look past thin objects and risk "under-occlusion" on thin ledges; we chose the former.
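The horizon tracing described above can be illustrated on a one-dimensional height field. This is a toy model of ours, not the GTAO shader: we march away from the shaded point, track the steepest "horizon" angle seen so far, and read the occlusion off that angle.

```python
import math

def horizon_occlusion(heights, step=0.25):
    """March along a height-field line away from the shaded point at
    heights[0] and track the highest horizon angle encountered; a toy
    stand-in for sampling the z-buffer along one rotating trace line."""
    base = heights[0]
    best_angle = 0.0  # radians above the horizontal
    for i, h in enumerate(heights[1:], start=1):
        angle = math.atan2(h - base, i * step)
        best_angle = max(best_angle, angle)
    # Fraction of the arc from horizon to zenith that is blocked.
    return best_angle / (0.5 * math.pi)

flat = [0.0] * 9                                        # open ground
wall = [0.0, 0.0, 0.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]   # wall 0.75 m away
print(horizon_occlusion(flat))             # 0.0: nothing rises above horizon
print(round(horizon_occlusion(wall), 2))   # 0.77: the wall blocks most light
```

Note how a single tall sample dominates the result: that is exactly the "stop at the first hit" behavior that makes a thin road sign over-occlude the wall behind it.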
The horizon-based technique has another important and useful property. Based on which region of the hemisphere above a pixel is occluded, we can determine the direction that is least blocked. The occlusion can then be thought of as a cone with a varying apex angle oriented along that direction. This direction is called the "bent normal" and is used in further lighting calculations, for example for reflections on shiny objects: if the mirror-reflection direction at a point falls outside this cone, the reflection is considered (at least partially) occluded, which reduces its intensity. The best way to see the effect is to look at large, rounded chrome parts such as diesel tanks with SSAO switched on and off.
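A rough sketch of how a bent-normal cone can attenuate a reflection. The falloff band and function names are our own assumptions for illustration; a production shader would use a smoother, physically motivated curve.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflection_visibility(reflect_dir, bent_normal, cone_half_angle_deg):
    """Hypothetical attenuation: a reflection ray is fully visible inside
    the unoccluded cone around the bent normal and fades out past its edge."""
    cos_angle = dot(reflect_dir, bent_normal)  # both unit vectors
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    if angle <= cone_half_angle_deg:
        return 1.0
    # Linear falloff over a 20-degree band past the cone edge (assumption).
    return max(0.0, 1.0 - (angle - cone_half_angle_deg) / 20.0)

up = (0.0, 0.0, 1.0)
grazing = (math.sin(math.radians(60)), 0.0, math.cos(math.radians(60)))
print(reflection_visibility(up, up, 45.0))                 # 1.0: inside cone
print(round(reflection_visibility(grazing, up, 45.0), 2))  # 0.25: 15 deg past edge
```

A reflection ray aligned with the bent normal stays at full strength, while one grazing past the cone edge is dimmed, which is what you see on those chrome diesel tanks.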
As you can see, the method is not too difficult even for experienced programmers ;), but there is a considerable amount of computation involved, which puts stress on the 3D accelerator. We therefore created several performance profiles using a combination of optimization techniques:
Taking fewer taps along each trace direction speeds things up, but it also makes the AO miss objects that denser sampling would catch.
Reprojecting AO results from the previous frame lets us hide artifacts caused by under-sampling, but it can also create ghosting when the reprojection fails (when the image changes significantly between frames).
Half-resolution rendering cuts the computation to roughly a quarter, but yields less precise AO; the result may be slightly blurry.
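The reprojection idea from the list above can be sketched as an exponential blend between the current frame's noisy AO and the history value carried over from the previous frame. The blend factor and the rejection flag are illustrative assumptions, not our engine's actual parameters.

```python
def temporal_ao(current, history, blend=0.9, rejected=False):
    """Blend this frame's noisy AO sample with the reprojected value from
    the previous frame; fall back to the current sample when reprojection
    fails (e.g. after a large camera jump)."""
    if rejected or history is None:
        return current
    return blend * history + (1.0 - blend) * current

# Smoothing a run of noisy per-frame AO samples:
history = None
for sample in [0.8, 0.2, 0.7, 0.3, 0.6, 0.4]:
    history = temporal_ao(sample, history)
print(round(history, 3))

# A failed reprojection discards the history, which hides ghosting but
# briefly re-exposes the raw, noisy sample:
print(temporal_ao(0.3, 0.9, rejected=True))  # 0.3
```

The high history weight is what hides under-sampling noise, and it is also exactly why a wrong history value lingers for several frames as ghosting, the trade-off described in the list.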
We hope you found this information interesting and helpful. If you've read this far, you've earned a virtual hug, a slice of cake, and a large cup of hot cocoa!
We appreciate your patience and your continued support, and we'll see you again in future Under the Hood articles, which we'll bring you from time to time for our #BestCommunityEver.
And don't forget that the Steam Summer Sale is ending soon!