Correct World of Tanks settings are the key to efficiency! How to configure games correctly and which “useless” graphics settings to disable

Technologies for displaying 3D objects on a monitor screen develop alongside each new generation of graphics adapters. Producing an ideal picture in three-dimensional applications, as close as possible to real video, is the main task of hardware developers and the main goal for connoisseurs of computer games. One technology implemented in modern video cards is designed to help with this: anisotropic filtering in games.

What is it?

Every computer player wants a colorful picture of the virtual world to unfold on the screen: to climb to the top of a mountain and survey the picturesque surroundings, or to press the acceleration key to the floor and see not just a racing track stretching to the horizon but a full-fledged environment of city landscapes. Ideally, the objects displayed on a monitor would stand directly in front of the user at the most convenient scale; in fact, the vast majority of three-dimensional objects sit at an angle to the line of sight. Moreover, the varying virtual distance from the viewpoint to a texture also changes the apparent size of an object and its textures. The calculations needed to display a three-dimensional world on a two-dimensional screen are handled by various 3D technologies designed to improve visual perception, not least of which is texture filtering (anisotropic or trilinear). Filtering of this type is one of the best developments in the area.

In simple terms

To understand what anisotropic filtering does, you need to understand the basic principles of texturing algorithms. Every object in the three-dimensional world consists of a “frame” (the 3D model of the object) and a surface (texture): a two-dimensional picture “stretched” over the frame. The smallest element of a texture is a colored texel, the texture's analogue of a pixel on the screen; depending on the “density” of the texture, texels can be of different sizes. Multi-colored texels make up the complete picture of any object in the three-dimensional world.

On screen, texels are set against pixels, whose number is limited by the available resolution. While there can be an almost unlimited number of texels in the virtual visibility zone, the number of pixels that display the image to the user is fixed. The transformation of visible texels into colored pixels is carried out by a filtering algorithm (point, bilinear, trilinear or anisotropic). All the types are described below in order, since each builds on the previous one.

Nearest color

The simplest filtering algorithm displays, for each pixel, the color of the texel nearest the point being viewed (Point Sampling). It's simple: the line of sight from a given point on the screen falls on the surface of a three-dimensional object, and the texture returns the color of the texel closest to the point of intersection, filtering out all the others. This is ideal for uniformly colored surfaces. With small differences in color it also gives a reasonably good picture, but a rather dull one, because where have you seen three-dimensional objects of a single color? Lighting, shadow and reflection shaders alone are ready to decorate any object in a game like a Christmas tree, to say nothing of the textures themselves, which sometimes amount to works of fine art. Even a gray, soulless concrete wall in a modern game is not just a rectangle of nondescript color; it is a surface dotted with roughness, cracks, scratches and other artistic details that bring the virtual wall as close as possible to a real one, or to the wall the developers imagined. Nearest-color sampling was usable in the first three-dimensional games, but players have since become far more demanding of graphics. What matters is this: nearest-color filtering requires virtually no calculations, so it is very economical in terms of computer resources.
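As a toy illustration of the nearest-texel idea, here is a Python sketch (illustrative only; a real GPU does this in hardware, and the texture here is just a 2D list of grayscale values):

```python
def point_sample(texture, u, v):
    """Nearest-texel lookup: map a UV coordinate in [0, 1) to the
    single closest texel and return its color, ignoring all others."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

# A 2x2 "texture" of grayscale values.
tex = [[10, 20],
       [30, 40]]
print(point_sample(tex, 0.1, 0.1))  # -> 10 (top-left texel)
print(point_sample(tex, 0.9, 0.9))  # -> 40 (bottom-right texel)
```

Exactly one texel contributes to each pixel, which is why the method is so cheap and why it looks blocky up close.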

Linear filtering

The linear algorithm does not differ too dramatically: instead of the single nearest texel, linear filtering takes 4 at once and calculates the average color between them. The problem is that on surfaces at an angle to the screen, the line of sight projects onto the texture as an ellipse, while linear filtering always selects the nearest texels within a perfect circle, regardless of viewing angle. Using four texels instead of one noticeably improves the rendering of textures far from the viewpoint, but it is still not enough to reproduce the image correctly.
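The four-texel average can be sketched in Python (a simplified software model, with edge clamping; in practice the four neighbors are blended with weights based on where the sample point falls between them):

```python
import math

def bilinear_sample(texture, u, v):
    """Blend the 4 texels around the sample point, weighted by how close
    the point lies to each of them."""
    h, w = len(texture), len(texture[0])
    x, y = u * w - 0.5, v * h - 0.5            # texel-center coordinates
    x0 = min(max(math.floor(x), 0), w - 1)
    y0 = min(max(math.floor(y), 0), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx = min(max(x - x0, 0.0), 1.0)            # horizontal blend weight
    fy = min(max(y - y0, 0.0), 1.0)            # vertical blend weight
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

tex = [[0, 100],
       [100, 200]]
print(bilinear_sample(tex, 0.5, 0.5))  # -> 100.0 (average of all four texels)
```

Note that the sample neighborhood is always the same small square of texels, which is exactly the circular-footprint limitation described above.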

Mip-mapping

This technology slightly optimizes the rendering of computer graphics. For each texture, a number of copies are created at different levels of detail, and a different copy is selected for each distance. In a long corridor or a large hall, for example, the nearby floors and walls need the greatest possible detail, while the far corners cover only a few pixels and require very little. This 3D technique helps avoid blurring, distortion and loss of detail in distant textures, and it works hand in hand with filtering, because when calculating a filter the video adapter cannot itself decide which texels matter for the completeness of the picture and which do not.
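The mip chain described above can be sketched in a few lines of Python: each level is an averaged, half-size copy of the previous one (assuming a square, power-of-two grayscale texture for simplicity):

```python
def build_mip_chain(texture):
    """Build mip levels by repeatedly averaging 2x2 texel blocks,
    halving the texture until it collapses to a single texel."""
    chain = [texture]
    while len(texture) > 1:
        texture = [
            [(texture[2 * y][2 * x] + texture[2 * y][2 * x + 1] +
              texture[2 * y + 1][2 * x] + texture[2 * y + 1][2 * x + 1]) / 4
             for x in range(len(texture[0]) // 2)]
            for y in range(len(texture) // 2)
        ]
        chain.append(texture)
    return chain

levels = build_mip_chain([[0, 40],
                          [80, 120]])
print(len(levels))      # -> 2: the 2x2 original plus a 1x1 level
print(levels[1][0][0])  # -> 60.0, the average of all four texels
```

At render time, distant surfaces simply read from a smaller, pre-averaged level instead of skipping across the full-size texture.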

Bilinear filtering

Combining linear filtering with MIP texturing gives the bilinear algorithm, which displays distant objects and surfaces better still. However, the same 4 texels do not give the technique enough flexibility; moreover, bilinear filtering does not mask the transitions between scaling levels, since it works with each mip level separately, so their boundaries can remain visible. As a result, at a great distance or a steep angle the textures are heavily blurred, making the picture look unnatural, as if rendered for the nearsighted, and on textures with complex patterns the seams between different-resolution levels are noticeable. But we are sitting in front of a monitor; we need neither myopia nor strange seam lines!

Trilinear filtering

This technology is designed to correct the rendering along the lines where the texture scale changes. While the bilinear algorithm works with each mip-mapping level separately, trilinear filtering additionally blends across the boundaries between levels of detail. The memory requirements grow accordingly, and the improvement on distant objects is not always dramatic. Still, the boundaries between adjacent zoom levels are processed better than with bilinear filtering and look more harmonious, without sharp transitions, which improves the overall impression.
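A minimal sketch of that extra blending step, assuming each mip level has already been bilinearly filtered (real hardware derives the level-of-detail value from how fast the texture coordinates change across the pixel):

```python
import math

def trilinear_sample(mip_samples, lod):
    """mip_samples[i] is the (bilinearly filtered) color at mip level i.
    Trilinear filtering linearly blends the two levels bracketing the
    level-of-detail value, hiding the seam between them."""
    lod = max(0.0, min(lod, len(mip_samples) - 1))
    lo = int(math.floor(lod))
    hi = min(lo + 1, len(mip_samples) - 1)
    frac = lod - lo
    return mip_samples[lo] * (1 - frac) + mip_samples[hi] * frac

print(trilinear_sample([100, 60, 50], 0.5))  # -> 80.0, halfway between levels 0 and 1
```

Because the result moves smoothly from one level's value to the next as `lod` grows, the hard boundary bilinear filtering leaves between mip levels disappears.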

Anisotropic filtering

If you project the line of sight of each screen pixel onto the texture according to the viewing angle, you get irregular shapes: trapezoids. Combined with using more texels to calculate the final color, this gives a much better result. What does anisotropic filtering do? Since in theory there is no limit to the number of texels that can be used, such an algorithm could display computer graphics of unlimited quality at any distance from the viewpoint and at any angle, ideally comparable to real video. In practice, anisotropic filtering is limited only by the technical characteristics of the graphics adapter it runs on, which is exactly what modern video games are designed around.
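A highly simplified sketch of the idea: instead of one circular tap, take several taps spread along the long axis of the pixel's elongated footprint and average them. The axis `(du, dv)` and the tap count here are illustrative parameters; real hardware derives them from the projected footprint:

```python
def anisotropic_sample(sample_at, u, v, du, dv, taps):
    """Average `taps` samples spaced along the major axis (du, dv) of the
    pixel's elongated footprint. `sample_at` is any underlying filter,
    e.g. a bilinear lookup."""
    total = 0.0
    for i in range(taps):
        t = (i + 0.5) / taps - 0.5   # positions from -0.5 to +0.5 along the axis
        total += sample_at(u + t * du, v + t * dv)
    return total / taps

# With a "texture" whose color equals its u coordinate, the taps are
# symmetric around the center, so the average stays at the center value.
print(anisotropic_sample(lambda u, v: u, 0.5, 0.5, 0.2, 0.0, 4))
```

Raising the x2/x4/x8/x16 setting essentially raises `taps`, which is why quality and cost both grow with the coefficient.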

Suitable video cards

Anisotropic filtering has been available on consumer video adapters since 1999, starting with the famous Riva TNT and Voodoo cards. The top configurations of those cards were quite capable of rendering trilinear-filtered graphics and even produced decent FPS figures with x2 anisotropic filtering. The number indicates the quality level of the filtering, which in turn depends on how many texels are involved in calculating the final color of a screen pixel; at x2, as many as 8 are used. In addition, the calculations sample a texel footprint that matches the viewing angle, rather than the circle used by the earlier linear algorithms. Modern video cards can handle anisotropic filtering at the x16 level, which means using 128 texels to calculate the final pixel color. That promises a significant improvement in the display of textures far from the viewpoint, and also a serious workload, but the latest generations of graphics adapters have enough memory and parallel processing power to cope with the task.

Impact on FPS

The benefits are clear, but how much will anisotropic filtering cost players? On gaming video adapters with serious hardware released since around 2010, the impact on performance is very small, which is confirmed by independent tests in a number of popular games. On budget cards, anisotropic texture filtering at x16 quality reduces overall FPS by 5-10%, and only because their components are less powerful. Such tolerance of modern hardware for resource-intensive computation speaks of the constant concern of manufacturers for us humble gamers. It is quite possible that a transition to the next levels of anisotropy quality is not far off, as long as the game developers don't let us down.

Of course, anisotropic filtering is not the only technique involved in improving picture quality. Whether to enable it is up to the player, but happy owners of the latest models from Nvidia or AMD (ATI) need not even think about the question: setting anisotropic filtering to the maximum level will barely affect performance and will add realism to landscapes and vast locations. The situation is a little more complicated for owners of integrated graphics solutions from Intel, since in that case much depends on the computer's RAM: its clock frequency and capacity.

Options and optimization

The type and quality of filtering can be controlled through the graphics driver's settings software, and advanced anisotropic filtering options are also available in game menus. The arrival of high resolutions and multi-monitor gaming forced manufacturers to think about accelerating their products, including by optimizing anisotropic algorithms. In recent driver versions, card manufacturers have introduced a technology called adaptive anisotropic filtering. What does it mean? This feature, introduced by AMD and partially implemented in recent Nvidia products, lowers the filtering factor where possible: nearby textures can be processed with a x2 coefficient, while distant objects are rendered with progressively more complex algorithms up to the maximum x16. As usual, the optimization buys a significant speedup at the expense of quality; in places the adaptive technology is prone to errors that are noticeable at the ultra settings of some recent 3D games.

What does anisotropic filtering cost? Compared with other filtering technologies it demands far more of a video adapter's computing power, which affects performance; however, that performance problem has long been solved in modern graphics chips. Together with other 3D technologies, anisotropic filtering in games (and now we know what it is) shapes the overall impression of a coherent picture, especially when displaying distant objects and textures angled away from the screen. And that, obviously, is the main thing players need.

A look into the future

Modern hardware with average characteristics or above is quite capable of meeting players' demands, so the quality of three-dimensional computer worlds is now up to the video game developers. The latest generation of graphics adapters supports not only high resolutions and resource-intensive image-processing technologies such as anisotropic texture filtering, but also VR and multi-monitor setups.

One of the leading branches of the computer industry is the “production” of games. Special motherboards, video cards and chipsets are developed specifically for gaming. But, as everyone knows, the main requirement of any modern game is a powerful video card, and the video card is also the most expensive part of a personal computer. Not everyone can afford to change video cards like gloves, which means the question of how to adjust graphics in games is always relevant!

Finances often do not allow you to purchase a good video card, which brings a number of problems: stuttering, “stair-stepping” along the edges of objects, or poor detail. However, a “weak” video card is not a death sentence; the situation can be saved by adjusting the graphics. Even on a modest computer the picture can be quite acceptable. Naturally, something will have to be sacrificed, but that is better than a ruined game. Below we deal with the most common graphics settings.

If you have already tried to make sense of a game's settings menu, you probably understood at most half of what was written there. Take anisotropic filtering. It works wonders on objects that are steeply inclined relative to the camera, keeping their textures uniformly crisp instead of partially blurred. Without abstruse words: when a texture is displayed on screen at anything other than its original size, it must be shrunk or enlarged, and that is what filtering handles. In other words, it removes “extra” pixels or, conversely, inserts additional ones where necessary. Anisotropic filtering also produces far fewer artifacts than simpler methods.

Anisotropic filtering has only one setting: the filter coefficient, with possible values of 2x, 4x, 8x and 16x. Textures look sharper and more natural at higher values. For a normal picture, 4x or 8x is enough, and even 8x or 16x will not particularly affect performance.

You have surely noticed another strange word in the settings: shaders. These are small programs that manipulate the 3D scene, for example adding post-processing, applying textures or changing lighting. In short, shaders create new effects, and they are at their most productive when running in parallel.

Parallax mapping simulates relief on textures. It does not create any new geometry; it only manipulates the textures, so, for example, your character can still “sink” a foot into a stone. The effect works well only where the height of a surface changes smoothly; otherwise the picture will show flaws. In exchange, parallax mapping significantly saves computing resources.

Tessellation is another graphics aid in games. Unlike parallax mapping, which only creates the illusion of three-dimensionality, tessellation genuinely increases the detail of simple 3D objects, and it can be applied to almost any object.

Another difference from parallax mapping is that tessellation loads the computer significantly and requires DirectX 11.

Now for the effect that removes the “staircase” along the edges of objects: anti-aliasing. There are several types, varying in effectiveness and in cost to the video card: FSAA, MSAA, CSAA, SSAA. CSAA is already obsolete. MSAA and SSAA are close in principle, but MSAA smooths only the edges of objects, which saves video card resources. FSAA smooths everything beautifully, but the frame rate will be very low.

Anti-aliasing, like filtering, has a single parameter: the smoothing coefficient (2x, 4x, 8x, 16x, 32x). Anti-aliasing used to have a significant impact on the frame rate; on modern hardware the effect is much smaller.

The V-Sync (vertical synchronization) option synchronizes game frames with the vertical refresh rate of the monitor: a game frame is sent to the monitor only at the moment the image on it is updated. Keep in mind that with V-Sync the in-game fps cannot exceed the monitor's refresh rate; to avoid a performance penalty below that rate you will have to activate triple buffering. Vertical sync eliminates the shifted-frame (tearing) effect.

The HIGH DYNAMIC RANGE (HDR) effect is often used in scenes with contrasting lighting, where without it everything becomes monotonous and detail is lost. Preliminary calculations are carried out at increased precision (64 or 96 bits), and only when output to the screen is the image reduced to 24 bits. The effect is often used to create the illusion of vision adapting when the hero exits a tunnel into bright light.

MOTION BLUR is a blurring effect applied when the camera moves quickly. It adds a cinematic feel to what is happening on screen and is often used in racing games to heighten the sense of speed.

In the settings you may also find a technique called SSAO (Screen Space Ambient Occlusion), which is used to make a scene look more photorealistic. It creates more believable lighting by taking into account how nearby objects reflect and absorb light. Its predecessor, Ambient Occlusion, found no real-time application because of its high performance cost. SSAO gives a weaker result, but one that is quite sufficient; in general, it is the golden mean between picture quality and performance.

Many people will have come across the BLOOM option in shooters. It simulates the effect of shooting bright scenes with ordinary cameras, where intense light behind objects “floods” over the objects in front of it. The effect can create artifacts along the edges of objects.

Some games rely on the CEL SHADING effect, in which each frame looks almost like a hand-drawn picture or a fragment of a cartoon. Roughly speaking, these are interactive comics. Games in this style began to appear in 2000.

Another effect is FILM GRAIN: graininess. This artifact occurs on analog TV with a poor signal, in photographs taken in low light, or on old magnetic tape. Usually the effect only gets in the way, but in some horror games, such as Silent Hill, it actually adds atmosphere.

Shooters use yet another effect that adds to the illusion of presence: DEPTH OF FIELD. Depth of field is the camera focusing on the near or far plane: when the foreground is in focus, the background is blurred, and vice versa. You can see the effect in photographs taken with a good camera.

You have now become familiar with all the common graphics effects and can tune the picture quality in any game. But remember that turning everything up to maximum will sharply reduce the number of frames per second, i.e. make the picture stutter. So configure wisely. And with that, I wish you a fun and entertaining game!

By the way, I recommend this guide for cleaning your PC from junk and accordingly speeding up its operation. Link to guide: http://pcguide.biz/del-trash.html

Graphics in games - video analysis:

Modern games use more and more graphics effects and technologies that improve the picture, yet developers usually don't bother explaining what exactly those technologies do. When you don't have the most powerful computer, you have to sacrifice some of them. Let's look at what the most common graphics options mean, the better to understand how to free up PC resources with minimal impact on the picture.

Anisotropic filtering
When a texture is displayed on the monitor at anything other than its original size, additional pixels must be inserted into it or, conversely, the extra ones removed. The technique for doing this is called filtering.


[Comparison images: trilinear filtering vs. anisotropic filtering]

Bilinear filtering is the simplest algorithm and requires less computing power, but also produces the worst results. Trilinear adds clarity, but still generates artifacts. Anisotropic filtering is considered the most advanced method for eliminating noticeable distortions on objects that are strongly inclined relative to the camera. Unlike the two previous methods, it successfully combats the gradation effect (when some parts of the texture are blurred more than others, and the boundary between them becomes clearly visible). When using bilinear or trilinear filtering, the texture becomes more and more blurry as the distance increases, but anisotropic filtering does not have this drawback.

Given the amount of data being processed (and there may be many high-resolution 32-bit textures in the scene), anisotropic filtering is especially demanding on memory bandwidth. Traffic can be reduced primarily through texture compression, which is now used everywhere. Previously, when it was not practiced so often, and the throughput of video memory was much lower, anisotropic filtering significantly reduced the number of frames. On modern video cards, it has almost no effect on fps.

Anisotropic filtering has only one setting, the filter factor (2x, 4x, 8x, 16x). The higher it is, the clearer and more natural the textures look. Typically, at a high value, small artifacts are visible only on the outermost pixels of tilted textures. Values of 4x and 8x are usually quite enough to get rid of the lion's share of visual distortion. Interestingly, when moving from 8x to 16x the performance penalty is quite small even in theory, since additional processing is only needed for a small number of previously unfiltered pixels.

Shaders
Shaders are small programs that can perform certain manipulations with a 3D scene, for example, changing lighting, applying texture, adding post-processing and other effects.

Shaders come in three types: vertex shaders operate on vertex coordinates; geometry shaders process not just individual vertices but entire primitives, consisting of up to 6 vertices; and pixel shaders work on individual pixels and their parameters.

Shaders are mainly used to create new effects. Without them, the set of operations that developers could use in games is very limited. In other words, adding shaders made it possible to obtain new effects that were not included in the video card by default.

Shaders work very productively in parallel mode, and that is why modern graphics adapters have so many stream processors, which are also called shaders.

Parallax mapping
Parallax mapping is a modified version of the well-known bump mapping technique, used to add relief to textures. It does not create 3D objects in the usual sense of the word: a floor or wall in a game scene will appear rough while actually remaining completely flat. The relief effect is achieved purely through manipulation of textures.

The source object does not have to be flat. The method works on various game objects, but its use is desirable only in cases where the height of the surface changes smoothly. Sudden changes are processed incorrectly and artifacts appear on the object.

Parallax mapping significantly saves computer computing resources, since when using analogue objects with an equally detailed 3D structure, the performance of video adapters would not be enough to render scenes in real time.

The effect is most often used on stone pavements, walls, bricks and tiles.
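The core trick behind the effect can be sketched in a few lines: the texture coordinate is shifted along the view direction in proportion to the height read from a height map, so a flat surface appears to have depth. The `scale` factor below is an arbitrary illustrative constant:

```python
def parallax_offset(u, v, height, view_x, view_y, view_z, scale=0.05):
    """Shift the texture lookup along the viewer's direction by an amount
    proportional to the height-map value, faking relief on a perfectly
    flat polygon."""
    return (u + height * scale * view_x / view_z,
            v + height * scale * view_y / view_z)

# Looking diagonally at a raised point shifts where we sample the texture;
# a point with zero height is not shifted at all.
u2, v2 = parallax_offset(0.5, 0.5, 1.0, 1.0, 0.0, 1.0)
print(round(u2, 3), v2)  # -> 0.55 0.5
```

Because only the lookup coordinate moves, the silhouette of the object never changes, which is exactly why sudden height jumps break the illusion.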

Anti-Aliasing
Before DirectX 8, anti-aliasing in games was done using SuperSampling Anti-Aliasing (SSAA), also known as Full-Scene Anti-Aliasing (FSAA). Its use led to a significant decrease in performance, so with the release of DX8 it was immediately abandoned and replaced with Multisample Anti-Aliasing (MSAA). Despite the fact that this method gave worse results, it was much more productive than its predecessor. Since then, more advanced algorithms have appeared, such as CSAA.

[Comparison images: anti-aliasing off vs. anti-aliasing on]

Considering how noticeably video card performance has grown over the past few years, both AMD and NVIDIA have returned SSAA support to their accelerators. Even so, it is rarely usable in modern games, since the frame rate would be very low. SSAA is effective only in projects from previous years, or in current ones with modest settings for the other graphics parameters. AMD implemented SSAA support only for DX9 games, whereas NVIDIA's SSAA also works in DX10 and DX11 modes.

The principle of smoothing is very simple. Before a frame is displayed, certain information is calculated not at the native resolution but at one enlarged by a multiple of two; the result is then reduced to the required size, making the “staircase” along the edges of objects less noticeable. The larger the source image and the smoothing factor (2x, 4x, 8x, 16x, 32x), the fewer jaggies there will be on models. MSAA, unlike FSAA, smooths only the edges of objects, which significantly saves video card resources; the downside is that this technique can leave artifacts inside polygons.
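The supersampling half of this is easy to sketch directly: render large, then average each 2x2 block down to one output pixel (grayscale values stand in for rendered pixels here):

```python
def downsample_2x(image):
    """SSAA's final step: every output pixel is the average of a 2x2 block
    of the frame rendered at twice the target resolution."""
    return [
        [(image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
          image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4
         for x in range(len(image[0]) // 2)]
        for y in range(len(image) // 2)
    ]

# A hard black/white edge in the oversized frame...
big = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [0, 255, 255, 255],
       [0, 255, 255, 255]]
# ...becomes an intermediate gray at the edge pixel after downsampling.
print(downsample_2x(big))  # -> [[0.0, 255.0], [127.5, 255.0]]
```

The 127.5 value is the smoothed "step": instead of jumping straight from black to white, the edge passes through gray, which is what hides the staircase.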

Previously, Anti-Aliasing always significantly reduced fps in games, but now it affects the number of frames only slightly, and sometimes has no effect at all.

Tessellation
Tessellation increases the number of polygons in a computer model by an arbitrary factor. Each polygon is divided into several new ones, positioned approximately along the original surface. This makes it easy to increase the detail of simple 3D objects, at the cost of extra load on the computer, and in some cases small artifacts cannot be ruled out.

At first glance, tessellation can be confused with Parallax mapping. Although these are completely different effects, since tessellation actually changes the geometric shape of an object, and does not just simulate relief. In addition, it can be used for almost any object, while the use of Parallax mapping is very limited.

Tessellation technology has been known in cinema since the 80s, but it began to be supported in games only recently, or rather after graphics accelerators finally reached the required level of performance at which it can be performed in real time.

For the game to use tessellation, it requires a video card that supports DirectX 11.

Vertical Sync

V-Sync synchronizes game frames with the monitor's vertical refresh. Its essence is that a fully rendered game frame is shown on screen at the moment the image on it is updated; the next frame (if it is already ready) will appear no earlier and no later than the point where the output of the previous one ends and the next refresh begins.

If the monitor refresh rate is 60 Hz, and the video card has time to render the 3D scene with at least the same number of frames, then each monitor refresh will display a new frame. In other words, at an interval of 16.66 ms, the user will see a complete update of the game scene on the screen.

It should be understood that with vertical synchronization enabled, the in-game fps cannot exceed the monitor's refresh rate. If the frame rate falls below that value (in our case, below 60 frames/s), then to avoid performance losses you need to activate triple buffering, in which frames are calculated in advance and stored in three separate buffers, allowing them to be sent to the screen more often.

The main task of vertical synchronization is to eliminate the effect of a shifted frame, which occurs when the lower part of the display is filled with one frame, and the upper part is filled with another, shifted relative to the previous one.
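The frame-rate quantization V-Sync produces is easy to compute: with double buffering, a frame can only appear on a refresh tick, so if the GPU misses one tick the frame waits for the next, and the effective rate snaps to refresh/1, refresh/2, refresh/3, and so on (a simplified model assuming a steady render rate):

```python
import math

def vsync_fps(refresh_hz, render_fps):
    """Effective frame rate under V-Sync with double buffering: output
    snaps down to the nearest integer divisor of the refresh rate."""
    if render_fps >= refresh_hz:
        return float(refresh_hz)
    ticks_per_frame = math.ceil(refresh_hz / render_fps)
    return refresh_hz / ticks_per_frame

print(vsync_fps(60, 75))  # -> 60.0 (capped at the refresh rate)
print(vsync_fps(60, 50))  # -> 30.0 (every second refresh)
print(vsync_fps(60, 25))  # -> 20.0 (every third refresh)
```

This is exactly why benchmarks with V-Sync enabled tend to hover at 60, 30 or 20 fps on a 60 Hz screen rather than showing intermediate values.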

Post-processing
This is the general name for all the effects that are superimposed on a ready-made frame of a fully rendered 3D scene (in other words, on a two-dimensional image) to improve the quality of the final picture. Post-processing uses pixel shaders and is used in cases where additional effects require complete information about the entire scene. Such techniques cannot be applied in isolation to individual 3D objects without causing artifacts to appear in the frame.

High dynamic range (HDR)
An effect often used in game scenes with contrasting lighting. If one area of ​​the screen is very bright and another is very dark, a lot of the detail in each area is lost and they look monotonous. HDR adds more gradation to the frame and allows for more detail in the scene. To use it, you usually have to work with a wider range of colors than standard 24-bit precision can provide. Preliminary calculations occur in high precision (64 or 96 bits), and only at the final stage the image is adjusted to 24 bits.

HDR is often used to realize the effect of vision adaptation when a hero in games emerges from a dark tunnel onto a well-lit surface.
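The final compression step can be sketched with a simple tone-mapping operator. Reinhard's x/(1+x) curve is used here purely as an illustration (real games use a variety of curves), mapping an unbounded HDR luminance into a displayable 8-bit channel:

```python
def reinhard(hdr):
    """Compress an unbounded HDR luminance into [0, 1) so that very bright
    values still differ instead of all clipping to pure white."""
    return hdr / (1.0 + hdr)

def to_8bit(hdr):
    """Quantize the tone-mapped value to one 8-bit channel of a 24-bit pixel."""
    return round(reinhard(hdr) * 255)

print(to_8bit(0.5))    # -> 85: mid-tones keep their detail
print(to_8bit(4.0))    # -> 204
print(to_8bit(100.0))  # -> 252: extreme brightness is compressed, not clipped
```

Note how a value 25 times brighter (4.0 vs. 100.0) still maps to a distinguishable shade rather than saturating; that preserved gradation is the whole point of HDR rendering.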

Bloom
Bloom is often used in conjunction with HDR, and it also has a fairly close relative, Glow, which is why these three techniques are often confused.

Bloom simulates the effect that can be seen when shooting very bright scenes with conventional cameras. In the resulting image, the intense light appears to take up more volume than it should and to “climb” onto objects even though it is behind them. When using Bloom, additional artifacts in the form of colored lines may appear on the borders of objects.

Film Grain
Grain is an artifact that occurs on analog TV with a poor signal, on old magnetic videotapes, or in photographs (particularly digital images taken in low light). Players often disable this effect because it somewhat spoils the picture rather than improving it; to see for yourself, you can run Mass Effect with it on and off. In some horror games, such as Silent Hill, noise on the screen actually adds atmosphere.

Motion Blur
Motion Blur is the effect of blurring the image when the camera moves quickly. It can be successfully used when the scene needs to be given more dynamics and speed, therefore it is especially in demand in racing games. In shooters, the use of blur is not always perceived unambiguously. Proper use of Motion Blur can add a cinematic feel to what's happening on screen.

The effect will also help, if necessary, to disguise the low frame rate and add smoothness to the gameplay.

SSAO
Ambient occlusion is a technique used to make a scene photorealistic by creating more believable lighting of the objects in it, which takes into account the presence of other objects nearby with their own characteristics of light absorption and reflection.

Screen Space Ambient Occlusion is a modified version of Ambient Occlusion and also simulates indirect lighting and shading. The appearance of SSAO was due to the fact that, at the current level of GPU performance, Ambient Occlusion could not be used to render scenes in real time. The increased performance in SSAO comes at the cost of lower quality, but even this is enough to improve the realism of the picture.

SSAO works according to a simplified scheme, but it has many advantages: the method does not depend on the complexity of the scene, does not use RAM, can function in dynamic scenes, does not require frame pre-processing and loads only the graphics adapter without consuming CPU resources.
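The screen-space idea can be caricatured in a few lines: for each pixel, check how many nearby depth-buffer samples lie in front of it; the more that do, the more the pixel is occluded and the darker its ambient term. The sample selection and bias here are illustrative stand-ins for the real kernel:

```python
def ssao_factor(pixel_depth, neighbor_depths, bias=0.01):
    """Return an ambient light factor in [0, 1]: 1.0 means fully lit,
    lower values mean nearby geometry (depth samples closer to the
    camera) occludes the pixel, so it should be darkened."""
    occluded = sum(1 for d in neighbor_depths if d < pixel_depth - bias)
    return 1.0 - occluded / len(neighbor_depths)

# Two of four neighboring samples are closer to the camera -> half occluded.
print(ssao_factor(0.5, [0.2, 0.3, 0.7, 0.9]))  # -> 0.5
```

Everything needed (the depth buffer) is already on the GPU, which is why the technique is independent of scene complexity and leaves the CPU alone.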

Cel shading
Games with the Cel shading effect began to be made in 2000, and first of all they appeared on consoles. On PCs, this technique became truly popular only a couple of years later. With the help of Cel shading, each frame practically turns into a hand-drawn drawing or a fragment from a cartoon.

Comics are created in a similar style, so the technique is often used in games related to them. Among the latest well-known releases is the shooter Borderlands, where Cel shading is visible to the naked eye.

Features of the technology are the use of a limited set of colors, as well as the absence of smooth gradients. The name of the effect comes from the word Cel (Celluloid), i.e. the transparent material (film) on which animated films are drawn.

Depth of field
Depth of field is the distance between the near and far edges of space, within which all objects will be in focus, while the rest of the scene will be blurred.

To a certain extent, depth of field can be observed simply by focusing on an object close in front of your eyes. Anything behind it will be blurred. The opposite is also true: if you focus on distant objects, everything in front of them will turn out blurry.

You can see the effect of depth of field in an exaggerated form in some photographs. This is the degree of blur that is often attempted to be simulated in 3D scenes.

In games using Depth of field, the gamer usually feels a stronger sense of presence. For example, when looking somewhere through the grass or bushes, he sees only small fragments of the scene in focus, which creates the illusion of presence.

Performance Impact

To find out how enabling certain options affects performance, we used the gaming benchmark Heaven DX11 Benchmark 2.5. All tests were carried out on an Intel Core2 Duo e6300, GeForce GTX460 system at a resolution of 1280×800 pixels (with the exception of vertical synchronization, where the resolution was 1680×1050).

As already mentioned, anisotropic filtering has virtually no effect on the number of frames. The difference between anisotropy disabled and 16x is only 2 frames, so we always recommend setting it to maximum.

Anti-aliasing in Heaven Benchmark reduced fps more significantly than we expected, especially in the heaviest 8x mode. However, since 2x is enough to noticeably improve the picture, we recommend choosing this option if playing at higher levels is uncomfortable.

Tessellation, unlike the previous parameters, can take on an arbitrary value in each individual game. In Heaven Benchmark, the picture without it deteriorates significantly, and at the maximum level, on the contrary, it becomes a little unrealistic. Therefore, intermediate values ​​should be set to moderate or normal.

A higher resolution was chosen for vertical sync so that fps is not limited by the vertical refresh rate of the screen. As expected, the number of frames throughout almost the entire test with synchronization turned on remained firmly at around 20 or 30 fps. This is due to the fact that they are displayed simultaneously with the screen refresh, and with a scanning frequency of 60 Hz this can be done not with every pulse, but only with every second (60/2 = 30 frames/s) or third (60/3 = 20 frames/s). When V-Sync was turned off, the number of frames increased, but characteristic artifacts appeared on the screen. Triple buffering did not have any positive effect on the smoothness of the scene. This may be due to the fact that there is no option in the video card driver settings to force buffering to be disabled, and normal deactivation is ignored by the benchmark, and it still uses this function.

If Heaven Benchmark were a game, then at maximum settings (1280×800; AA 8x; AF 16x; Tessellation Extreme) it would be uncomfortable to play, since 24 frames is clearly not enough for this. With minimal quality loss (1280×800; AA 2x; AF 16x, Tessellation Normal) you can achieve a more acceptable 45 fps.



So how do you actually set up games correctly? Which graphics settings can safely be disabled because they are useless? I started this post when my work laptop refused to launch yet another game, forcing me to dig into the depths of strange options and switches. Even if your computer is powerful enough, why not raise the frame rate by sacrificing genuinely useless settings? Everything below is a subjective opinion based on my own eyes and experience, and only you can choose your own path. Let's get started.

Dynamic reflections and a story about brakes in Overwatch

This story is based on personal experience. Let's skip over why a busy "business guy" decided to install this time killer from Blizzard at all, but a fact is a fact. Of course, my weak computer could hardly run the game on high settings; even on medium it produced a measly 26 FPS. I spent a long time in the graphics menu before one light click turned off "Dynamic Reflections". You will be surprised, but my FPS immediately jumped by ~30%, and I was able to play with far greater comfort than before.

Dynamic reflections draw light and shadow reflections on surfaces that you would mostly never notice anyway, especially in a fast-paced game!

SSAA – Supersampling


This setting renders the image at a resolution higher than your monitor's and then scales it down. I won't go into the technology here, but in my personal experience, turning on SSAA does not greatly change the overall impression of the game. Yes, the picture looks a little more natural than with plain AA, but this option devours a huge amount of resources. Incidentally, a huge number of console games save system resources on precisely this setting. Ask console players :) Are they very upset?

Disabling Motion Blur is better than any blessing

To be honest, even at gunpoint I don't understand people who like Motion Blur. It irritates, blurs the picture, and makes it less realistic! And on top of that, it devours resources!


I turn off this setting even in racing games, because it has little effect on the realism of what is happening.

Shadows wander near the house - various fairy-tale animals


In my entire life, I have met only one person who actually (!) noticed that the shadows on my work PC looked a little pixelated. Lowering shadows is a habit I picked up back in childhood, when I first ran a game on a low-power computer and my experiments showed that disabling shadows consistently added up to 10 FPS. All the fancy variants - "deep shadows", "very deep" and so on - have little effect on the overall impression of the graphics, yet they load the system the most. The human brain is built so that it barely notices such minor details, focusing primarily on the extravaganza of special effects and the character in the center of the screen.

Depth of field


This parameter defines the distance between the near and far boundaries of the space within which all objects are in focus, while everything else appears blurred. It gives the gamer a slightly stronger sense of presence, for example when peering out of the grass. In practice, though, the effect mainly just lowers the frame rate, and according to a survey among my friends, it rarely draws even a little attention to itself.

Believe me, you can safely turn off everything listed above, and you are unlikely to lose much. And I, as someone who struggles with a weak work PC myself, will gladly take further advice in the comments!

Modern games use more and more graphic effects and technologies that improve the picture. However, developers usually don’t bother explaining what exactly they are doing. When you don't have the most powerful computer, you have to sacrifice some of the capabilities. Let's try to look at what the most common graphics options mean to better understand how to free up PC resources with minimal impact on graphics.

Anisotropic filtering

When any texture is displayed on the monitor not in its original size, it is necessary to insert additional pixels into it or, conversely, remove the extra ones. To do this, a technique called filtering is used.

Trilinear

Anisotropic

Bilinear filtering is the simplest algorithm and requires less computing power, but also produces the worst results. Trilinear adds clarity, but still generates artifacts. Anisotropic filtering is considered the most advanced method for eliminating noticeable distortions on objects that are strongly inclined relative to the camera. Unlike the two previous methods, it successfully combats the gradation effect (when some parts of the texture are blurred more than others, and the boundary between them becomes clearly visible). When using bilinear or trilinear filtering, the texture becomes more and more blurry as the distance increases, but anisotropic filtering does not have this drawback.

Given the amount of data being processed (and there may be many high-resolution 32-bit textures in the scene), anisotropic filtering is especially demanding on memory bandwidth. Traffic can be reduced primarily through texture compression, which is now used everywhere. Previously, when it was not practiced so often, and the throughput of video memory was much lower, anisotropic filtering significantly reduced the number of frames. On modern video cards, it has almost no effect on fps.

Anisotropic filtering has only one setting - the filter factor (2x, 4x, 8x, 16x). The higher it is, the clearer and more natural the textures look. Typically, with a high value, small artifacts are visible only on the outermost pixels of tilted textures. Values of 4x and 8x are usually quite enough to get rid of the lion's share of visual distortion. Interestingly, when moving from 8x to 16x, the performance penalty will be quite small even in theory, since additional processing is only needed for a small number of previously unfiltered pixels.
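To make the filtering idea concrete, here is a toy bilinear sample in Python: a single-channel texture and a weighted blend of the four texels nearest to a fractional coordinate. Trilinear and anisotropic filtering build on this same blend (across mip levels and along the axis of anisotropy); the function and texture here are purely illustrative, not any real graphics API.

```python
def bilinear_sample(texture, u, v):
    """Sample a 2D texture (list of rows of floats) at fractional
    coordinates (u, v) by blending the four nearest texels."""
    h, w = len(texture), len(texture[0])
    # Clamp so the 2x2 neighbourhood stays inside the texture.
    u = max(0.0, min(u, w - 1.001))
    v = max(0.0, min(v, h - 1.001))
    x0, y0 = int(u), int(v)
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = u - x0, v - y0
    # Linear blend along x for the top and bottom texel pairs...
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    # ...then a linear blend along y between those two results.
    return top * (1 - fy) + bottom * fy

# A tiny 2x2 "texture": sampling the exact centre averages all four texels.
tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(bilinear_sample(tex, 0.5, 0.5))  # 0.5
```

Anisotropic filtering repeats this kind of blend for many samples stretched along the direction in which the texture is tilted away from the camera, which is why its cost grows with the filter factor.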

Shaders

Shaders are small programs that can perform certain manipulations with a 3D scene, for example, changing lighting, applying texture, adding post-processing and other effects.

Shaders are divided into three types: vertex shaders operate on vertex coordinates, geometry shaders can process not only individual vertices but whole primitives consisting of up to six vertices, and pixel shaders work with individual pixels and their parameters.

Shaders are mainly used to create new effects. Without them, the set of operations that developers could use in games is very limited. In other words, adding shaders made it possible to obtain new effects that were not included in the video card by default.

Shaders run very efficiently in parallel, which is why modern graphics adapters contain so many stream processors, which are also called shaders. For example, the GeForce GTX 580 has as many as 512 of them.

Parallax mapping

Parallax mapping is a modified version of the well-known bumpmapping technique, used to add relief to textures. Parallax mapping does not create 3D objects in the usual sense of the word. For example, a floor or wall in a game scene will appear rough while actually being completely flat. The relief effect here is achieved only through manipulation of textures.

The source object does not have to be flat. The method works on various game objects, but its use is desirable only in cases where the height of the surface changes smoothly. Sudden changes are processed incorrectly and artifacts appear on the object.

Parallax mapping significantly saves computing resources: if equivalent objects were modeled with a genuinely detailed 3D structure, the performance of video adapters would not be enough to render such scenes in real time.

The effect is most often used on stone pavements, walls, bricks and tiles.
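The core trick, shifting the texture lookup along the view direction in proportion to a height value, can be sketched in a few lines. This is a simplified single-step offset in tangent space; real shaders refine it with multiple samples, and the function and scale factor here are illustrative assumptions.

```python
def parallax_offset(u, v, height, view_dir, scale=0.05):
    """Shift texture coordinates (u, v) along the view direction in
    proportion to the sampled height, so a flat surface appears to have
    relief. view_dir is an (x, y, z) tuple in tangent space, with z
    pointing away from the surface."""
    vx, vy, vz = view_dir
    # The steeper the viewing angle (smaller vz), the larger the shift.
    u_shifted = u - vx / vz * height * scale
    v_shifted = v - vy / vz * height * scale
    return u_shifted, v_shifted

# Looking straight down the surface normal produces no offset at all:
print(parallax_offset(0.5, 0.5, 1.0, (0.0, 0.0, 1.0)))  # (0.5, 0.5)
# A tilted view shifts the lookup, creating the illusion of depth:
print(parallax_offset(0.5, 0.5, 1.0, (0.6, 0.0, 0.8)))
```

Because only texture coordinates change, the geometry stays flat, which is exactly why sharp height jumps break the illusion and produce the artifacts mentioned above.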

Anti-Aliasing

Before DirectX 8, anti-aliasing in games was done using SuperSampling Anti-Aliasing (SSAA), also known as Full-Scene Anti-Aliasing (FSAA). Its use led to a significant decrease in performance, so with the release of DX8 it was immediately abandoned and replaced with Multisample Anti-Aliasing (MSAA). Despite the fact that this method gave worse results, it was much more productive than its predecessor. Since then, more advanced algorithms have appeared, such as CSAA.

AA off

AA included

Considering that over the past few years the performance of video cards has noticeably increased, both AMD and NVIDIA have again returned support for SSAA technology to their accelerators. However, it will not be possible to use it even now in modern games, since the number of frames/s will be very low. SSAA will be effective only in projects from previous years, or in current ones, but with modest settings for other graphic parameters. AMD has implemented SSAA support only for DX9 games, but in NVIDIA SSAA also functions in DX10 and DX11 modes.

The principle of smoothing is very simple. Before the frame is displayed on the screen, the image is calculated not at its native resolution but at one enlarged by a multiple of two. Then the result is scaled down to the required size, and the "ladder" along object edges becomes less noticeable. The higher the intermediate resolution and the smoothing factor (2x, 4x, 8x, 16x, 32x), the fewer jaggies there will be on the models. MSAA, unlike FSAA, smooths only the edges of objects, which significantly saves video card resources; however, this technique can leave artifacts inside polygons.

Previously, Anti-Aliasing always significantly reduced fps in games, but now it affects the number of frames only slightly, and sometimes has no effect at all.
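The supersampling (SSAA/FSAA) scheme can be sketched with a toy single-channel renderer: draw at double the target resolution, then average each 2x2 block down to one output pixel. This is only a sketch of the principle, not any real GPU pipeline.

```python
def supersample(render, width, height, factor=2):
    """Render at factor x the target resolution, then average each
    factor*factor block down to one output pixel (SSAA/FSAA).
    `render` maps a high-resolution pixel (x, y) to a brightness."""
    hi_w, hi_h = width * factor, height * factor
    hi = [[render(x, y) for x in range(hi_w)] for y in range(hi_h)]
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            block = [hi[y * factor + dy][x * factor + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))  # averaged "smoothed" pixel
        out.append(row)
    return out

# A hard vertical edge: black (0.0) left of x=3, white (1.0) from x=3 on.
edge = lambda x, y: 1.0 if x >= 3 else 0.0
print(supersample(edge, 4, 1, factor=2))
# [[0.0, 0.5, 1.0, 1.0]] -- the hard edge gains an intermediate step
```

The averaged 0.5 pixel is exactly the softened "ladder" step; MSAA gets a similar result while evaluating shading only once per pixel, which is why it is so much cheaper.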

Tessellation

Tessellation increases the number of polygons in a model by an arbitrary factor: each polygon is divided into several new ones, positioned approximately along the original surface. This method makes it easy to increase the detail of simple 3D objects. At the same time, the load on the computer also increases, and in some cases small artifacts cannot be ruled out.

Off

Enabled

At first glance, tessellation can be confused with Parallax mapping, but these are completely different effects: tessellation actually changes the geometric shape of an object rather than merely simulating relief. In addition, it can be applied to almost any object, while the use of Parallax mapping is very limited.

Tessellation technology has been known in cinema since the 80s, but it began to be supported in games only recently, or rather after graphics accelerators finally reached the required level of performance at which it can be performed in real time.

For the game to use tessellation, it requires a video card that supports DirectX 11.
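The subdivision step itself is simple to illustrate: split each triangle into four by inserting a vertex at the midpoint of every edge. This is only the splitting half of the story; real tessellation also displaces the new vertices (for example, with a displacement map) to add actual relief, which this sketch omits.

```python
def tessellate(tri):
    """Split one triangle into four by inserting a vertex at the
    midpoint of each edge; repeated passes multiply the polygon count."""
    a, b, c = tri
    mid = lambda p, q: tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

mesh = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
for _ in range(2):  # two passes: 1 -> 4 -> 16 triangles
    mesh = [small for tri in mesh for small in tessellate(tri)]
print(len(mesh))  # 16
```

Each pass quadruples the triangle count, which shows why high tessellation levels raise the GPU load so quickly.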

Vertical Sync

V-Sync is the synchronization of game frames with the vertical scan frequency of the monitor. Its essence lies in the fact that a fully calculated game frame is displayed on the screen at the moment the image is updated on it. It is important that the next frame (if it is already ready) will also appear no later and no earlier than the output of the previous one ends and the next one begins.

If the monitor refresh rate is 60 Hz, and the video card has time to render the 3D scene with at least the same number of frames, then each monitor refresh will display a new frame. In other words, at an interval of 16.66 ms, the user will see a complete update of the game scene on the screen.

It should be understood that when vertical synchronization is enabled, the fps in the game cannot exceed the vertical scan frequency of the monitor. If the number of frames is lower than this value (in our case, less than 60 Hz), then in order to avoid performance losses it is necessary to activate triple buffering, in which frames are calculated in advance and stored in three separate buffers, which allows them to be sent to the screen more often.

The main task of vertical sync is to eliminate the effect of a shifted frame, which occurs when the lower part of the display is filled with one frame, and the upper part with another, shifted relative to the previous one.
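The way V-Sync snaps the frame rate to divisors of the refresh rate can be worked out directly. This toy calculation assumes simple double buffering with no triple buffering; the function name and interface are illustrative.

```python
import math

def vsync_fps(render_ms, refresh_hz=60):
    """Effective frame rate with V-Sync and double buffering: a frame
    can only be shown on a refresh tick, so the render time rounds UP
    to a whole number of refresh intervals (16.67 ms at 60 Hz)."""
    interval_ms = 1000.0 / refresh_hz
    ticks = math.ceil(render_ms / interval_ms)  # refreshes spent per frame
    return refresh_hz / ticks

print(vsync_fps(14))  # 60.0 -- fits within one refresh interval
print(vsync_fps(20))  # 30.0 -- needs two intervals (60 / 2)
print(vsync_fps(40))  # 20.0 -- needs three intervals (60 / 3)
```

This is exactly the 60 / 30 / 20 fps staircase: a frame that takes even slightly longer than one refresh interval has to wait for the next tick, halving the displayed rate.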

Post-processing

This is the general name for all the effects that are superimposed on a ready-made frame of a fully rendered 3D scene (in other words, on a two-dimensional image) to improve the quality of the final picture. Post-processing uses pixel shaders and is used in cases where additional effects require complete information about the entire scene. Such techniques cannot be applied in isolation to individual 3D objects without causing artifacts to appear in the frame.

High dynamic range (HDR)

An effect often used in game scenes with contrasting lighting. If one area of ​​the screen is very bright and another is very dark, a lot of the detail in each area is lost and they look monotonous. HDR adds more gradation to the frame and allows for more detail in the scene. To use it, you usually have to work with a wider range of colors than standard 24-bit precision can provide. Preliminary calculations occur in high precision (64 or 96 bits), and only at the final stage the image is adjusted to 24 bits.

HDR is often used to realize the effect of vision adaptation when a hero in games emerges from a dark tunnel onto a well-lit surface.
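The final compression from high-precision HDR values down to a displayable range can be illustrated with the classic Reinhard operator x / (1 + x). This is just one common tone-mapping curve chosen for simplicity; games use a variety of curves, and the helper names here are illustrative.

```python
def reinhard(hdr_value):
    """Map an unbounded HDR brightness into [0, 1) with the Reinhard
    operator x / (1 + x), standing in for the final compression from
    high-precision values down to the displayable range."""
    return hdr_value / (1.0 + hdr_value)

def to_8bit(x):
    """Quantize a [0, 1] brightness to a standard 8-bit channel value."""
    return round(x * 255)

# Low and mid values keep their detail, while extreme highlights are
# compressed smoothly instead of clipping to pure white:
for lum in (0.1, 1.0, 4.0, 100.0):
    print(lum, "->", to_8bit(reinhard(lum)))
```

Note that a brightness of 100 still lands below 255 rather than saturating, which is how contrasting areas keep their gradation.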

Bloom

Bloom is often used in conjunction with HDR, and it also has a fairly close relative, Glow, which is why these three techniques are often confused.

Bloom simulates the effect that can be seen when shooting very bright scenes with conventional cameras. In the resulting image, intense light appears to occupy more volume than it should and to "creep" over objects even though it lies behind them. When using Bloom, additional artifacts in the form of colored lines may appear on the borders of objects.

Film Grain

Grain is an artifact that occurs in analog TV with a poor signal, on old magnetic videotapes or photographs (in particular, digital images taken in low light). Players often disable this effect because it tends to spoil the picture rather than improve it. To see the difference, you can run Mass Effect with it on and off. In some horror games, such as Silent Hill, noise on the screen, on the contrary, adds atmosphere.

Motion Blur

Motion Blur – the effect of blurring the image when the camera moves quickly. It can be successfully used when the scene needs to be given more dynamics and speed, therefore it is especially in demand in racing games. In shooters, the use of blur is not always perceived unambiguously. Proper use of Motion Blur can add a cinematic feel to what's happening on screen.

Switched off

Included

The effect will also help, if necessary, to disguise the low frame rate and add smoothness to the gameplay.
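A crude way to picture motion blur is accumulating the last few rendered frames into one. Modern engines typically use per-pixel velocity buffers rather than frame accumulation, so treat this purely as a sketch of the visual idea; the function and data are illustrative.

```python
def motion_blur(frames, weights=None):
    """Blend the last few rendered frames (equal-length lists of pixel
    brightnesses) into one; a fast-moving bright pixel smears into a
    trail across the blended result."""
    n = len(frames)
    weights = weights or [1.0 / n] * n  # default: plain average
    width = len(frames[0])
    return [sum(w * f[i] for w, f in zip(weights, frames))
            for i in range(width)]

# A bright dot (1.0) moving one pixel per frame across a 1-D "screen":
frames = [[1, 0, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 1, 0]]
print(motion_blur(frames))  # the dot smears into a trail of ~0.33 values
```

The sharp dot becomes a faint streak spanning its whole path, which is also why blending frames can visually mask a low frame rate.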

SSAO

Ambient occlusion is a technique used to make a scene photorealistic by creating more believable lighting of the objects in it, which takes into account the presence of other objects nearby with their own characteristics of light absorption and reflection.

Screen Space Ambient Occlusion is a modified version of Ambient Occlusion and also simulates indirect lighting and shading. The appearance of SSAO was due to the fact that, at the current level of GPU performance, Ambient Occlusion could not be used to render scenes in real time. The increased performance in SSAO comes at the cost of lower quality, but even this is enough to improve the realism of the picture.

SSAO works according to a simplified scheme, but it has many advantages: the method does not depend on the complexity of the scene, does not use RAM, can function in dynamic scenes, does not require frame pre-processing and loads only the graphics adapter without consuming CPU resources.
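A drastically simplified flavour of the idea: for each pixel, look at nearby entries in the depth buffer and count how many are closer to the camera, since nearby geometry in front of a point blocks its ambient light. Real SSAO samples a hemisphere of offsets with range checks and noise; this toy 2D version only shows the screen-space principle.

```python
def ssao(depth, x, y, radius=1):
    """Crude screen-space occlusion estimate for pixel (x, y): sample
    the depth buffer in a small neighbourhood and count how many
    neighbours are closer to the camera. Returns 0.0 (fully lit) to
    1.0 (fully occluded)."""
    h, w = len(depth), len(depth[0])
    occluders = samples = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nx, ny = x + dx, y + dy
            if (dx or dy) and 0 <= nx < w and 0 <= ny < h:
                samples += 1
                if depth[ny][nx] < depth[y][x]:  # neighbour is in front
                    occluders += 1
    return occluders / samples

# A flat floor (depth 5) with one raised block (depth 2) next to (1, 1):
depth = [[5, 5, 5],
         [2, 5, 5],
         [5, 5, 5]]
print(ssao(depth, 1, 1))  # 0.125 -- one of eight neighbours occludes
print(ssao(depth, 2, 2))  # 0.0   -- open corner, nothing in front
```

Everything here reads only the depth buffer of the current frame, which is exactly why SSAO is independent of scene complexity and needs no pre-processing.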

Cel shading

Games with the Cel shading effect began to be made in 2000, and first of all they appeared on consoles. On PCs, this technique became truly popular only a couple of years later, after the release of the acclaimed shooter XIII. With the help of Cel shading, each frame practically turns into a hand-drawn drawing or a fragment from a children's cartoon.

Comics are created in a similar style, so the technique is often used in games related to them. Among the latest well-known releases is the shooter Borderlands, where Cel shading is visible to the naked eye.

Features of the technology are the use of a limited set of colors, as well as the absence of smooth gradients. The name of the effect comes from the word Cel (Celluloid), i.e. the transparent material (film) on which animated films are drawn.
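That limited palette comes from quantizing a continuous light intensity into a few flat bands. The sketch below shows the quantization step only (real cel shading also adds dark outlines); the band count and function are illustrative.

```python
def cel_shade(intensity, bands=3):
    """Quantize a continuous light intensity in [0, 1] into a few flat
    bands -- the stepped shading with no smooth gradients that gives
    the hand-drawn look."""
    step = min(int(intensity * bands), bands - 1)  # which band we fall into
    return step / (bands - 1)  # snap to one of `bands` flat levels

# A smooth 0..1 ramp collapses into three flat tones: 0.0, 0.5, 1.0.
print([cel_shade(i / 10) for i in range(11)])
```

Where ordinary shading would produce a gradient across a curved surface, this produces sharp boundaries between a handful of tones, like ink and flat paint.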

Depth of field

Depth of field is the distance between the near and far edges of space within which all objects will be in focus, while the rest of the scene will be blurred.

To a certain extent, depth of field can be observed simply by focusing on an object close in front of your eyes. Anything behind it will be blurred. The opposite is also true: if you focus on distant objects, everything in front of them will turn out blurry.

You can see the effect of depth of field in an exaggerated form in some photographs. This is the degree of blur that is often attempted to be simulated in 3D scenes.

In games using Depth of field, the gamer usually feels a stronger sense of presence. For example, when looking somewhere through the grass or bushes, he sees only small fragments of the scene in focus, which creates the illusion of presence.
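A post-process depth of field boils down to computing, per pixel, how strongly to blur based on distance from the focal plane. The linear falloff, the in-focus band width, and the function below are illustrative simplifications of the circle-of-confusion math real engines use.

```python
def blur_amount(obj_dist, focus_dist, focus_range=2.0, max_blur=1.0):
    """Blur strength for an object at obj_dist when the camera is
    focused at focus_dist: zero inside the in-focus band, growing
    linearly (capped at max_blur) the further the object is from it."""
    offset = abs(obj_dist - focus_dist)
    if offset <= focus_range / 2:
        return 0.0  # inside the depth of field: perfectly sharp
    return min((offset - focus_range / 2) / focus_dist, max_blur)

# Focused at 10 m with a 2 m in-focus band (9 m .. 11 m):
print(blur_amount(10.0, 10.0))  # 0.0 -- in focus
print(blur_amount(10.5, 10.0))  # 0.0 -- still inside the band
print(blur_amount(15.0, 10.0))  # 0.4 -- well behind the band, blurred
```

The returned value would then drive the radius of a blur filter applied to the finished frame, which is why the effect costs fps regardless of scene content.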

Performance Impact

To find out how enabling certain options affects performance, we used the gaming benchmark Heaven DX11 Benchmark 2.5. All tests were carried out on an Intel Core2 Duo e6300, GeForce GTX460 system at a resolution of 1280x800 pixels (with the exception of vertical synchronization, where the resolution was 1680x1050).

As already mentioned, anisotropic filtering has virtually no effect on the number of frames. The difference between anisotropy disabled and 16x is only 2 frames, so we always recommend setting it to maximum.

Anti-aliasing in Heaven Benchmark reduced fps more significantly than we expected, especially in the heaviest 8x mode. However, since 2x is enough to noticeably improve the picture, we recommend choosing this option if playing at higher levels is uncomfortable.

Tessellation, unlike the previous parameters, can take on an arbitrary value in each individual game. In Heaven Benchmark, the picture without it deteriorates significantly, while at the maximum level it becomes, on the contrary, a little unrealistic. Therefore, you should set intermediate values: Moderate or Normal.

A higher resolution was chosen for the vertical sync test so that fps would not be limited by the screen's refresh rate. As expected, the number of frames throughout almost the entire test with synchronization turned on remained firmly at around 20 or 30 fps. This is because frames are displayed simultaneously with the screen refresh, and at a scanning frequency of 60 Hz this can happen not with every pulse but only with every second one (60/2 = 30 frames/s) or every third (60/3 = 20 frames/s). When V-Sync was turned off, the number of frames increased, but characteristic tearing artifacts appeared on the screen. Triple buffering did not have any positive effect on the smoothness of the scene. This may be because the video card driver offers no option to force triple buffering off, so the benchmark ignored its normal deactivation and kept using the function anyway.

If Heaven Benchmark were a game, then at maximum settings (1280×800; AA 8x; AF 16x; Tessellation Extreme) it would be uncomfortable to play, since 24 frames is clearly not enough. With minimal quality loss (1280×800; AA 2x; AF 16x; Tessellation Normal) you can achieve a more acceptable 45 fps.

I hope this article will not only allow you to better optimize the game for your computer, but also expand your horizons. Very soon there will be an article about the real impact of the number of FPS on the perception of the game.


