
How the Graphics Core of a Processor Works: The CPU, the GPU, and Integrated Graphics

The source material for rendering is a set of triangles of various sizes that make up all the objects of the virtual world: the landscape, game characters, monsters, weapons, and so on. By themselves, however, models built from triangles look like wireframes, so textures - colored two-dimensional "wallpaper" - are overlaid on them. Both textures and models are placed in the memory of the graphics card, and then, as each frame of the game is created, a rendering cycle consisting of several stages is performed.

1. The game program sends to the GPU information describing the game scene: the composition of the objects present, their color, position relative to the viewpoint, lighting and visibility. Additional data is also transmitted that characterizes the scene and allows the video card to increase the realism of the resulting image by adding fog, blur, glare, etc.

2. The GPU arranges 3D models in the frame, determines which of the triangles included in them are visible and cuts off those hidden by other objects or, for example, by shadows.

Light sources are then created and their effect on the color of illuminated objects is determined. This stage of rendering is called "transformation and lighting" (T&L - Transformation & Lighting).

3. Visible triangles are textured using various filtering technologies. Bilinear filtering blends the four texels nearest the sampled point within a single mip level (a pre-scaled version of the texture). Its drawback is the well-defined boundary between sharp and blurry regions that appears on surfaces receding from the viewer, where the renderer switches from one mip level to the next. Trilinear filtering additionally interpolates between two adjacent mip levels, producing smoother transitions.

However, with both technologies only the textures that face the viewer head-on look really sharp; surfaces viewed at an oblique angle remain blurred. Anisotropic filtering is used to prevent this.

This texture filtering method is enabled in the video adapter's driver settings or directly in the game. You can also change the strength of anisotropic filtering: 2x, 4x, 8x or 16x - the more "x"s, the sharper textures on inclined surfaces will be. But higher filtering levels increase the load on the video card, which can reduce the number of frames generated per second.

Various additional effects can be applied at the texturing stage. For example, environment mapping makes it possible to create surfaces that reflect the game scene: mirrors, shiny metal objects, and so on. Another impressive effect is bump mapping, which makes light falling on a surface at an angle create the appearance of relief.
Texturing is the last stage of rendering, after which the image enters the frame buffer of the video card and is displayed on the monitor screen.
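The stages above can be sketched as a tiny Python loop. Everything here - the dictionary layout, the field names, the one-texel "texture lookup" - is invented purely for illustration; a real GPU implements these stages in massively parallel hardware.

```python
# A highly simplified sketch of the per-frame rendering cycle described
# above. All data structures and helper names are illustrative only.

def render_frame(scene, frame_buffer):
    # Stage 1: the game supplies the scene - objects made of triangles,
    # plus lighting information.
    triangles = [t for obj in scene["objects"] for t in obj["triangles"]]

    # Stage 2: transformation & lighting - keep only the visible
    # triangles and apply the effect of light sources to their color.
    visible = [t for t in triangles if t["visible"]]
    for t in visible:
        t["lit_color"] = t["base_color"] * scene["light_intensity"]

    # Stage 3: texturing - look up a (filtered) texel for each triangle.
    for t in visible:
        t["texel"] = t["texture"][t["v"]][t["u"]]

    # Stage 4: the finished image goes into the frame buffer,
    # from which it is displayed on the monitor.
    frame_buffer.extend(visible)
    return frame_buffer
```

Feeding such a function a toy scene with one visible and one hidden triangle would leave exactly one lit, textured triangle in the frame buffer - the same flow the four stages describe.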

Electronic components of the video card

Now that it is clear how a three-dimensional image is built, we can list the technical characteristics of the video card components that determine the speed of this process. The main components of a video card are the graphics processor (GPU - Graphics Processing Unit) and video memory.

GPU

One of the main characteristics of this component (as with the central processor of a PC) is the clock frequency. All else being equal, the higher it is, the faster data is processed and the more frames per second (FPS) games can produce. GPU frequency is an important but not the only parameter affecting performance: modern models from Nvidia and ATI with comparable performance ship at quite different GPU frequencies.

High-performance Nvidia adapters have GPU clock speeds of 550-675 MHz, while mid-range and cheap low-performance cards run below 500 MHz.
At the same time, the GPUs of ATI's "top" cards run at 600-800 MHz, and even the cheapest video adapters do not drop below 500 MHz.

Yet even though Nvidia's GPUs are clocked lower than ATI's, they deliver at least the same level of performance, and often better. The reason is that other characteristics of the GPU matter no less than the clock frequency.

1. The number of texture modules (TMU - Texture Mapping Units) - GPU elements that perform texture mapping on triangles. The speed of building a three-dimensional scene directly depends on the number of TMUs.
2. The number of render output units (ROP - Render Output Pipeline) - blocks that perform the final "service" operations on pixels, such as blending, anti-aliasing, and writing finished pixels to the frame buffer. Modern GPUs tend to have fewer ROPs than texture units, which limits overall output speed. For example, the Nvidia GeForce 8800 GTX chip has 32 texture units and 24 ROPs, while the GPU of the ATI Radeon HD 3870 has only 16 texture units and 16 ROPs.

The performance of the texture units is expressed as fill rate - texturing speed measured in texels per second. The GeForce 8800 GTX has a texel fill rate of 18.4 billion texels/s. A more objective indicator, however, is the fill rate measured in pixels, since it reflects the speed of the ROPs; for the GeForce 8800 GTX this value is 13.8 billion pixels/s.
3. The number of shader units (shader processors) that - as the name suggests - handle pixel and vertex shaders. Modern games make heavy use of shaders, so the number of shader units is critical to performance.

Not so long ago, GPUs had separate modules for executing pixel and vertex shaders. Nvidia's GeForce 8 series graphics cards and ATI's Radeon HD 2000 adapters were the first to move to a unified shader architecture: their GPUs contain blocks capable of processing both pixel and vertex shaders - universal shader (stream) processors. This approach makes it possible to fully utilize the chip's computing resources for any ratio of pixel to vertex calculations in the game code. In addition, in modern GPUs the shader units often run at a higher frequency than the rest of the GPU (for example, in the GeForce 8800 GTX they run at 1350 MHz versus the "general" 575 MHz).

Note that Nvidia and ATI count the number of shader processors in their chips differently. For example, the Radeon HD 3870 claims 320 such blocks, while the GeForce 8800 GTX has only 128. In fact, ATI counts the individual ALU components rather than whole shader processors. Each of its shader processors contains five such components, so the Radeon HD 3870 really has only 64 whole shader units - one reason this video card is slower than the GeForce 8800 GTX.
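The arithmetic behind the figures quoted above is simple: a fill rate is just the number of units multiplied by the clock. A small sketch - the helper functions are my own, but the GeForce 8800 GTX and Radeon HD 3870 numbers are the ones given in the text:

```python
# Back-of-the-envelope GPU spec calculations for the section above.

def texel_fill_rate(tmus: int, core_mhz: int) -> float:
    """Texturing speed in billions of texels per second (Gtexel/s)."""
    return tmus * core_mhz / 1000

def pixel_fill_rate(rops: int, core_mhz: int) -> float:
    """Output speed in billions of pixels per second (Gpixel/s)."""
    return rops * core_mhz / 1000

# GeForce 8800 GTX: 32 TMUs, 24 ROPs, 575 MHz core clock
print(texel_fill_rate(32, 575))   # 18.4 Gtexel/s, as quoted in the text
print(pixel_fill_rate(24, 575))   # 13.8 Gpixel/s, limited by the ROPs

# ATI counts the five ALU components of each shader processor
# separately, so the Radeon HD 3870's "320 stream processors" are
# really 320 / 5 = 64 whole shader units (versus 128 in the 8800 GTX).
print(320 // 5)                   # 64
```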

Video card memory

Video memory serves the GPU the same way RAM serves the PC's central processor: it stores all the "building material" needed to create an image - textures, geometric data, shader programs, and so on.

What video memory characteristics affect the performance of a graphics card

1. Size. Modern games use a huge number of high-resolution textures, and storing them requires a corresponding amount of video memory. Most of today's "top" and mid-range video adapters ship with 512 MB of memory, which cannot be expanded later. Cheaper video cards come with half that amount, which is no longer enough for modern games.

When memory runs short, the GPU is forced to constantly load textures from the PC's RAM, with which communication is much slower, so performance can noticeably decrease. On the other hand, an excessively large amount of memory may give no speed increase at all, since the extra "space" simply goes unused. Buying a video adapter with 1 GB of memory makes sense only for "top" products (ATI Radeon HD 4870, Nvidia GeForce 9800, and the latest GeForce GTX 200-series cards).

2. Frequency. For modern video cards this parameter ranges from 800 to 3200 MHz and depends primarily on the type of memory chips used. DDR2 chips top out at about 800 MHz and are used only in the cheapest graphics adapters. GDDR3 and GDDR4 memory raises the range to 2400 MHz, and the latest ATI Radeon HD 4870 cards use GDDR5 memory at a fantastic 3200 MHz.

Memory frequency, like GPU frequency, strongly affects gaming performance, especially with full-screen anti-aliasing enabled. All else being equal, the higher the memory frequency, the higher the performance, because the GPU spends less time idle waiting for data to arrive. A memory frequency of 1800 MHz is roughly the lower limit separating high-performance cards from slower ones.

3. The width of the video memory bus affects overall performance even more strongly than the memory frequency. It shows how much data the memory can transfer in one clock cycle, so doubling the bus width is equivalent to doubling the memory clock. Most modern video cards have a 256-bit memory bus; cutting it to 128 or, worse, 64 bits deals a heavy blow to performance. In the most expensive video cards the bus can be widened to 512 bits (so far only the latest GeForce GTX 280 can boast of this), which comes in very handy given the power of their graphics processors.
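The equivalence between bus width and frequency follows from the standard bandwidth formula: bytes per transfer times transfers per second. The function below is a sketch (the 1800 MHz figure is the effective memory frequency mentioned earlier; the function name is my own):

```python
def memory_bandwidth_gb_s(bus_width_bits: int, effective_mhz: int) -> float:
    """Peak memory bandwidth in GB/s: (bits / 8) bytes per transfer
    multiplied by the number of transfers per second."""
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

print(memory_bandwidth_gb_s(256, 1800))  # 57.6 GB/s - a typical modern card
print(memory_bandwidth_gb_s(512, 1800))  # 115.2 - doubling the bus doubles it
print(memory_bandwidth_gb_s(128, 3600))  # 57.6 - same as doubling the clock
```

This also shows why a 64-bit bus is so damaging: at the same frequency it delivers only a quarter of the 256-bit card's bandwidth.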

Where to find information about the technical characteristics of the video card

If a graphics card has outstanding parameters (high processor and memory clocks, a large amount of memory), they are usually printed right on the box. But the most complete specifications for video adapters and the GPUs they are based on can only be found online. General information is posted on the GPU makers' corporate websites: Nvidia (www.nvidia.ru) and ATI (www.ati.amd.com/ru). Details can be found on unofficial websites dedicated to video cards, such as www.nvworld.ru and www.radeon.ru, and Wikipedia (www.ru.wikipedia.org) is also helpful. Users buying a card with an eye toward overclocking can turn to www.overclockers.ru.

Simultaneous use of two video cards

To get maximum performance, you can install two video cards in a computer at once. Manufacturers provide technologies for this: SLI (Scalable Link Interface, used by Nvidia cards) and CrossFire (developed by ATI). To take advantage of them, the motherboard must not only have two PCI-E slots for video cards but also support one of these technologies. Many motherboards based on Intel chipsets can run ATI cards in CrossFire mode, but only boards based on Nvidia's own chipsets can combine two (or even three!) Nvidia video cards into one "team". If the motherboard does not support these technologies, two video cards will still work in it, but only one will be used in games; the second will merely allow the image to be output to a couple of additional monitors.
Note that using two video cards does not double performance. The average result to count on is about a 50% increase in speed. Moreover, the full potential of the tandem is revealed only with a powerful central processor and a high-resolution monitor.

What are shaders

Shaders are small programs in the game code that can change the way a virtual scene is built, opening up possibilities unattainable with traditional 3D rendering tools. Modern game graphics are unthinkable without shaders.

Vertex shaders change the geometry of 3D objects, making it possible to implement natural animation of complex game-character models, physically correct deformation of objects, or realistic water waves. Pixel shaders change the color of pixels and allow effects such as realistic circles and ripples on water, complex lighting, and surface relief. In addition, pixel shaders are used for frame post-processing: all kinds of "cinematic" effects such as motion blur of moving objects, super-bright light, and so on.

There are several versions of the Shader Model specification. All modern video cards support pixel and vertex shaders of version 4.0, which provide more realistic effects than the previous, third version. Shader Model 4.0 is supported by the DirectX 10 API, which runs exclusively on Windows Vista; in addition, games themselves must be written for DirectX 10.

Do I need an AGP video card for an old system

If your PC's motherboard has an AGP port, upgrade options are very limited. The most the owner of such a system can hope for is a card of the Radeon HD 3850 series from AMD (ATI).

By today's standards their performance is below average. Moreover, the vast majority of motherboards with AGP support were designed for the outdated Intel Pentium 4 and AMD Athlon XP processors, so overall system performance will still fall short for modern 3D graphics. Only motherboards for AMD Athlon 64 processors with Socket 939 are worth equipping with new AGP video cards. In all other cases, it is better to buy a new computer with a PCI-E interface, DDR2 (or DDR3) memory and a modern CPU.


Hello, dear users and lovers of computer hardware. Today we will discuss what integrated graphics in a processor is, why it is needed at all, and whether such a solution is an alternative to discrete, that is, external video cards.

From an engineering standpoint, the integrated graphics core widely used by Intel and AMD in their products is not a video card as such. It is a video chip integrated into the CPU to perform the basic duties of a discrete accelerator. But let's go through everything in more detail.


History of appearance

Manufacturers first started putting graphics on their own chips in the mid-2000s. Intel began with Intel GMA, but this technology performed rather weakly and was unsuitable for video games. It was succeeded by the well-known HD Graphics line (at the moment the latest representative is HD Graphics 630 in eighth-generation Coffee Lake chips). The video core made its debut on the Westmere architecture, in Arrandale mobile chips and desktop Clarkdale chips (2010).

AMD went the other way. First, the company bought ATI, the once-famous graphics card maker. It then began developing its own AMD Fusion technology, creating APUs - CPUs with an integrated video core (Accelerated Processing Unit). The first-generation chips debuted in the Llano architecture, followed by Trinity, and Radeon R7-series graphics then became a long-running fixture in mid-range laptops and netbooks.

Advantages of Embedded Solutions in Games

So, why do we need an integrated card, and how does it differ from a discrete one?

We will try to make a comparison, explaining each point and backing everything up with arguments. Let's start with performance. We will consider and compare the most relevant current solutions from Intel (HD 630, with a graphics clock of 350-1200 MHz) and AMD (Vega 11, 300-1300 MHz), as well as the advantages these solutions provide.
Let's start with system cost. Integrated graphics let you save substantially on a discrete card - up to $150 - which is critical when building the most economical PC for office use.

The frequency of the AMD graphics accelerator is noticeably higher, and the performance of the adapter from the red ones is significantly higher, which indicates the following indicators in the same games:

Game           | Settings         | Intel HD 630 | AMD Vega 11
PUBG           | Full HD, low     | 8–14 fps     | 26–36 fps
GTA V          | Full HD, medium  | 15–22 fps    | 55–66 fps
Wolfenstein II | HD, low          | 9–14 fps     | 85–99 fps
Fortnite       | Full HD, medium  | 9–13 fps     | 36–45 fps
Rocket League  | Full HD, high    | 15–27 fps    | 35–53 fps
CS:GO          | Full HD, maximum | 32–63 fps    | 105–164 fps
Overwatch      | Full HD, medium  | 15–22 fps    | 50–60 fps

As you can see, Vega 11 is the better choice for inexpensive "gaming" systems: in some cases the adapter's performance approaches that of a full-fledged GeForce GT 1050, and in most online games it performs perfectly well.

At the moment this graphics core ships only with the AMD Ryzen 5 2400G processor, but it is definitely worth a look.
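As a rough illustration of the gap, here is a small Python sketch that computes Vega 11's advantage game by game using the midpoints of the fps ranges quoted above. The numbers come from the table; the script itself is just an illustration.

```python
# fps ranges from the table above: game -> ((intel_lo, intel_hi),
#                                           (amd_lo, amd_hi))
fps = {
    "PUBG":           ((8, 14),  (26, 36)),
    "GTA V":          ((15, 22), (55, 66)),
    "Wolfenstein II": ((9, 14),  (85, 99)),
    "Fortnite":       ((9, 13),  (36, 45)),
    "Rocket League":  ((15, 27), (35, 53)),
    "CS:GO":          ((32, 63), (105, 164)),
    "Overwatch":      ((15, 22), (50, 60)),
}

def midpoint(rng):
    """Midpoint of an (lo, hi) fps range."""
    lo, hi = rng
    return (lo + hi) / 2

for game, (intel, amd) in fps.items():
    ratio = midpoint(amd) / midpoint(intel)
    print(f"{game}: Vega 11 is about {ratio:.1f}x faster")
```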

Option for office tasks and home use

What requirements do you most often put forward to your PC? If we exclude games, we get the following set of parameters:

  • watching movies in HD quality and videos on YouTube (Full HD and, occasionally, 4K);
  • working in a browser;
  • listening to music;
  • communicating with friends or colleagues via instant messengers;
  • application development;
  • office tasks (Microsoft Office and similar programs).

All of these tasks work fine with an integrated graphics core at resolutions up to Full HD.
The only nuance that must be taken into account is the set of video outputs on the motherboard you are going to pair with the processor. Check this in advance so that there are no problems later.

Disadvantages of integrated graphics

Since we figured out the pros, you need to work out the disadvantages of the solution.

  • The main disadvantage is performance. Yes, you can play more or less modern games at low-to-medium settings with a clear conscience, but graphics enthusiasts will definitely not like this idea. And if you work with graphics professionally (processing, rendering, video editing, post-production), perhaps across 2-3 monitors, integrated video will definitely not suit you.

  • Issue number two: the lack of its own high-speed memory (modern cards use GDDR5, GDDR5X and HBM). Formally the video chip can address up to 64 GB of memory, but where does it come from? From system RAM. This means the system must be planned in advance so that there is enough RAM for both applications and graphics. Keep in mind that modern DDR4 modules are much slower than GDDR5, so data processing takes longer.
  • The next drawback is heat. The graphics core is one more heat source inside the processor package, and in theory it runs no cooler than the CPU cores. You can cool all this splendor with the boxed (stock) cooler, but be prepared for periodic frequency throttling under especially heavy loads. Buying a more powerful cooler solves the problem.
  • Finally, the last nuance: the video cannot be upgraded without replacing the processor. In other words, to improve the integrated video core you will literally have to buy a new CPU. A dubious benefit, isn't it? In that case it is easier to buy a discrete accelerator after a while: manufacturers such as AMD and nVidia offer great solutions for every taste.

Results

Integrated graphics are a great option in 3 cases:

  • you need a temporary video card, because there was not enough money for an external one;
  • the system was originally conceived as extra-budgetary;
  • you are building a home multimedia station (HTPC) that focuses on the embedded core.

We hope there is now one less question in your head, and you know why manufacturers create their APUs.

In the following articles we will talk about terms such as virtualization and more. Stay tuned to keep abreast of everything related to hardware.

The integrated graphics processor plays an important role for both gamers and undemanding users.

The quality of games, movies, online video and images depends on it.

Principle of operation

With integrated graphics, the GPU is built into the processor or the computer's motherboard rather than existing as a separate board.

As a rule, this is done to eliminate the need to install a discrete graphics adapter.

This technology helps to reduce the cost of the finished product. In addition, due to the compactness and low power consumption of such processors, they are often installed in laptops and low-power desktop computers.

Integrated graphics processors have filled this niche so thoroughly that 90% of laptops on US store shelves carry one.

Instead of dedicated video memory, integrated graphics usually borrow the computer's own RAM.

True, this solution somewhat limits performance, since the CPU and the GPU share the same memory bus.

Such a "neighborhood" affects the speed of tasks, especially when working with complex graphics and during gameplay.

Kinds

Integrated graphics come in three groups:

  1. Shared-memory graphics - a device that shares memory with the main processor. This greatly reduces cost and improves energy efficiency, but degrades performance; for those who work with complex programs, integrated GPUs of this kind are unlikely to be suitable.
  2. On-board discrete graphics - a video chip and one or two video memory modules soldered onto the motherboard. This noticeably improves image quality and makes working with three-dimensional graphics feasible with good results. However, you will have to pay a lot for it, and if you want a processor that is high-performance in every respect, the cost can be enormous. The electricity bill will also rise slightly: the power consumption of such GPUs is higher than usual.
  3. Hybrid discrete graphics - a combination of the two previous types, made possible by the PCI Express bus. Memory is accessed both through the soldered video memory and through RAM. Manufacturers intended this as a compromise, but it still does not eliminate all the shortcomings.

Manufacturers

As a rule, large companies handle the manufacture and development of integrated graphics processors, but many smaller firms are also active in this area.

Enable

Enabling the integrated card is easy to do in the BIOS. Look for Primary Display or Init Display First. If you do not see anything like that, look for Onboard, PCI, AGP or PCI-E (it all depends on the buses present on the motherboard).

By selecting PCI-E, for example, you enable the PCI-Express video card, and disable the built-in integrated one.

Thus, to enable the integrated video card, you need to find the appropriate parameters in the BIOS. Often the activation process is automatic.

Disable

Disabling is best done in BIOS. This is the simplest and most unpretentious option, suitable for almost all PCs. The only exceptions are some laptops.

Again, find Peripherals or Integrated Peripherals in BIOS if you are working on a desktop.

For laptops, the name of the function is different, and not the same everywhere. So just look for something related to graphics. For example, the desired options can be placed in the Advanced and Config sections.

Shutdown is also carried out in different ways. Sometimes it is enough just to click “Disabled” and set the PCI-E video card to the first in the list.

If you are a laptop user, don't be alarmed if you cannot find a suitable option - your machine may simply not have such a function. For all other devices the rule is simple: no matter what the BIOS itself looks like, the underlying settings are the same.

If you have two video cards and they are both shown in the device manager, then the matter is quite simple: right-click on one of them and select “disable”. However, keep in mind that the display may go out. And, most likely, it will.

However, this too is a solvable problem: it is enough to restart the computer and connect the monitor to the adapter that remains active.

Perform all subsequent settings on it. If this method does not work, roll back your actions using safe mode. You can also fall back on the previous method - through the BIOS.

Two programs - NVIDIA Control Center and Catalyst Control Center - let you configure which video adapter is used.

They are the least troublesome of the methods described: the screen is unlikely to turn off, and you will not accidentally break anything in the BIOS either.

For NVIDIA, all settings are in the 3D section.

You can select the preferred video adapter for the entire operating system or for particular programs and games.

In the Catalyst software, an identical function is located in the "Power" option under the "Switchable Graphics" sub-item.

Thus, switching between GPUs is not difficult.

There are different methods, both through programs and through the BIOS. Enabling or disabling integrated graphics may be accompanied by some glitches, mainly related to the image.

The screen may go dark or appear distorted. Nothing should affect the files on the computer itself, unless you changed something in the BIOS.

Conclusion

In summary, integrated graphics processors are in demand because of their low cost and compactness.

The price for this is the performance level of the computer itself.

In some cases integrated graphics are all you need, while discrete adapters remain the better choice for working with three-dimensional graphics.

The industry leaders here are Intel, AMD and Nvidia, each offering its own graphics accelerators, processors and other components.

The latest popular models are Intel HD Graphics 530 and AMD A10-7850K. They are quite functional, but have some flaws. In particular, this applies to the power, performance and cost of the finished product.

You can enable or disable the GPU with the integrated core yourself through the BIOS, utilities and various programs, or the computer may well do it for you - it all depends on which video card the monitor is connected to.

The graphics processing unit (GPU) is no less important a component of a mobile device's SoC than the central processing unit (CPU). Over the past five years, the rapid development of the Android and iOS mobile platforms has spurred mobile GPU developers, and today no one is surprised by mobile games with 3D graphics at PlayStation 2 level or higher. I have devoted the second article in my mobile-hardware primer series to graphics processors.

Currently, most mobile graphics chips are built around one of four core families: PowerVR (Imagination Technologies), Mali (ARM), Adreno (Qualcomm, formerly ATI Imageon) and GeForce ULP (nVIDIA).

PowerVR is a division of Imagination Technologies that once developed graphics for desktop systems but was forced out of that market under pressure from ATI and nVIDIA. Today, PowerVR develops perhaps the most powerful GPUs for mobile devices. PowerVR chips are used in processors by companies such as Samsung, Apple and Texas Instruments; for example, various revisions of PowerVR GPUs have appeared in every generation of the Apple iPhone. The series 5 and 5XT chips remain relevant. The fifth series comprises single-core chips: SGX520, SGX530, SGX531, SGX535, SGX540 and SGX545. The 5XT-series chips can have from 1 to 16 cores: SGX543, SGX544, SGX554. The specifications of the series 6 (Rogue) chips are still being finalized, but the performance range of the series is already known: 100-1000 GFLOPS.

Mali is a family of GPUs developed and licensed by the British company ARM. Mali chips are part of various SoCs manufactured by Samsung, ST-Ericsson, Rockchip and others. For example, the Mali-400 MP is part of the Samsung Exynos 421x SoCs used in smartphones such as the Samsung Galaxy S II and S III, and in two generations of Samsung's Galaxy Note "phablets". The Mali-400 MP remains relevant today in dual- and quad-core versions. The Mali-T604 and Mali-T658 chips are on the way, with up to five times the performance of the Mali-400.

Adreno is the family of graphics chips developed by Qualcomm's division of the same name. The name Adreno is an anagram of Radeon. Before Qualcomm, the division belonged to ATI and the chips were called Imageon. Over the past few years Qualcomm has used 2xx-series chips in its SoCs: Adreno 200, 205, 220 and 225. The last of these is a very fresh chip, made on a 28 nm process and the most powerful of the Adreno 2xx series - about six times faster than the "old" Adreno 200. In 2013, more and more devices will ship with Adreno 305 and Adreno 320 GPUs, roughly twice as powerful as the 225.

GeForce ULP (ultra-low power) is the mobile version of nVIDIA's video chip, included in every generation of the Tegra system-on-a-chip. One of Tegra's most important competitive advantages is exclusive content available only to devices based on this SoC. nVIDIA has traditionally had close relationships with game developers, and its Content Development team works with them to optimize games for GeForce graphics solutions. To deliver such games, nVIDIA even launched the Tegra Zone Android app, a specialized Android Market equivalent where you can download Tegra-optimized titles.

GPU performance is usually measured in three dimensions:

– the number of triangles per second, usually in millions (MTriangles/s);
– the number of pixels per second, usually in millions (MPixel/s);
– the number of floating-point operations per second, usually in billions (GFLOPS).

"Flops" deserve a short explanation. FLOPS (FLoating-point Operations Per Second) is the number of computational operations on floating-point operands performed per second; a floating-point operand is simply a non-integer number. Ctrl+F and the table below will help you figure out which graphics processor is installed in your smartphone. Note that the GPUs of different smartphones run at different frequencies. To calculate the performance in GFLOPS for a specific model, divide the number in the "Performance in GFLOPS" column by 200 and multiply by that model's GPU frequency (for example, in the Galaxy S III the GPU runs at 533 MHz, so 7.2 / 200 * 533 = 19.188):

Smartphone/tablet | SoC (CPU) | GPU | Performance in GFLOPS
Samsung Galaxy S4 | Samsung Exynos 5410 | PowerVR SGX544MP3 | 21.6 @200 MHz
HTC One | Qualcomm Snapdragon 600 APQ8064T | Adreno 320 | 20.5 @200 MHz
Samsung Galaxy S III, Galaxy Note II, Galaxy Note 10.1 | Samsung Exynos 4412 | Mali-400 MP4 | 7.2 @200 MHz
Samsung Chromebook XE303C12, Nexus 10 | Samsung Exynos 5250 | Mali-T604 MP4 | 36 @200 MHz
Samsung Galaxy S II, Galaxy Note, Tab 7.7, Galaxy Tab 7 Plus | Samsung Exynos 4210 | Mali-400 MP4 | 7.2 @200 MHz
Samsung Galaxy S, Wave, Wave II, Nexus S, Galaxy Tab, Meizu M9 | Samsung Exynos 3110 | PowerVR SGX540 | 3.2 @200 MHz
Apple iPhone 3GS, iPod touch 3gen | Samsung S5PC100 | PowerVR SGX535 | 1.6 @200 MHz
LG Optimus G, Nexus 4, Sony Xperia Z | Qualcomm APQ8064 (Krait cores) | Adreno 320 | 20.5 @200 MHz
HTC One XL, Nokia Lumia 920, Lumia 820, Motorola Razr HD, Razr M, Sony Xperia V | Qualcomm MSM8960 (Krait cores) | Adreno 225 | 12.8 @200 MHz
HTC One S, HTC Windows Phone 8X, Sony Xperia TX/T | Qualcomm MSM8260A | Adreno 220 | ~8.5* @200 MHz
HTC Desire S, Incredible S, Desire HD, Sony Ericsson Xperia Arc, Nokia Lumia 800, Lumia 710 | Qualcomm MSM8255 | Adreno 205 | ~4.3* @200 MHz
Nokia Lumia 610, LG P500 | Qualcomm MSM7227A | Adreno 200 | ~1.4* @128 MHz
Motorola Milestone, Samsung i8910, Nokia N900 | TI OMAP3430 | PowerVR SGX530 | 1.6 @200 MHz
Samsung Galaxy Nexus, Huawei Ascend P1, Ascend D1, Amazon Kindle Fire HD 7″ | TI OMAP4460 | PowerVR SGX540 | 3.2 @200 MHz
RIM BlackBerry PlayBook, LG Optimus 3D P920, Motorola ATRIX 2, Milestone 3, RAZR, Amazon Kindle Fire (1st and 2nd generations) | TI OMAP4430 | PowerVR SGX540 | 3.2 @200 MHz
Motorola Defy, Milestone 2, Cliq 2, Defy+, Droid X, Nokia N9, N950, LG Optimus Black, Samsung Galaxy S scLCD | TI OMAP3630 | PowerVR SGX530 | 1.6 @200 MHz
Acer Iconia Tab A210/A211/A700/A701/A510, ASUS Transformer Pad, Google Nexus 7, Eee Pad Transformer Prime, Transformer Pad Infinity, Microsoft Surface, Sony Xperia Tablet S, HTC One X/X+, LG Optimus 4X HD, Lenovo IdeaPad Yoga | nVidia Tegra 3 | GeForce ULP | 4.8 @200 MHz
Acer Iconia Tab A500, A501, A100, ASUS Eee Pad Slider, Eee Pad Transformer, HTC Sensation/XE/XL/4G, Lenovo IdeaPad K1, ThinkPad Tablet, LG Optimus Pad, Optimus 2X, Motorola Atrix 4G, Electrify, Photon 4G, Xoom, Samsung Galaxy Tab 10.1, Galaxy Tab 8.9, Sony Tablet P, Tablet S | nVidia Tegra 2 | GeForce ULP | 3.2 @200 MHz
Apple iPhone 5 | Apple A6 | PowerVR SGX543MP3 | 19.2 @200 MHz
Apple iPad 2, iPhone 4S, iPod touch 5gen, iPad mini | Apple A5 | PowerVR SGX543MP2 | 12.8 @200 MHz
Apple iPad, iPhone 4, iPod touch 4gen | Apple A4 | PowerVR SGX535 | 1.6 @200 MHz

* - data are approximate.
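The frequency-scaling rule described above is easy to mechanize. A minimal sketch in plain Python, using the Galaxy S III values from the table (the function name `gflops_at` is my own, not from the original):

```python
def gflops_at(gflops_at_200mhz: float, clock_mhz: float) -> float:
    """Scale a GFLOPS figure quoted at the 200 MHz reference clock
    to the actual GPU clock of a specific device.

    GFLOPS scales linearly with clock speed, so we divide by the
    200 MHz reference and multiply by the real frequency.
    """
    return gflops_at_200mhz / 200.0 * clock_mhz

# Galaxy S III: Mali-400 MP4 rated 7.2 GFLOPS @200MHz, running at 533 MHz
print(round(gflops_at(7.2, 533), 3))  # 19.188
```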

Here is another table with the absolute performance figures of the most popular upper-price-range smartphones (the table itself was an image in the original and is not reproduced here; * - unofficial data).

The power of mobile graphics grows year by year: already this year we are seeing PS3/Xbox 360-level games on top smartphones. Along with performance, however, the power consumption of SoCs is rising sharply, and the battery life of mobile devices is shrinking indecently. Well, let's wait for a breakthrough in battery technology!

Another energy eater in a modern mobile device is, of course, the display. Screens in mobile phones keep getting prettier: the displays of smartphones released just a year apart differ dramatically in picture quality. In the next article of the series, I will talk about displays: what types exist, what PPI is, what determines power consumption, and so on.

CPUs and GPUs are quite similar: both are built from hundreds of millions of transistors and can process billions of operations per second. But how exactly do these two key components of any home computer differ?

In this article, we will try to explain, in simple and accessible terms, the difference between a CPU and a GPU. But first, let's look at each processor separately.

The CPU (Central Processing Unit) is often called the "brain" of the computer. It contains hundreds of millions of transistors, with whose help various calculations are performed. Home computers typically have processors with 1 to 4 cores and clock speeds of roughly 1 GHz to 4 GHz.

The CPU is powerful because it can do everything: a computer can perform a task only because its processor can perform that task. Programmers achieve this thanks to the broad instruction sets and extensive feature lists that modern CPUs share.

What is a GPU?

A GPU (Graphics Processing Unit) is a specialized type of microprocessor optimized for one very specific job: computing and displaying graphics. The GPU runs at a lower clock speed than the CPU but has many more processing cores.

You could also say that the GPU is a specialized processor built for one specific purpose: rendering video. During rendering, the GPU performs simple mathematical calculations a huge number of times. It has thousands of cores that all work simultaneously. Although each GPU core is slower than a CPU core, the GPU as a whole is more efficient at the simple mathematical calculations needed to display graphics. This massive parallelism is what lets the GPU render the complex 3D graphics that modern games require.

Difference between CPU and GPU

The GPU can do only a subset of what the CPU can, but it does that subset at incredible speed. It uses its thousands of cores to perform time-critical calculations on millions of pixels at once, rendering complex 3D graphics in the process. To achieve such speed, however, the GPU must perform repetitive operations.

Take the Nvidia GTX 1080, for example. This video card has 2560 shader cores, so it can execute 2560 operations in a single clock cycle. If you want to make a picture 1% brighter, the GPU handles it without much difficulty. A quad-core Intel Core i5 CPU, by contrast, can execute only 4 instructions per clock cycle.
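The "make the picture 1% brighter" case is a textbook data-parallel job: the same trivial arithmetic is applied to every pixel, and no pixel depends on any other, so the work splits cleanly across any number of cores. A minimal sketch in plain Python (the 4-pixel grayscale "image" is a made-up example):

```python
def brighten(pixels, factor=1.01):
    # Each pixel is scaled independently of all the others - exactly the
    # property that lets a GPU hand one pixel to each shader core and
    # process thousands of them in the same clock cycle.
    return [min(255, round(p * factor)) for p in pixels]

image = [100, 200, 253, 0]   # grayscale values in the 0-255 range
print(brighten(image))       # [101, 202, 255, 0]
```

On a CPU this list comprehension runs sequentially; on a GPU the same per-pixel function would be executed by thousands of cores at once.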

However, CPUs are more flexible than GPUs. Central processors have a larger instruction set, so they can perform a wider range of functions. They also run at higher maximum clock speeds and can control the input and output of other computer components. For example, the CPU can manage virtual memory, which is required to run a modern operating system. A GPU simply cannot do that.
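The raw-throughput gap between the two designs can be estimated with the standard peak-performance formula: peak FLOPS = cores × clock × FLOPs per core per cycle. A sketch in Python; the GTX 1080 figures (2560 cores, 1607 MHz base clock, one fused multiply-add, i.e. 2 FLOPs, per core per cycle) are published specifications, while the quad-core CPU figures (3.5 GHz, one 256-bit FMA unit giving 16 single-precision FLOPs per core per cycle) are an illustrative assumption:

```python
def peak_gflops(cores: int, clock_mhz: float, flops_per_core_per_cycle: int) -> float:
    # cores * clock (cycles/s) * FLOPs per core per cycle, scaled to GFLOPS
    return cores * clock_mhz * flops_per_core_per_cycle / 1000.0

gpu = peak_gflops(2560, 1607, 2)   # GTX 1080 at base clock: ~8228 GFLOPS
cpu = peak_gflops(4, 3500, 16)     # hypothetical quad-core CPU: 224 GFLOPS
print(round(gpu), round(cpu), round(gpu / cpu))
```

Even though each GPU core is far slower and simpler than a CPU core, the sheer core count gives the GPU a peak throughput dozens of times higher for this kind of arithmetic.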

GPU computing

Even though GPUs are designed for rendering, they are capable of more. Graphics processing is just one kind of repetitive parallel computation. Other tasks, such as Bitcoin mining and password cracking, rely on the same kind of massive data sets and simple mathematical calculations. That is why some users employ video cards for non-graphics work. This is called GPU computing (also known as GPGPU).
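Bitcoin mining illustrates the pattern well: the same cheap operation (a hash) is evaluated over a huge range of inputs, and every evaluation is independent of the rest. A toy proof-of-work sketch in plain Python (the function name, input data, and two-zero difficulty are illustrative; real miners evaluate this kind of loop across thousands of GPU cores in parallel):

```python
import hashlib

def find_nonce(data: bytes, difficulty: str = "00") -> int:
    # Try nonces one by one until the SHA-256 digest starts with the
    # target prefix. Each trial is independent, so a GPU could test
    # thousands of candidate nonces in the same clock cycle.
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(difficulty):
            return nonce
        nonce += 1

nonce = find_nonce(b"hello")
print(nonce)  # first nonce whose digest starts with "00"
```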

Conclusions

In this article, we compared the CPU and the GPU. I hope it is now clear to everyone that the two have similar goals but are optimized for different kinds of calculations. Share your opinion in the comments, and I will try to reply.
