Why is Intel sparing no effort on graphics?

Yes, most of us know Intel only for its processors. But if I told you that Intel is also one of the top GPU manufacturers in the world, you might think I was dreaming. Aren't AMD and NVIDIA the GPU makers? When did Intel get involved? In fact, the GPUs in question are not discrete graphics cards in the usual sense: "GPU" is a broad category, and discrete cards are only one part of it.

It is true that Intel has no notable track record in discrete graphics. The i740, developed with Real3D in 1998, was a flash in the pan with no follow-up, and a decade later the ambitious Larrabee GPU was cancelled after failing to meet its technical targets. The reason Intel nevertheless ranks among the top GPU makers today is the integrated graphics built into its Core processors.

"Integrated graphics" has long been looked down upon, and once upon a time that reputation was deserved: early integrated GPUs struggled even with 1080p video, let alone 3D rendering or games. For a long time, users' skepticism toward thin-and-light notebooks stemmed partly from weak processor performance, but to a greater degree from poor graphics performance, which made the thin-and-light notebook synonymous with "compromise machine" in many users' eyes.

The turning point came around 2010, when Intel was vigorously promoting the idea of true convergence: eliminating the northbridge chip, which had traditionally carried the GPU, and integrating the CPU and GPU into a single processor package.
This design removed the northbridge outright, a chip that ran hot, occupied board area, and required its own communication channels, making notebook motherboards smaller, easing cooling pressure, and improving performance to a degree.

Integrated graphics came into its own with the well-known Sandy Bridge platform, the second-generation Core processors. Intel's Ultrabook initiative officially launched the same year, and many of us went on to witness its rise. As the Ultrabook gradually succeeded and spread, integrated graphics improved steadily over the following decade, and Intel introduced Iris and Iris Plus graphics with performance comparable to entry-level discrete cards. That graphics performance thoroughly reversed users' perception of "integrated graphics."

Now do you see why Intel is one of the top GPU manufacturers? PCs of all kinds built on Intel Core platforms hold a very large share of the market, and every one of them ships with an Intel graphics chip. Although the performance falls short of powerful discrete cards, Intel has a clear advantage in coverage and market share.

By this point you can also see the catch: Intel ships an enormous number of GPUs, but their performance is not good enough. Making up for this shortfall is exactly what Intel has been working toward, as the Iris graphics line illustrates.

Some readers may then ask: what is the point of improving integrated graphics? Gamers simply buy a discrete card, and non-gamers do not care about graphics performance anyway. The answer lies with notebooks, where thin and light is the general direction of development.
But while getting thinner and lighter, notebooks also need performance that users can trust. Imagine a 15- or 17-inch laptop weighing only 1 kg and under 1.5 cm thick that runs big-budget AAA games smoothly. For many users, this is the ultimate form of the notebook: portable and easy to carry, with a large screen and comprehensive capability. Why not?

Today, a thin-and-light notebook with Iris Plus graphics can run at 4K resolution above 40 fps at the highest quality settings. That was hard to imagine a few years ago, when discrete graphics cards of the day could not even drive a PC desktop smoothly at 4K, let alone run games fluidly. Yet however far integrated graphics technology advances, at present it can only compete with entry-level discrete cards, and that is far from enough for Intel.

As a result, Intel recruited Raja Koduri, and under his leadership developed the much-anticipated Xe GPU architecture. The great strength of Xe is that a single architecture adapts to many applications, covering everything from top-end HPC down to low-power notebooks, divided into three tiers for differently positioned products: Xe HPC, Xe HP, and Xe LP.

In the Xe HPC architecture, the EU units connect to HBM high-bandwidth memory over the XeMF bus and integrate a large-capacity coherent cache, "Rambo," accessible by both CPU and GPU. Linking multiple GPUs together provides greater memory bandwidth and FP64 floating-point throughput, with support for memory/cache ECC error correction and strong RAS. Like the Xe HP architecture, it is an ultra-high-performance GPU architecture aimed at data centers, servers, and workstations. It also supports double-precision floating-point units.
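The difference between single and double precision is easy to demonstrate in a few lines of Python (a standalone illustration, not Intel code): near 100,000,000 the gap between adjacent 32-bit floats is 8, so small increments are simply rounded away, while 64-bit doubles still resolve them.

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Near 1e8, adjacent 32-bit floats are 8 apart, so adding 1 is lost:
assert to_f32(to_f32(1e8) + 1.0) == to_f32(1e8)

# 64-bit doubles have spacing of about 1.5e-8 there, so the sum stays exact:
assert 1e8 + 1.0 == 100000001.0
```

This rounding error compounds across the billions of accumulations in a scientific simulation, which is why HPC workloads insist on FP64 hardware even though games get by on FP32.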
Although ordinary game graphics place little demand on double-precision computation, demand in the traditional high-performance computing market is quite strong, so double-precision units remain an indispensable element of a general-purpose high-performance GPU architecture.

The Xe LP architecture emphasizes low power consumption and high efficiency, and is aimed mainly at consumer devices such as gaming graphics cards and laptops. Intel has already announced the first Xe-architecture graphics card, code-named DG1. Its performance lands between a GTX 950 and a GTX 1050 at under 75 W of power consumption, making it very well suited as the integrated graphics of next-generation notebooks.

For ordinary users, the earliest we can expect to get our hands on an Xe-architecture GPU should be September or October of this year, when, by custom, Intel releases its new low-power Core platform: the 11th-generation Tiger Lake processors. Intel has already shown a Tiger Lake Xe graphics demonstration running a game once considered a GPU killer at over 30 fps at maximum image quality. The performance improvement of Xe integrated graphics can fairly be called a leap.

Beyond bringing more powerful graphics to thin-and-light notebooks and making them more broadly capable, Intel will also enter the discrete graphics market through the Xe architecture, challenging the existing landscape led by AMD and NVIDIA. This will have a profound impact on the development of the PC industry over the next decade, and will open a new path of growth for Intel.