Intel Arc Graphics Cards Review

Why is Intel's new architecture interesting, what problems does the company have to overcome, and how will the arrival of a third major player affect the graphics card market?

On August 16, 2021, Intel announced its first line of Intel Arc discrete graphics cards, based on the Xe Alchemist architecture. After a very long hiatus (the company's previous 3D accelerator, the Intel i740, was released back in 1998), Intel plans to return to this most important market for gamers and hit the ground running: the new products will support all DirectX 12 Ultimate features, including hardware-accelerated ray tracing, and will even get their own XeSS image-reconstruction algorithm to boost performance. Alas, there are no test samples of the cards yet, but there is enough information on the web for a first analysis of the new product.

Not the first try

Intel's first discrete graphics card: the i740 from 1998

Intel has already tried to enter the gaming graphics market. In 1998 it released the i740, which supported DirectX 5.0 and OpenGL 1.1, ran at 220 MHz, and came with 2 to 8 MB of memory. By that time, cards with much larger memory capacities (up to 32 MB) had already begun to appear, but Intel bet on the then-new AGP bus, through which the card was supposed to use system memory as its own. The i740 turned out to be a very problematic product: the card's performance suffered from the lack of onboard memory, the CPU's performance suffered from having to share RAM and its bandwidth, the drivers were buggy, and the image on screen suffered from a poor-quality digital-to-analog converter. It is worth noting, though, that the design did not die entirely: the i740 lived on as the integrated graphics of Intel's i810 chipset.

New architecture

The modern history of Intel graphics began with the Intel Iris Xe DG1, which is already on the market. It is a solution for businesses looking for cheap graphics cards for simple workstations. You should not expect impressive gaming results from the DG1, but the product is important for Intel as a way to test the market and begin proving out the architecture.

Now Intel has announced four lines of discrete graphics. The professional segment includes Intel Xe HPC (High Performance Computing), focused on compute tasks, and Xe HP, which scales up for supercomputers and servers. For ordinary users, the company will offer Xe LP (Low Power), with low power consumption for computers that do not need powerful graphics, and the line most interesting to gamers, Xe HPG (High Performance Gaming).

Intel's presentation (see the slide above) seems to hint that the Xe HPG gaming line has absorbed the best solutions from the other three products.

Interestingly, in the announcement Intel is already talking about plans for four generations of the Arc architecture: Alchemist (on which the first generation of Arc HPG cards will be based), Battlemage, Celestial and Druid. The codenames clearly refer to D&D character classes, with one exception: Celestial. In D&D 5e, celestials are a creature type covering all beings from the upper planes, such as angels.

It is hard to say how much the "generations" of Intel Arc will differ from one another: will these be really big steps forward, like the jump from GeForce RTX 20x0 to RTX 30x0, or will Intel at first prefer to update its cards more often, adding features that simply did not make it into the first version? So far it is known that Alchemist will support all the modern DirectX 12 Ultimate features: mesh shaders (in spirit similar to UE5's Nanite, allowing lower-detail geometry to be derived from high-quality models on the fly), VRS (Variable Rate Shading, which renders parts of the screen the player cannot scrutinize at a lower resolution, improving performance; see the sketch below), hardware-accelerated ray tracing, image reconstruction with the XeSS neural network, and video upscaling with AI Video Enhance neural networks.
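To give a sense of what VRS means for developers, here is a minimal sketch using the public Direct3D 12 API; this is not Intel-specific code, and it assumes a device and command list have already been created elsewhere:

```cpp
#include <d3d12.h>

// Minimal sketch: enable coarse per-draw Variable Rate Shading in D3D12.
// Error handling is omitted; `device` and `cmdList` come from elsewhere.
bool EnableCoarseShading(ID3D12Device* device, ID3D12GraphicsCommandList5* cmdList)
{
    // Check that the GPU supports VRS at all (Tier 1 = per-draw rates).
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &options6, sizeof(options6))) ||
        options6.VariableShadingRateTier == D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED)
        return false;

    // Shade one pixel per 2x2 block for subsequent draws: roughly 4x fewer
    // pixel-shader invocations in exchange for a slightly softer image.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    return true;
}
```

Tier 2 hardware additionally accepts a screen-space image that sets the shading rate per region, which is how games lower detail only where the player is unlikely to look.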

Since NVIDIA came up a little earlier: according to rumors, that company is already working on the Hopper architecture, which, instead of a single large monolithic die, will use chiplets, i.e. packages assembled from many smaller modules working in parallel on different tasks. Intel Xe HP is designed with exactly this kind of scalability in mind from the outset. But it is important to remember that games, unlike compute workloads, are extremely sensitive to latency, which will be very hard to avoid with chiplets. In addition, almost all modern games use history from previous frames for anti-aliasing, image reconstruction, global illumination or noise reduction, and the classic multi-GPU technique of rendering alternate frames on different chips interferes with all of these techniques. Most likely, then, chiplet-based gaming GPUs are still a few years away.

Driver

Let’s be honest with each other: In the early years, the driver will be the weakest link in Intel Xe graphics cards.

NVIDIA and AMD have been polishing their software for decades, continually improving performance and stability. Equally important, both companies have teams with deep experience in driver optimization, and the drivers themselves contain huge libraries of per-game optimizations for old titles that took years to build. Intel does not yet have such a base, which Lisa Pearce, head of software for Intel Xe, openly admitted in an interview with Digital Foundry. Right now her team is optimizing drivers for the modern graphics APIs, DirectX 11, DirectX 12 and Vulkan, leaving older DX9/DX10 and OpenGL games running on generic, unoptimized paths. Not ideal, but with a limited number of engineers and limited time, this is the smartest possible approach. After all, it is new games that squeeze everything out of the hardware, while the difference between 600 and 500 FPS in CS:GO will be felt only by a select few esports players. Pearce did promise, however, that her team will get to the most popular older games.

Another important issue is the driver's graphical interface. In 2019, Intel released its new Graphics Command Center. It is beautiful and modern, but it lacks many features that have long existed in NVIDIA's and AMD's drivers, such as choosing the type of vertical sync (double- or triple-buffered, or, very importantly for many console ports and weak hardware, locking to half the monitor's refresh rate), controlling VRS, and forcing anti-aliasing levels, which matters especially for older games. AMD's and NVIDIA's drivers let you record or stream gameplay with built-in tools, and the GeForce Experience utility can apply its own filters and ReShade filters to a game without third-party software, as well as take screenshots in HDR. The NVIDIA control panel, although it still uses a Windows XP-era interface, offers far more settings than Intel's.

Intel is certainly working on many of these features, but while some of them may arrive with the launch of Xe HPG, many will surely remain in development for years to come. Of course, performance matters more than extra switches most players will never touch. Intel's software engineers have taken the development of the new discrete-graphics drivers very seriously, spending several years designing a scalable platform intended to serve Intel across many years and architectures. Lisa Pearce has not yet said what changes are coming to gameplay recording and streaming, but I hope Intel will prepare a solution comparable to the latest generations of NVIDIA's video encoders, which easily beat the competition in image quality, second only to software encoding on the CPU.

Neural networks and rays

It is no accident that Intel spent many years preparing to enter the discrete graphics market: the wait let it study the trends and see whose approach would win, NVIDIA's proprietary software and specialized silicon or AMD's open source and general-purpose hardware, and then pick the better option. Intel Xe Alchemist takes the best from both competitors: ray tracing, open software and clever machine-learning tricks.

Although the Intel Xe Alchemist roadmap emphasizes rasterization performance, i.e. the classic graphics of games, the new video cards will still have hardware ray-tracing acceleration units. And, as with NVIDIA, real-time tracing requires two closely related companion technologies: noise reduction and image reconstruction.

Noise reduction is necessary because ray tracing always produces a noisy (grainy) image. The grain can be reduced by casting more rays per pixel (for films and Pixar-level 3D animation this is usually more than 8 rays per pixel), but even the most powerful and fastest video cards cannot cope with such a load in real time, so games currently use 1 ray per pixel or fewer and either live with the noise or denoise it. NVIDIA has its own denoising algorithm that uses neural networks and runs on tensor cores. Intel has not announced anything similar yet, but such work is very likely underway: in every interview and presentation, the Intel Xe team loves to talk about how important neural networks will become for games.
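Why more rays help, and why they are so expensive, follows from basic Monte Carlo statistics: the visible noise falls only as one over the square root of the sample count, so going from 1 to 8 rays per pixel cuts the grain by less than a factor of three. A toy sketch illustrating this (not anyone's production code; "tracing a ray" is replaced here by drawing a random number):

```cpp
#include <cstdio>
#include <cmath>
#include <random>

// Toy illustration of Monte Carlo noise: estimate a pixel's brightness by
// averaging N random "ray" samples and measure how the error shrinks with N.
int main()
{
    std::mt19937 rng(42);
    // Stand-in for "trace one ray": a random sample whose true mean is 0.5.
    std::uniform_real_distribution<double> traceRay(0.0, 1.0);

    const int rayCounts[] = {1, 4, 8, 64, 1024};
    for (int raysPerPixel : rayCounts) {
        const int pixels = 50000;          // average the error over many "pixels"
        double sumSqError = 0.0;
        for (int p = 0; p < pixels; ++p) {
            double sum = 0.0;
            for (int r = 0; r < raysPerPixel; ++r) sum += traceRay(rng);
            const double estimate = sum / raysPerPixel;
            sumSqError += (estimate - 0.5) * (estimate - 0.5);
        }
        // RMS error (the visible grain) falls roughly as 1/sqrt(N).
        std::printf("%5d rays/pixel -> RMS noise %.4f\n",
                    raysPerPixel, std::sqrt(sumSqError / pixels));
    }
}
```

This square-root wall is exactly why real-time renderers stop at about one ray per pixel and spend their budget on denoising instead.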

4K version of the XeSS demonstration from Digital Foundry

But Intel is happy to talk about XeSS (Xe Super Sampling), an image-reconstruction algorithm built on neural networks. In short, it is NVIDIA's DLSS, only from Intel. The algorithm renders the image at a lower resolution (for example, 1080p for reconstruction to 4K), collects history from previous frames along with motion vectors, and, based on this information, fills in the gaps in the pixel grid with a neural network trained on games. The demo looks very convincing, but a demo is easy to tune, and its slow, smooth, uniform camera movements are almost ideal for reconstruction. And, remembering how DLSS developed, we can say that for such a neural network practice is what matters most: the more games XeSS runs in, the better it will get.

XeSS works much like DLSS: the engine renders frames at low resolution with jitter (a sub-pixel shift from frame to frame), and the neural network uses these frames, the accumulated history and the motion vectors to produce an image at the target resolution, onto which post-processing effects are then applied.
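Neither Intel nor NVIDIA publishes the internals of these upscalers, but the general data flow they describe is well understood. Below is a heavily simplified sketch of one reconstruction step, with the neural network replaced by a fixed blend weight; all names are illustrative, and it assumes a 2x upscale with bounds checks omitted:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float r, g, b; };

// Heavily simplified temporal upscaling, one output pixel at a time. In a
// real XeSS/DLSS-style upscaler the final blend is produced by a trained
// network running on matrix/tensor units, not by a fixed formula.
Vec3 ReconstructPixel(const std::vector<Vec3>& lowResFrame,  // jittered low-res color
                      const std::vector<Vec3>& history,      // previous full-res output
                      float motionX, float motionY,          // motion vector (pixels)
                      int x, int y, int lowW, int highW)
{
    // 1. Reproject: find where this pixel was in the previous frame.
    const int prevX = x - static_cast<int>(motionX);
    const int prevY = y - static_cast<int>(motionY);
    const Vec3 prev = history[static_cast<size_t>(prevY) * highW + prevX];

    // 2. Sample the current jittered low-res frame (nearest neighbor here;
    //    real implementations filter several samples).
    const Vec3 cur = lowResFrame[static_cast<size_t>(y / 2) * lowW + (x / 2)];

    // 3. Blend the new information into the accumulated history. Choosing
    //    this per-pixel weight is exactly the network's job in practice.
    const float alpha = 0.1f;
    return { prev.r + alpha * (cur.r - prev.r),
             prev.g + alpha * (cur.g - prev.g),
             prev.b + alpha * (cur.b - prev.b) };
}
```

In a real XeSS or DLSS pass, the per-pixel decision of how much to trust history versus the new jittered sample is what the trained network computes, which is why edges stay sharp without the ghosting a fixed weight would cause.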

More interesting than XeSS itself are Intel's promises that the technology will be open and, at least in a simplified form, will work on any modern video card. In theory, this would let developers integrate a single universal technology into a game without worrying that it might not work on some hardware. In practice, the statement leaves more questions than answers. What does "open" mean? If it means open source, where any developer can read the code and AMD and NVIDIA can optimize their video cards for XeSS, then everything is great. If it merely means "there will be a publicly available SDK (Software Development Kit, the toolkit for integrating the technology into games)" and nothing more, then NVIDIA already does that with the DLSS SDK… Third-party facts also hint at a "half-closed" path: XeSS does not use the general-purpose DirectML machine-learning API, but a language developed by Intel. In theory, the XeSS SDK could be a generic interface that talks to the Intel, NVIDIA or AMD driver underneath (sketched below), but this is unlikely.
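To make that last scenario concrete, here is a purely hypothetical sketch of what such a vendor-neutral interface could look like; none of these types or functions exist in the real XeSS SDK, they only illustrate the idea of one API backed by per-vendor driver implementations:

```cpp
// Purely hypothetical sketch of a vendor-neutral upscaling SDK; these names
// are invented for illustration and are not part of any real Intel API.
struct UpscaleInputs {
    const void* colorLowRes;     // jittered low-resolution frame
    const void* motionVectors;   // per-pixel motion vectors
    const void* depth;           // depth buffer
    void*       outputHighRes;   // reconstructed full-resolution frame
};

class IUpscaler {                // the one interface a game would code against...
public:
    virtual ~IUpscaler() = default;
    virtual void Evaluate(const UpscaleInputs& inputs) = 0;
};

// ...with per-vendor backends behind it: Intel's driver could run the full
// XMX-accelerated network, while NVIDIA and AMD drivers plug in their own paths.
IUpscaler* CreateUpscalerForCurrentGPU();   // resolved by the installed driver
```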

Help for gamers or another problem?

Many bloggers and gamers are already anticipating that Intel Arc will finally end the shortage of graphics cards on the market. A logical assumption, considering that Intel is one of the rare companies that both designs and manufactures its own chips. However, Intel Arc chips will be manufactured at the Taiwanese fab TSMC on a 6 nm process and will almost certainly share production capacity with NVIDIA and AMD. The former, in addition to video cards, also supplies the chip for the Nintendo Switch; the latter has its own processors as well as the PS5 and Xbox Series S/X consoles. So tighten your belts: the shortage continues.

To make matters worse, by the time Intel Arc Alchemist is released, prices for GPUs, CPUs, memory controllers and silicon chips in general will rise sharply. According to the Wall Street Journal, TSMC plans to raise prices by 10% for chips on 7 nm and newer processes and by 20% for older ones. AMD manufactures all of its products at TSMC, including the chips for the Xbox Series and PS5 consoles; NVIDIA was also a frequent customer of the Taiwanese manufacturer, but chose to manufacture most of the GeForce 30 series at Korea's Samsung Semiconductor. Now Intel, too, is tied to TSMC's prices and capacity, at least for its graphics cards.

On the other hand, Intel will at first have to work hard to convince users to buy its graphics cards. The company is unlikely to surpass its competitors in raw speed right away; in the beginning, simply reaching the level of their mainstream products while gradually improving the drivers would be a good result. That leaves one powerful tool: price dumping. Intel is far from the poorest company, so it can probably keep prices below the competition for a long time in order to win market share. And that is already a serious argument.

By launching its own line of discrete graphics cards, Intel is playing an away game. The company has advantages: a well-known name, long-standing relationships with partners and system integrators, supply chains and huge capital. But the two established players in this market, NVIDIA and AMD, have all of this too (though AMD, of course, has much less capital than the others). For gamers and enthusiasts to start respecting the Arc brand, Intel needs to at least catch up with, and preferably overtake, its competitors in price-to-performance.

Graphics at Intel is led by senior vice president Raja Koduri, formerly chief architect of the Radeon Technologies Group. He is a legendary engineer with decades of experience at S3, ATI, AMD and now Intel, developing new technologies and architectures. No one doubts that he is capable of making graphics cards that can compete with the best products of the competition. However, even the best technology can fall prey to marketing, the market itself, poor management, or executives' lack of faith in the product line. So far, Intel Arc graphics cards look very promising. But will the company be able to create real competition in the market and give players something new? Time will tell!
