DF Direct: What's really happening with Intel Arc graphics?
Delays, drivers, XeSS, ray tracing - what to expect from the new hardware.
This week's DF Direct is somewhat different from the usual show. A while back, Intel approached us, asking to be featured on a future episode to talk about its Arc graphics line. At the same time, Intel Fellow Tom Petersen, having seen the extensive image quality comparisons in our FSR 2.0 God of War video, was keen to see the firm's own XeSS machine learning-based upscaling technology put through its paces in a similarly rigorous fashion. That work is still ongoing and due later this week, but today I'm pleased to share the DF Direct Intel special, filmed in Berlin last week - not least because it sees Petersen addressing all of the concerns we've had about Arc over the last few months.
I'd recommend watching the video all the way through because we've got a lot of ground to cover, kicking off with the delays behind Arc. We spoke to Petersen back in August last year, expecting Arc to arrive in Q1 this year - the perfect time for a new force to enter the graphics market, adding much-needed volume during the crypto era... but it never happened, leading some to pronounce the death of Arc in its entirety. That's not the case - the cards are coming soon - but not helping matters was a very strange initial release for the Arc A380, the first Intel discrete GPU from the Alchemist line-up. Why release the weakest card first? Why launch in China, of all places? Why not sample the press with the product at all? What's the reason behind all of the delays? The answers are illuminating - and perhaps with the benefit of hindsight, things might have been different.
That said, as Tom Petersen mentions, getting some exposure for the card out there has resulted in a wealth of valuable data and telemetry - all of which has fed back into the entire Arc line, benefitting the product that actually launches more widely soon. Some of that feedback, however, shows clear challenges for the Intel team, and two distinct issues have emerged. The bigger problem concerns support for older graphics APIs. Intel considers its DX12 and Vulkan drivers to be in good shape - at their best, they show the strengths of the hardware, beating the competition. DX11 titles, however, can be problematic: some games work fine, while others have profound problems, simply because Nvidia and AMD have had years to tune their drivers, and developers used those vendors' hardware to create the games in the first place. For a newcomer, that amount of legacy baggage puts Intel at a profound disadvantage, and getting those older titles into shape on Arc is a long-term task. Newer APIs, though? Intel seems bullish.
Then there's Resizable BAR - or ReBAR, for short. At a very basic level, it's a feature common to modern Ryzen and Intel CPUs and motherboards that lets the processor address the GPU's entire video memory, rather than the traditional 256MB window, allowing for wider, faster transfer of data from system memory to the GPU. Arc's memory controller thrives with ReBAR enabled but is severely disadvantaged if it's turned off - or if your system doesn't support it at all. Intel is upfront about this in our interview, strongly suggesting that Arc is not for you if your system lacks ReBAR. But why develop a memory controller that absolutely requires it in the first place?
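If you're wondering whether your own system qualifies, the definitive switch lives in your motherboard's BIOS - but there's a common developer-side heuristic too: under Vulkan, a discrete GPU running with ReBAR active typically exposes a device-local, host-visible memory heap larger than the classic 256MB BAR window. Here's a minimal sketch of that check - our own illustration, not Intel's code:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Minimal Vulkan instance - just enough to query physical devices.
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_0;
    VkInstanceCreateInfo ci{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ci.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    if (count) vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(gpu, &mem);

        // A DEVICE_LOCAL + HOST_VISIBLE memory type backed by a heap bigger
        // than the classic 256MB BAR window usually means ReBAR is active.
        // (Integrated GPUs match trivially, since their memory is shared.)
        bool rebar = false;
        const VkFlags wanted = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT |
                               VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
        for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
            const VkMemoryType& t = mem.memoryTypes[i];
            if ((t.propertyFlags & wanted) == wanted &&
                mem.memoryHeaps[t.heapIndex].size > 256ull << 20)
                rebar = true;
        }
        std::printf("%s: ReBAR-style heap %s\n", props.deviceName,
                    rebar ? "present" : "absent");
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```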
While there are problems, the arrival of a new architecture does present opportunities. We spend some time talking about Intel's ray tracing hardware, which its benchmarks suggest is actually more performant than Nvidia's Ampere architecture - impressive, if the game experience actually taps into that hardware. In addition, we spend a lot of time discussing XeSS - Intel's AI upscaling technology - which, again, Intel considers to be best-in-class. This would be quite the feat bearing in mind how much time, money and effort Nvidia has spent on DLSS, but Intel is prepared to put the technology under scrutiny, allowing us free rein with an Arc A770 paired with Shadow of the Tomb Raider, an excellent test case. We'll be reporting on that soon.
There's so much in this discussion to get through, so please check it out. For a while now, we've been convinced that the logical way forward for these AI scaler techniques is to upscale not just in the spatial sense, but in the temporal dimension too - or to put it more plainly: why not have AI interpolate frames to boost performance? Think of it as a far more complex version of the 'time warp' technology that's been so impactful in the VR space. What's interesting here is that, far from ruling it out, Petersen is intrigued by the concept and considers it viable. Think about that for a second - just interpolating every other frame would double your frame-rate. Petersen believes that more than one frame can be interpolated too, meaning that the potential frame-rate multipliers here are absolutely game-changing.
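To make the idea concrete, here's a deliberately naive sketch - our own illustration, not anything Intel has shown us. It stands in for the AI with a plain per-pixel cross-fade; a real ML interpolator would use motion vectors and a trained network to synthesise the in-between image rather than blend like this:

```cpp
#include <cstdint>
#include <vector>

// One frame as a flat RGBA8 pixel buffer.
using Frame = std::vector<uint8_t>;

// Toy stand-in for an AI interpolator: a per-pixel linear blend at
// parameter t (0..1) between two rendered frames. A real ML approach
// would reconstruct motion rather than cross-fade.
Frame interpolate(const Frame& a, const Frame& b, float t) {
    Frame out(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        out[i] = static_cast<uint8_t>(a[i] + t * (b[i] - a[i]));
    return out;
}

// Inserting one generated frame (t = 0.5) between every rendered pair
// turns a 30fps render cadence into a 60fps presentation cadence;
// generating three (t = 0.25, 0.5, 0.75) would turn 30fps into 120fps.
```

The arithmetic in that final comment is the whole appeal: every synthesised frame is one the GPU never had to render.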
Other great stuff in this interview? We take a quick look at Smooth Sync - currently an Intel-exclusive technique that redefines the look of gaming when playing with v-sync disabled on a non-VRR display. It doesn't eliminate the 'wobbling' effect, but it does blend the two (or more) frames on-screen at the tear line, making the harsh tearing more difficult for the human eye to detect. This is pretty cool stuff, especially as it shows signs of innovation and new ideas from the next player in the graphics space.
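Intel hasn't published how Smooth Sync works under the hood, so treat this as a conceptual toy of our own making: instead of presenting the tear as a hard edge between the stale frame above and the fresh frame below, blend a band of rows around the tear line so the transition becomes a gradient:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Conceptual toy, not Intel's implementation: soften a tear in an RGBA8
// framebuffer by blending a `band`-row gradient around tearRow, with the
// stale frame above the tear and the fresh frame below it.
void smooth_tear(std::vector<uint8_t>& out,
                 const std::vector<uint8_t>& stale,
                 const std::vector<uint8_t>& fresh,
                 int width, int height, int tearRow, int band) {
    const int stride = width * 4;  // bytes per row of RGBA8 pixels
    for (int y = 0; y < height; ++y) {
        // Blend weight: 0 above the band, 1 below it, linear ramp across it.
        float t = std::clamp((y - (tearRow - band / 2)) / float(band),
                             0.0f, 1.0f);
        for (int x = 0; x < stride; ++x) {
            const size_t i = size_t(y) * stride + x;
            out[i] = static_cast<uint8_t>(stale[i] + t * (fresh[i] - stale[i]));
        }
    }
}
```

Far more is covered in this 55-minute discussion, but for now at least, all eyes move on to our XeSS review, coming later this week. It's deep, it's uncompromising and it's the toughest workout we can muster for the new wave of upscaling techniques - we're really excited to share the results with you.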