The Criterion Tech Interview: Part Two
The conclusion of the epic Digital Foundry vs Criterion technology interview.
In part one of the Criterion technology interview, Digital Foundry talked in depth with two of the guiding minds behind the company's approach to the engine side of game-making, discussing the underlying RenderWare technology, the team's proven approach to cross-platform development, the move to an open world in Burnout, and the evolution of the studio's technology through the use of DLC, with all the issues that entails.
In the concluding chapter of the interview, the move to the current generation of consoles and the additional possibilities it opened up come under the spotlight. We also reminisce for a while on just how cool Black was, before tackling the thorny issue that everyone loves to argue about for no readily apparent reason - the "maxing out" of console hardware.
The focus was on the deformation of the cars during the crashes, but the basic driving experience is more realistic too. We can just do more maths, so we can think about spending more time on things like the friction coefficients of tyres, that sort of thing. The cars do drive in a more physical way than they did in previous Burnouts... it's always been about what we wish driving was, rather than what it actually is. We've put a lot of effort into the drift... physically it's entirely wrong, but actually, if you keep the accelerator floored it'll pull you into the bend. We spent a lot of time introducing "magic" into the physics.
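To make that idea concrete, here is a minimal C++ sketch of how an arcade "assist" layer like the one described might sit on top of a conventional tyre model. Every name and constant below is hypothetical, chosen for illustration - this is not Criterion's code.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 scale(const Vec3& v, float s) { return { v.x * s, v.y * s, v.z * s }; }
Vec3 add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

struct CarState {
    Vec3  lateralAxis;   // unit vector pointing towards the inside of the turn
    float throttle;      // 0..1
    float driftAngle;    // radians between heading and velocity
    Vec3  force;         // forces accumulated for this physics step
};

// Physically "wrong" but fun: with the accelerator floored and the car
// sliding, pull it into the bend instead of letting it wash wide.
void applyDriftAssist(CarState& car) {
    const float kAssistStrength = 9000.0f;   // tuning constant (hypothetical)
    const float kMinDriftAngle  = 0.15f;     // only assist genuine drifts

    if (car.throttle > 0.95f && std::fabs(car.driftAngle) > kMinDriftAngle) {
        // Scale the assist with how sideways the car is, so the "magic"
        // blends smoothly with the underlying tyre friction model.
        float blend = std::fabs(std::sin(car.driftAngle));
        car.force = add(car.force, scale(car.lateralAxis, kAssistStrength * blend));
    }
}
```

The point of keeping the assist as a separate additive force is exactly what the interview describes: the "real" physics underneath stays intact, and the fun is layered on top.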
And making sure that it doesn't break the real physics - the collisions that have to go on in the background to stop you going through things. That is exponentially more difficult in an open world.
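One standard way to keep those background collision queries affordable across an entire city, rather than a single hand-tuned track, is a broad-phase spatial grid: a fast car is only tested against geometry in the cells its path sweeps through. The sketch below is a generic illustration of that technique, not Criterion's actual scheme.

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct AABB { float minX, minZ, maxX, maxZ; };  // 2D bounds, top-down view

class BroadPhaseGrid {
public:
    explicit BroadPhaseGrid(float cellSize) : cellSize_(cellSize) {}

    // Register a piece of static world geometry by id.
    void insert(int objectId, const AABB& box) {
        forEachCell(box, [&](std::uint64_t key) { cells_[key].push_back(objectId); });
    }

    // Collect candidates overlapping a car's swept bounds for this step;
    // exact (narrow-phase) tests then run only against this short list.
    std::vector<int> query(const AABB& sweptBounds) const {
        std::vector<int> result;
        forEachCell(sweptBounds, [&](std::uint64_t key) {
            auto it = cells_.find(key);
            if (it != cells_.end())
                result.insert(result.end(), it->second.begin(), it->second.end());
        });
        return result;
    }

private:
    template <typename Fn>
    void forEachCell(const AABB& box, Fn fn) const {
        int x0 = int(std::floor(box.minX / cellSize_)), x1 = int(std::floor(box.maxX / cellSize_));
        int z0 = int(std::floor(box.minZ / cellSize_)), z1 = int(std::floor(box.maxZ / cellSize_));
        for (int x = x0; x <= x1; ++x)
            for (int z = z0; z <= z1; ++z)   // pack the 2D cell coordinate into one key
                fn((std::uint64_t(std::uint32_t(x)) << 32) | std::uint32_t(z));
    }

    float cellSize_;
    std::unordered_map<std::uint64_t, std::vector<int>> cells_;
};
```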
When we went open world for Paradise, we thought we'd create a new physical model for driving a car, but then we spent a lot of time trying to engineer back in all the stuff we thought made Burnout fun rather than "real".
With high definition graphics, you need high definition physics, otherwise you're going to see a lot of glitches. You just expect more. So we did spend a lot of time on that. High def physics in an open world with a lot of vertical gameplay. That took a lot of time.
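"High definition physics" plausibly means a higher, fixed simulation rate - more substeps per rendered frame, so fast-moving cars don't tunnel through scenery or glitch visibly. A standard fixed-timestep loop is sketched below; the function names are placeholders, not Criterion's API.

```cpp
// Placeholder for the real simulation step: integrate bodies, resolve contacts.
void stepPhysics(float /*dt*/) { /* ... */ }

// Advance physics in small, fixed substeps regardless of the render frame
// rate, so the simulation stays stable at any speed.
void updateFrame(float frameDt) {
    static float accumulator = 0.0f;
    const float kPhysicsStep = 1.0f / 120.0f;   // e.g. 120Hz physics under a 60Hz renderer

    accumulator += frameDt;
    while (accumulator >= kPhysicsStep) {
        stepPhysics(kPhysicsStep);
        accumulator -= kPhysicsStep;
    }
}
```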
We put a lot of effort into Revenge to make sure it played well. Visually it was interesting to see what we could do in high def: what the costs were, what was possible. Architecturally we didn't really change much with Revenge on 360. We did learn a little bit – you can't not.
We learned a reasonable amount about what we couldn't do with the existing tech. It was definitely valuable in that it convinced us that we had to make significant improvements, and that led to us starting with an essentially blank piece of paper when it came to Paradise.
The reason we started with a blank piece of paper was because even though the last gen engine scaled up pretty well to Revenge on the 360, we wanted to do a lot more than that.
Let's just say you have to be very sensible! Very pragmatic. It isn't magic, although perhaps we'd like to say it is.
Very early on we made a lot of very right decisions in terms of the architecture, the software and the way we were going to approach things, and that has worked extremely well for us. Our aim was to produce an architecture that would work well on PS3, 360 and PC.
There wasn't, no, but the engine guys just couldn't help but notice that the PC was going equally parallel.
That's exactly true.
And that's what works for us. If you've got a PC engine from a couple of years ago, say - from before parallelisation was prevalent but when there were a few threads around - then that will likely lead to the kind of approach Alex was talking about, the kind that introduces a lot of latency. It's like saying: right, we'll have a thread that does a lot of AI, then another that does a lot of physics, and another that then renders it, then the GPU will have its go, then something will fall out the end. That kind of algorithmic parallelisation, where you've got different jobs on different threads, just doesn't work particularly well, especially on PS3. So instead, we say we'll do physics now and use as many threads as possible, then the same with AI, then rendering – that's the approach we took. And that will scale nicely to whatever Intel has up its sleeve, like the Larrabee chip.

It'll eventually scale up to the point where we can say we'll have half of our threads doing physics and half doing AI, if you can... it's kinda tricky, because if you've got something like an animation system that needs to be informed by the physics, which needs to be informed by the AI, there's an unbreakable problem: something has to go first. They can't all run simultaneously because each feeds the next, so you need to come up with a compromise that works for whatever you're doing.
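In code terms, the phase-based data parallelism described here - every worker thread attacking the same phase at once, with phases run in sequence so the physics-to-AI-to-rendering dependencies stay intact - might look something like this minimal C++ sketch. It is a generic illustration, not Criterion's actual job system.

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Run work(i) for i in [0, count) across all hardware threads, then join.
void parallelFor(std::size_t count, const std::function<void(std::size_t)>& work) {
    unsigned threadCount = std::max(1u, std::thread::hardware_concurrency());
    std::atomic<std::size_t> next{0};
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threadCount; ++t) {
        workers.emplace_back([&] {
            // Threads pull indices from a shared counter until the work runs out.
            for (std::size_t i = next++; i < count; i = next++)
                work(i);
        });
    }
    for (auto& w : workers) w.join();
}

struct Entity { /* position, velocity, AI state, ... */ };

void simulateFrame(std::vector<Entity>& entities) {
    // Phase 1: physics on every core - all threads do the same kind of job.
    parallelFor(entities.size(), [&](std::size_t i) { /* integrate entities[i] */ });

    // Phase 2: AI, which may read the physics results just produced.
    parallelFor(entities.size(), [&](std::size_t i) { /* think for entities[i] */ });

    // Phase 3: build render commands. Sequencing the phases is what resolves
    // the "something has to go first" dependency the interview mentions.
    parallelFor(entities.size(), [&](std::size_t i) { /* emit draw for entities[i] */ });
}
```

Because each phase saturates every available thread before the next begins, the same loop scales from a dual-core PC to a many-core chip without restructuring the code.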
And the same code.
Well, largely the same code.
The management code - the really high-level code that manages how you parallelise what and where - obviously can't be the same everywhere, as you have different numbers of processors and different architectures, including PC, where you might have a Core 2 Duo, an i7... but the point is that in the stuff the coders write, there is very little bespoke code; it's shared across all platforms, all processors. Obviously there's a little bit that is bespoke, like online. But the key is not to "go off on one" and do something suited to one particular platform that the others won't do very well.
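As a rough illustration of that split, the shared code can sit behind a thin per-platform management layer that answers questions like "how many worker threads do I get?". Everything below is hypothetical, including the PLATFORM_* build flags - they are not real SDK defines.

```cpp
#include <thread>

// --- shared engine code (identical on every platform) -----------------------
unsigned platformWorkerThreadCount();   // answered by the management layer below

void configureScheduler() {
    unsigned workers = platformWorkerThreadCount();
    // ... set up the phase-parallel job system with `workers` threads ...
    (void)workers;
}

// --- thin per-platform management layer --------------------------------------
#if defined(PLATFORM_X360)
unsigned platformWorkerThreadCount() { return 6; }  // 3 cores x 2 hardware threads
#elif defined(PLATFORM_PS3)
unsigned platformWorkerThreadCount() { return 2; }  // PPU threads; SPU jobs scheduled separately
#else   // PC: anything from a Core 2 Duo to an i7
unsigned platformWorkerThreadCount() {
    unsigned n = std::thread::hardware_concurrency();
    return n ? n : 2;                               // the call may return 0; fall back sensibly
}
#endif
```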