The Criterion Tech Interview: Part One
Meet the team behind Burnout's engine evolution.
Every Saturday, Digital Foundry takes over the Eurogamer homepage, offering up a mixture of technical retrospectives, performance analyses and new ways to showcase some of gaming's greatest classics. When the opportunity arose to meet up with the Criterion tech team, I jumped at the chance - the aim being to get the full story behind one of the most technologically advanced games of this generation: Burnout Paradise.
The full feature, complete with exclusive video of the brand new Big Surf Island DLC, was significant enough, but it covered just part of the exhaustive conversation I had with the Criterion tech team. I came away with more... much more.
So here, completely unabridged, is the entire discussion. Or rather, the first part of it - covering the evolution of the ultra-low latency Burnout engine, the Renderware connection, code-sharing within EA, plus the open world technology unique to Burnout Paradise.
I'm Richard Parr, technical director at Criterion Games. I've been at Criterion since August 1999; I joined just as we started looking at the PS2. Criterion were one of the few companies outside of Sony, and possibly the only one, to have PS2 hardware, which was one of the reasons I came here! I was lead programmer on Burnout 1 and did the same on Burnout 2, then moved on to looking at both Burnout 3 and Black. I became technical director around the time that EA bought us.
I'm Alex Fry, senior engineer at Criterion Games. I joined in August 1998, and was involved in a fairly small tools and tech team before moving to the engine side of things, working on Dreamcast, then about a year later working with Richard looking at the PS2. I worked as the lead... well, the only graphics engine coder on Burnout 1.
Yeah, all of the above. The great thing about being with Criterion is that we all sat in the same building. The core engineers working on the graphics side and I became good friends and talked a lot; Criterion Games became a very trusted party and worked really closely with those guys to make sure the right decisions were made. We were pretty instrumental in the evolution of the Renderware tech from version three, when we started working on PS2.
The previous version was PC and Dreamcast.
Renderware was originally a software renderer for PC...
Yes, with the advent of PS2, Renderware became a very nicely layered rendering technology that we helped evolve.
We were at the cutting edge and we were encouraging the guys to keep their stuff customisable, then we'd customise it and feed it back to them about how it worked, and how we were finding it. Burnout 1 used a lot of the vanilla Renderware code just out of the box, with some optimisations.
Yes, it was always 60 frames. It was lacking a little in things like mip-mapping, but...
One of the decisions I always regret!
It made the game far too flickery, but by the time we realised that was the major cause, it was just too late. One of the questions we were always asked at developers' conferences was "Can we have your version of Renderware?" and I'd always reply with glee that "it's the same one you've got". We didn't write our own version of it. It was just out of the box, but we worked with the Renderware team to make sure we used it properly.
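For context on why missing mip-maps flicker: mip-mapping stores pre-filtered, progressively smaller copies of each texture so the renderer can sample a level whose texels roughly match pixel size; without it, distant surfaces undersample the full-resolution texture and shimmer as the camera moves. A minimal sketch of the standard level-selection maths, not Renderware's actual code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Standard mip level selection: pick the pre-filtered texture level whose
// texels roughly match pixel size. Without a mip chain the hardware always
// samples the full-resolution level, so distant surfaces are undersampled
// and shimmer as the camera moves -- the flicker described above.
float mip_level(float texels_per_pixel, int num_levels) {
    float level = std::log2(std::max(texels_per_pixel, 1.0f));
    return std::min(level, static_cast<float>(num_levels - 1));
}

int main() {
    // A surface covering 8 texels per pixel wants mip level 3 (2^3 = 8).
    std::printf("%.1f\n", mip_level(8.0f, 10));  // prints 3.0
}
```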
There are a number of licensees world-wide that still use it, but on previous generation hardware.
Once we got to the period of Burnout 3, Revenge and Black... the last PS2 games we did (the machine was epic!)... Renderware was still evolving, but we were looking at the next gen at that point. We were working with EA Tech on a new version of it – you can call it that, though it doesn't have the same name – a new evolution of Renderware with the same ethos, for use internally.
It's completely arbitrary. Something simple like an XML reader, you can expect that to be used as-is. But for something heavily written for a particular game...
Ah yes...
There is a system internally in EA where we share knowledge. If we've got interesting, cool ways of doing things, we're very up for sharing that. There are a number of systems in place for sharing code... there are some things you can package up nicely, little nuggets of tech that are shared and we use some of that in our code. But the stuff that's critical to your game you'll want to understand enough to use it. That doesn't mean you need to write it, but you do need to invest a lot of time to study someone else's code to understand it enough to use it to the best of your ability. It's a balancing act – getting the most out of the fact that EA is a large organisation with a lot of very talented engineers, against having something that's suitable for a game you're writing at the time. There are one or two pieces of code used throughout EA, but they tend to be on the tools and infrastructure side.
Just because a piece of tech might be shared, that doesn't mean it's going to look the same between games. Both Skate and Burnout Paradise use the same level rendering engine.
Skate might be, but Burnout isn't. It's double-buffered.
Minimal latency.
That's Burnout. On a CRT display. From the point of the input being read to the display being flipped on-screen, that's 50ms.
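To put that figure in context: at 60 frames per second a frame lasts roughly 16.7ms, so 50ms from input to display is about three frame periods. A quick back-of-the-envelope check (our arithmetic, not Criterion's code):

```cpp
#include <cstdio>

int main() {
    const double frame_ms   = 1000.0 / 60.0;  // ~16.67ms per frame at 60Hz
    const double latency_ms = 50.0;           // figure quoted above
    // 50 / 16.67 ~= 3.0: input read in one frame is on screen ~3 frames later.
    std::printf("%.1f frames of latency\n", latency_ms / frame_ms);
}
```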
Very sensible. But as you probably know, most of the latency in gaming today comes from the processing going on in the LCD TV. They often have five, ten frames of latency.
Yes, we try to get the latency down to the lowest possible, because it's just a better experience. It's one of the reasons Burnout runs at 60FPS.
You don't necessarily have to take it into account, but the latency and the percentage of the frame it occupies...
Everyone's got their budgets... so if you're covering AI, we'll say "Right, you've got five, ten per cent of a 60Hz frame"...
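Translating those percentages into wall-clock time: a 60Hz frame is about 16.67ms, so a five to ten per cent budget for one system works out at under two milliseconds, sustained every frame. A trivial illustration, with the shares as hypothetical examples:

```cpp
#include <cstdio>

int main() {
    const double frame_ms = 1000.0 / 60.0;  // 60Hz frame budget, ~16.67ms
    // "Five, ten per cent" of the frame for a single system such as AI:
    const double shares[] = {0.05, 0.10};
    for (double share : shares) {
        std::printf("%.0f%% of a 60Hz frame = %.2fms\n",
                    share * 100.0, frame_ms * share);
    }
}
```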
That's the key word. If you sustain it, you'll achieve it. If you leave it and try to claw it back at the end, you won't.
It's about parallelism, not necessarily threading...
It doesn't. It depends on how you do it, actually. If you just say, "I have four processors, I'll have four threads, and I'll run something on one, something on the other", then depending on how you synchronise them, there can be massive latency... or no latency. You can do what I'd imagine a lot of teams do: run a bit of your game and then hand it off to the next thread, so one frame does some of the physics, the next some of the AI, then the next does the rendering. You do that and you're queuing up massive latency over many frames, and that's bad. But you can do the same amount of work differently; instead of deferring it a frame and doing it in one long run, you can break it into chunks and do them simultaneously using as many processors as possible. Because you've used all the processors between the start and the end of the frame, you can get the latency down that way while still maintaining a lot of performance. I think the best games will choose the best models... not just for the entire game, but for bits of each game. For example, maybe physics needs to be really low latency, but AI path-finding to a far part of the world can be slower and you won't notice.
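In code terms, the two models Fry contrasts look something like this. The sketch below is our illustration, not Criterion's engine: a pipelined design hands each stage's output to another thread to be picked up next frame, accumulating a frame of latency per stage, whereas a fork-join design splits one frame's work into chunks, runs them across the cores, and joins before presenting, so latency stays within the frame.

```cpp
#include <future>
#include <vector>

// Hypothetical stand-ins for one slice of a frame's work and the final
// display flip.
void simulate_chunk(int /*chunk*/) { /* physics, AI, etc. for one slice */ }
void present_frame()               { /* flip the display */ }

// Fork-join model: split this frame's work into chunks, run them across
// the available cores, and join before presenting. Nothing is deferred to
// a later frame, so input-to-display latency stays at a single frame --
// unlike a pipeline, where each stage handed to the next thread adds a
// frame of delay.
void run_frame(int num_chunks) {
    std::vector<std::future<void>> workers;
    workers.reserve(num_chunks);
    for (int i = 0; i < num_chunks; ++i)
        workers.emplace_back(std::async(std::launch::async, simulate_chunk, i));
    for (auto& w : workers) w.get();  // join: all of this frame's work is done
    present_frame();
}

int main() { run_frame(4); }
```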
There were a number of challenges – the key thing is just to take them all on, take a look at your game design and don't try anything that's going to be insurmountable. You have to run some numbers, work out whether it makes sense or not. Does it roughly add up or not? Streaming... that's independent of the rendering... you render what you stream... as long as you can stream it into memory and render it on time then you're OK. So we split streaming into one focus group, and rendering into another and approached it that way. I think streaming was one of the biggest hurdles...
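As an example of "running the numbers" on streaming: if the player can cross the world faster than the drive can deliver the data for the ground being covered, no amount of clever scheduling will save you. A back-of-the-envelope sketch with entirely made-up figures:

```cpp
#include <cstdio>

int main() {
    // All three figures are invented for illustration only.
    const double car_speed_mps     = 80.0;    // top speed, metres per second
    const double world_bytes_per_m = 150e3;   // asset data per metre of road
    const double drive_bps         = 20e6;    // sustained read speed, bytes/s

    // Bandwidth needed to keep the world loaded ahead of the car:
    const double required_bps = car_speed_mps * world_bytes_per_m;
    std::printf("need %.1f MB/s, drive gives %.1f MB/s -> %s\n",
                required_bps / 1e6, drive_bps / 1e6,
                required_bps <= drive_bps ? "roughly adds up"
                                          : "doesn't add up");
}
```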