Tech Interview: Halo: Reach
An exclusive, in-depth chat with Bungie on the making of Reach.
The idea behind temporal anti-aliasing is fairly simple: the stuff you're rendering in a given frame is very likely to be nearly the same as the previous frame, so why not leverage all that work you did drawing the previous frame to help improve the current frame? Our particular approach applies half-pixel offsets in the projection matrix every other frame, then does a selective quincunx blend between the last two frames.
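To make that concrete, here is a minimal sketch of the jitter step in C++, assuming a D3D-style row-vector projection matrix; the function name and the exact plus/minus alternation are illustrative, not Bungie's actual code:

```cpp
#include <cstdint>

struct Mat4 { float m[4][4]; }; // row-vector (D3D-style) convention assumed

// Shifts the projection by half a pixel in alternating directions each
// frame. The offset goes into the third row, so it is multiplied by the
// view-space z and then cancelled by the perspective divide, leaving a
// constant sub-pixel shift on screen.
void ApplyTemporalJitter(Mat4& proj, uint64_t frameIndex, int width, int height)
{
    // One pixel spans 2/width of normalised device coordinates, so half a
    // pixel is 1/width horizontally (and 1/height vertically).
    const float dir = (frameIndex & 1) ? 1.0f : -1.0f;
    proj.m[2][0] += dir / (float)width;
    proj.m[2][1] += dir / (float)height;
}
```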
The 'selective' part is that we turn off the frame blending per pixel, based on the calculated screen-space motion. That is, if the pixel has not moved, we blend it, and if it has moved, we don't. On the static parts of a scene it's much more effective than standard 2x MSAA, because we do gamma-correct blending, which looks much better than the blending implemented in hardware, and we are using the quincunx pattern as well.
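A simple way to picture the per-pixel decision is the resolve step below, written as plain C++ for clarity (in practice this runs in a pixel shader). The motion threshold and the single 50/50 blend are assumptions for illustration, and the quincunx tap pattern is omitted:

```cpp
#include <cmath>

struct Color { float r, g, b; };

static float ToLinear(float c) { return std::pow(c, 2.2f); }
static float ToGamma(float c)  { return std::pow(c, 1.0f / 2.2f); }

// Blend the current frame with the previous one only where the pixel has
// not moved. Averaging is done in linear light (gamma-correct), which is
// what makes the software blend look better than the hardware one.
Color ResolveTemporalAA(Color current, Color history, float motionPixels)
{
    const float kMotionThreshold = 0.5f; // assumed cutoff, in pixels
    if (motionPixels > kMotionThreshold)
        return current; // moved: the history sample is stale, skip the blend

    Color out;
    out.r = ToGamma(0.5f * (ToLinear(current.r) + ToLinear(history.r)));
    out.g = ToGamma(0.5f * (ToLinear(current.g) + ToLinear(history.g)));
    out.b = ToGamma(0.5f * (ToLinear(current.b) + ToLinear(history.b)));
    return out;
}
```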
The downside is that motion flips it off, and although aliasing is less noticeable when you are moving around, you can still see it. Another disadvantage is that it can't handle multiple layers of transparency where some layers are stationary and others are moving: any transparent surface has to decide whether or not to overwrite the pixel's motion data, depending on how opaque it is. The huge advantage of temporal anti-aliasing is that it's nearly free - much cheaper than MSAA with tiling.
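That per-surface choice can be as simple as an opacity threshold. A purely illustrative sketch - the cutoff value is an assumption, not the shipped heuristic:

```cpp
struct TransparentSurface { float opacity; };

// If a transparent is mostly opaque, what you actually see moving is the
// transparent itself, so it should overwrite the motion data beneath it;
// a nearly clear one should leave the background's motion alone.
bool OverwritesMotionData(const TransparentSurface& surface)
{
    return surface.opacity > 0.5f; // assumed, illustrative cutoff
}
```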
The ghosting artifact in the beta was caused by the first-person-view geometry (your arms and weapon) not properly calculating screen-space motion, so it failed to switch off the frame blending when it moved. We just fixed that bug and it worked.
The AO replacing the shadow-map is just a happy coincidence, but we'll take advantage of it, intentional or not. The algorithm is actually a heavily modified and optimised form of HDAO, so it's naturally a screen-space effect: the ambient shadow is a constant size, in screen pixels, no matter how far away you are. This means objects that are far away appear to have large AO shadows, and the nearby ones have only a slight contact shadow near their feet. The artists preferred the look over constant world-size shadows, and it was also more efficient, so we killed two birds with one stone.
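As a rough illustration of why the shadow stays a constant size in pixels, here is a toy screen-space AO pass in C++; the sample pattern and thresholds are invented for the example and are nothing like the real, heavily optimised HDAO variant:

```cpp
// Toy screen-space ambient occlusion over a linear depth buffer.
float ScreenSpaceAO(const float* depth, int width, int height, int x, int y)
{
    // Fixed texel-space offsets: the kernel covers the same number of screen
    // pixels whatever the surface depth, which is why distant objects appear
    // to receive proportionally larger AO shadows.
    static const int kOffsets[4][2] = { {-2, 0}, {2, 0}, {0, -2}, {0, 2} };

    const float centre = depth[y * width + x];
    float occlusion = 0.0f;
    for (const auto& o : kOffsets)
    {
        const int sx = x + o[0], sy = y + o[1];
        if (sx < 0 || sy < 0 || sx >= width || sy >= height)
            continue;
        const float d = centre - depth[sy * width + sx];
        // A neighbour closer to the camera occludes the centre; the upper
        // bound rejects large depth gaps to avoid haloing. Both limits are
        // illustrative values.
        if (d > 0.01f && d < 1.0f)
            occlusion += 1.0f;
    }
    return 1.0f - occlusion / 4.0f; // 1 = fully lit, 0 = fully occluded
}
```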
It's actually almost exactly the same algorithm as Halo 3's, but the appearance was improved by several changes. When we calculate the pixel motion/blur direction, we clamp it: in Halo 3 we clamped to a square, and now we clamp to a circle. Clamping to a square has the problem that fast motions always end up in the corners of the square, resulting in diagonal blurs that don't follow the actual direction of the motion. On top of that, the improved per-pixel motion estimation for the temporal anti-aliasing helped give better results for the motion blur as well. Oh, and the motion blur is no longer gamma-correct, which makes it less physically accurate, but also faster and more noticeable.
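The clamp change is easy to show directly. A small sketch, with an assumed maximum blur length:

```cpp
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Halo 3 style: clamp each component independently. Any sufficiently fast
// motion saturates both axes, so the blur snaps to a 45-degree corner
// direction regardless of where the pixel is actually heading.
Vec2 ClampToSquare(Vec2 v, float maxLen)
{
    return { std::clamp(v.x, -maxLen, maxLen),
             std::clamp(v.y, -maxLen, maxLen) };
}

// Reach style: clamp the vector's length and keep its direction, so fast
// blurs still follow the true motion.
Vec2 ClampToCircle(Vec2 v, float maxLen)
{
    const float len = std::sqrt(v.x * v.x + v.y * v.y);
    if (len <= maxLen || len == 0.0f)
        return v;
    const float s = maxLen / len;
    return { v.x * s, v.y * s };
}
```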
It's a pretty big topic, but in a nutshell: it calculates the waves in an offscreen texture as the superposition of many splash/wave particles, uses the GPU tessellator to convert that into a mesh on screen, and runs a custom refraction/reflection/fog/foam shader to render it. For Reach we spent a lot of time optimising the heck out of it, so we could use it on a much grander scale. We sped up the shader several-fold, turning off things like refraction when you're far away and stopping the animation when you're not looking at it. The visual improvements were mainly the result of more polish in setting up the shaders.
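The superposition idea reduces to a sum of simple ripples; here is a CPU-side sketch with an invented radial waveform (the real system accumulates into an offscreen texture on the GPU):

```cpp
#include <cmath>
#include <vector>

struct WaveParticle { float x, y, amplitude, wavelength, phase; };

// Height of the water surface at (x, y) as the superposition of all live
// wave/splash particles: each contributes a radial ripple and the heights
// simply add. The cosine waveform is illustrative.
float WaterHeight(const std::vector<WaveParticle>& particles, float x, float y)
{
    const float kTwoPi = 6.2831853f;
    float height = 0.0f;
    for (const auto& p : particles)
    {
        const float dx = x - p.x, dy = y - p.y;
        const float r = std::sqrt(dx * dx + dy * dy);
        height += p.amplitude * std::cos(kTwoPi * r / p.wavelength + p.phase);
    }
    return height;
}
```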
The single biggest factor was our new system to automatically generate a low-LOD version of every object and piece of level geometry in the game. This will actually be presented by Xi Wang at GDC. To give you a short summary, it builds a very efficient vertex-shaded version of each object and piece of level geometry. These LOD models render extremely fast, can be batched, and look nearly the same at distance. And because it was an automatic process we didn't have to take time from the artists. We also improved our visibility culling algorithms and made use of amortised GPU occlusion queries to reduce the amount of stuff we had to consider each frame.
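The amortisation idea is worth a quick sketch: query only a slice of objects each frame and let everything else coast on its last known answer. The API stand-ins here are hypothetical placeholders, not a real GPU interface:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical placeholders for a real asynchronous GPU query API: issuing
// is cheap, and the result is only read back on a later frame.
static void IssueGpuOcclusionQuery(size_t /*objectIndex*/) {}
static bool GpuQueryResultVisible(size_t /*objectIndex*/)  { return true; }

struct CulledObject { bool lastKnownVisible = true; };

// Spread occlusion queries over `slices` frames. Each frame only one slice
// of objects pays the query cost; the rest are culled with results that are
// at most `slices` frames stale, which is usually invisible in practice.
void AmortisedOcclusionPass(std::vector<CulledObject>& objects,
                            uint64_t frameIndex, uint32_t slices)
{
    for (size_t i = frameIndex % slices; i < objects.size(); i += slices)
    {
        // Read back the query issued for this object `slices` frames ago,
        // then kick off a fresh one.
        objects[i].lastKnownVisible = GpuQueryResultVisible(i);
        IssueGpuOcclusionQuery(i);
    }
}
```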
Thanks! I'm going to be presenting a little of this in my GDC talk as well. We created a low-resolution transparent rendering solution to get around the fill-rate/overdraw bottleneck and render a lot more transparent layers. It doesn't use the 360's MSAA fill-rate trick, so it costs a little more, but you don't get the crunchy edges or up-sampling artifacts. I also chopped about 70 per cent off the cost of our patchy fog system, which gave the artists free rein to use it anywhere and everywhere; I think the only area that doesn't use it is the last half of Long Night of Solace, when you're flying around in space.
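To show the structure of such a pass, here's a sketch of the final composite, assuming a half-resolution, premultiplied-alpha transparency buffer; the plain bilinear filter is a stand-in, since the shipped version used smarter filtering to avoid exactly the artifacts mentioned above:

```cpp
#include <cstddef>
#include <vector>

struct RGBA { float r, g, b, a; };

static RGBA Lerp(const RGBA& a, const RGBA& b, float t)
{
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

// Composite a half-resolution, premultiplied-alpha transparency buffer over
// the full-resolution frame. Every transparent layer was shaded at a quarter
// of the full pixel count; the only full-resolution cost left is this single
// upsample-and-blend pass.
void CompositeLowResTransparency(std::vector<RGBA>& frame, int fw, int fh,
                                 const std::vector<RGBA>& low, int lw, int lh)
{
    for (int y = 0; y < fh; ++y)
    {
        for (int x = 0; x < fw; ++x)
        {
            // Map the full-res pixel centre into low-res texel space.
            const float u = (x + 0.5f) * lw / (float)fw - 0.5f;
            const float v = (y + 0.5f) * lh / (float)fh - 0.5f;
            const int x0 = u < 0.0f ? 0 : (int)u;
            const int y0 = v < 0.0f ? 0 : (int)v;
            const int x1 = x0 + 1 < lw ? x0 + 1 : x0;
            const int y1 = y0 + 1 < lh ? y0 + 1 : y0;
            float fx = u - x0; if (fx < 0.0f) fx = 0.0f;
            float fy = v - y0; if (fy < 0.0f) fy = 0.0f;

            // Plain bilinear upsample of the transparency layer.
            const RGBA s =
                Lerp(Lerp(low[(size_t)y0 * lw + x0], low[(size_t)y0 * lw + x1], fx),
                     Lerp(low[(size_t)y1 * lw + x0], low[(size_t)y1 * lw + x1], fx), fy);

            // Premultiplied-alpha "over": dst = src + dst * (1 - src.a).
            RGBA& d = frame[(size_t)y * fw + x];
            d.r = s.r + d.r * (1.0f - s.a);
            d.g = s.g + d.g * (1.0f - s.a);
            d.b = s.b + d.b * (1.0f - s.a);
        }
    }
}
```

The fill-rate win comes straight from the arithmetic: at half resolution in each dimension, every transparent layer touches a quarter of the pixels, so four layers cost roughly what one full-resolution layer would.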
Yes, the 1GB dev kits were pretty useful - they let us run debug versions of nearly full builds of the game, although the major beneficiaries were the artists and designers, who could load levels in editing mode but still see the high resolution textures of the final game.
And I believe you're talking about the back-buffer used by the 360's UI, which amounted to about 3 megabytes, I think. When you launch a game, the console keeps the previous application's back-buffer around for one frame so you can do a fancy fade or transition if you want to. The original version of Halo 3 didn't free that memory, which meant you had 3 megabytes less available for streaming in high resolution textures. But one of the Halo 3 title updates fixed it, so now that memory is available to the game. The fix was in ODST and Reach from the start.