
Tech Interview: Halo: Reach

An exclusive, in-depth chat with Bungie on the making of Reach.

Digital Foundry: Animation is hugely improved over Halo 3. We know that you have your own motion capture facilities in-house - how has this improved your workflow, and what is the impact on animation quality in-game?
Richard Lico

Thank you, we're very proud of the results. The decision to embrace mocap ended up being one of the animation team's biggest wins. Our goal was to increase the realism and natural fidelity of human locomotion, as well as to speed up the content creation process. Both the cinematic and gameplay animation teams leveraged the system heavily, with great results. We also made a huge investment in our runtime animation systems which use the data.

Depending on the content requirements and the individual animator's style of working, our approach to incorporating mocap varied. The cinematic team would hire professional actors to spend all day doing multiple takes of a few specific scenes, and use the data fairly strictly, with limited performance edits. This resulted in the honed performances you see, with robust attention to detail. The gameplay animation team, who had system and design constraints to take into consideration, would oftentimes act as our own mocap actors.

There were times when we would bring in professional actors, but found that the animators implementing the motion not only knew all the technical and design constraints, but were outstanding actors themselves. We would use the data in an animation layer, and alter the motion heavily to fit the content requirements. Or we would use it as 3D reference, potentially yanking the golden poses out of the mocap to help build the foundation of our keyframe animation.
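To make the layering idea concrete, here is a minimal sketch of how an additive mocap layer can sit on top of keyframe animation: the layer stores deltas from a reference pose, so the animator's keys remain the foundation of the motion. All names and types here are invented for illustration; Bungie's actual pipeline is not public.

```cpp
// Toy pose: three joint angles stand in for a full skeleton.
struct Pose {
    float joints[3];
};

// Additively blend a mocap layer on top of a keyframe base pose.
// 'weight' lets the animator dial how much of the capture survives.
Pose ApplyMocapLayer(const Pose& base, const Pose& mocap,
                     const Pose& mocapReference, float weight)
{
    Pose out;
    for (int i = 0; i < 3; ++i) {
        // The layer contributes the *difference* from a reference pose,
        // so the keyframe animation remains the foundation.
        float delta = mocap.joints[i] - mocapReference.joints[i];
        out.joints[i] = base.joints[i] + weight * delta;
    }
    return out;
}
```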

Bungie's internal motion capture studio has helped to dramatically improve the animation over what we saw in Halo 3. It's not just body movement that gets a boost either. Facial animation is improved with a technique known as 'Faceover' where the actor's face is scanned, with the data then touched up by the artists before being added to the game.
Digital Foundry: Can you tell us a bit more about the motion capture process?
Richard Lico

To make the most of our new mocap studio, we hired a full-time mocap technician to oversee the process from capture through to delivery of the data to the animation staff. Once the data is applied to a proxy of the intended character, we can use a suite of mocap-specific rigging tools to transfer that data in any way we see fit to the final production character. Our tech art team built a new character rig that allowed us to work in more free-form, adjustable ways. Our animators are an eclectic bunch who all enjoy working differently.
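As a rough illustration of the proxy-to-production transfer Lico describes, the sketch below retargets a pose from a proxy skeleton onto a production rig via a joint-name mapping. The types and the mapping table are hypothetical stand-ins for Bungie's actual rigging tools.

```cpp
#include <map>
#include <string>

struct JointRotation { float x, y, z; }; // Euler angles, for simplicity

// A pose is just a rotation per named joint in this toy model.
using Pose = std::map<std::string, JointRotation>;

// Copy captured rotations from the proxy skeleton onto the production rig,
// using a designer-authored joint-name mapping.
Pose RetargetToProductionRig(const Pose& proxyPose,
                             const std::map<std::string, std::string>& jointMap)
{
    Pose result;
    for (const auto& [proxyJoint, productionJoint] : jointMap) {
        auto it = proxyPose.find(proxyJoint);
        if (it != proxyPose.end())
            result[productionJoint] = it->second; // transfer the rotation
    }
    return result;
}
```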

Also, the intent of our content, and the elaborate ways in which some of it was used by our animation systems, required very specific crafting methods. Having a flexible set of tools that allowed each animator to choose how to leverage the data was essential.

Lastly, not all of the new animation in Reach is based on mocap; our end result is the culmination of many important advancements. There was a huge investment in the keyframe skillset of the animation team, as well as in the above-mentioned systems and tools that drive the content in game. Our team understands the need for solid characters, and has an uncanny ability to craft compelling performances. In the end, mocap was one tool among many that we used to create the final, shipping character performances. It was a huge benefit, but only a portion of the grand improvements we made to our overall approach to animation for Reach.

Digital Foundry: Did your animation tech require an extensive upgrade to make the most of this rush of new data you had available?
Richard Lico

From a systems perspective, we practically rewrote the book on how our animation content is integrated. Previously, we did not have the ability to move our characters' hips independently from the game pill (the character's collision volume), resulting in noticeably static motion with unrealistic weight shifts. We now have a pedestal system that the entire character animates on, which resolves that issue. Previously, when overlaying poses to define aiming directions, we were forced to keep the underlying animation on the spine and hips very static and subdued, making our characters look stiff.
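A hedged sketch of what such a pedestal indirection could look like: the collision pill moves under game logic, while an animation-driven pedestal offset lets the skeleton's root, and therefore the hips, shift away from the pill's centre. Every name here is an assumption for illustration, not Bungie's code.

```cpp
struct Vec3 { float x, y, z; };

struct Character {
    Vec3 pillPosition;    // collision capsule, driven by game physics
    Vec3 pedestalOffset;  // authored per-frame by the animation data
};

// World-space root of the skeleton: the pill provides gross movement,
// while the pedestal adds the weight shifts the old system couldn't express.
Vec3 SkeletonRoot(const Character& c)
{
    return { c.pillPosition.x + c.pedestalOffset.x,
             c.pillPosition.y + c.pedestalOffset.y,
             c.pillPosition.z + c.pedestalOffset.z };
}
```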

For Reach, we created a technique we called "space bones", which allowed us to use a combination of IK spine solvers and world-space rotations to make for much more detailed and natural underlying content with solid overlays. We separated our AI content from our player content, with more physically accurate turning, bunkering and jumping systems. I could go on and on, but it's safe to say we made a huge investment in the way our content is incorporated.
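The world-space half of the "space bones" idea can be pictured as follows: the aim pose is expressed as a desired world-space rotation, and a local correction is computed on top of whatever the underlying animation is doing, so the aim stays solid while the spine and hips stay lively. This is a speculative reading of the description above, not the shipped implementation.

```cpp
struct Quat { float w, x, y, z; };

// For unit quaternions, the conjugate is the inverse rotation.
Quat Conjugate(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

// Hamilton product: rotation b followed by rotation a.
Quat Multiply(const Quat& a, const Quat& b)
{
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Given the spine joint's animated world rotation and the desired
// world-space aim, return the correction to layer on top of the animation,
// i.e. the rotation satisfying: correction * animatedWorld == desiredWorldAim.
Quat AimCorrection(const Quat& animatedWorld, const Quat& desiredWorldAim)
{
    return Multiply(desiredWorldAim, Conjugate(animatedWorld));
}
```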

The Elites were designed to be aggressive, harrying and pressuring the player, which had the side benefit of allowing Bungie to up the danger level without simply throwing more Covenant at the player.
Digital Foundry: AI has been another defining feature of the Halo games. What would you say are the key characteristics of Bungie's approach to AI?
Chris Opdahl

Two things make the AI in Halo stand out, in my mind. The first is its remarkable ability to react in many situations based on how the player is choosing to interact with the encounter. The second is how we give the Mission Designers the ability to oversee where the AI can move. We don't expect the Mission Designers to think for the characters, but we do want them to think for the player: to consider how any player might choose to interact with the encounter, and then use the tools we give them to influence how the AI will react to what the player does. For example: if the player moves to the left and fights around these pieces of cover, the AI should move into these positions; if the player chooses to hang back and snipe, the AI should fall back to find cover behind these buildings and force the player to move in closer to deal with the Covenant.
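A toy example of the kind of designer-facing hook Opdahl describes, with wholly invented types: scripted rules map the player's current zone to the firing points the AI should favour, while the sandbox AI keeps the moment-to-moment decisions within those positions.

```cpp
#include <string>
#include <vector>

// One designer-authored rule: when the player is in this zone,
// steer the AI toward these firing points.
struct EncounterRule {
    std::string playerZone;            // e.g. "left_flank", "sniper_perch"
    std::vector<int> preferredPoints;  // firing-point IDs the AI should favour
};

// The mission designer authors the rules; the sandbox AI still decides
// the second-to-second details within the positions it is steered toward.
std::vector<int> PreferredFiringPoints(const std::vector<EncounterRule>& rules,
                                       const std::string& currentPlayerZone)
{
    for (const auto& rule : rules) {
        if (rule.playerZone == currentPlayerZone)
            return rule.preferredPoints;
    }
    return {}; // no designer override: the sandbox chooses freely
}
```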

By letting the Sandbox (the actual AI of the character) manage the moment-to-moment behaviour, and letting the Mission Designer think about how the AI should move through the spaces, we end up with a nice balance between a carefully crafted gameplay moment and the chaos of a true sandbox experience. It also allows the Mission Designers to script around any situations we were not able to craft into the AI of the characters.

Max Dyckhoff

From an engineering standpoint, we are keen to create living entities that will fight and interact without any need for Mission Designers to corral them to where they need to go. As Chris said, this means that the designers can just worry about directing the AI around the encounter at a very high level, and generally not have to worry about the second-to-second gameplay.

From a player's point of view, it means the AI's responses are consistent across multiple levels, so the player can learn what to expect from an enemy type and, consequently, how to beat them.

We are also careful to make the AI's representation of the world as realistic as possible; we don't want the AI to cheat when it's fighting you, because the player isn't stupid and will feel cheated. Because of this we invest a good deal of time in a believable perception model, which has been around since Halo 2 but has received a good amount of love in the last couple of years. This principle holds across all the new systems we build for the AI, from movement to space flight, weapon use and so on.
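As a flavour of what a non-cheating perception check involves, here is a minimal sketch, assuming a simple vision cone plus a line-of-sight query; the actual Halo model is considerably richer, and all names below are illustrative.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 Normalize(const Vec3& v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Stand-in for the engine's raycast against level geometry; a real game
// would query the collision system here.
bool HasLineOfSight(const Vec3& /*from*/, const Vec3& /*to*/)
{
    return true; // stub: assume unobstructed for this sketch
}

// The AI only "sees" the target if it sits inside the vision cone
// AND nothing occludes it, so it plays by the same rules as the player.
bool CanSee(const Vec3& aiEye, const Vec3& aiForward,
            const Vec3& target, float halfFovRadians)
{
    Vec3 toTarget = Normalize({ target.x - aiEye.x,
                                target.y - aiEye.y,
                                target.z - aiEye.z });
    // Outside the vision cone: the AI honestly doesn't see the player.
    if (Dot(Normalize(aiForward), toTarget) < std::cos(halfFovRadians))
        return false;
    // Inside the cone, but possibly blocked by geometry.
    return HasLineOfSight(aiEye, target);
}
```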

AI is paramount to the core of Halo. If you're interested in learning more about our approach, we've already published some of our examinations of the topic on Bungie.net.

Digital Foundry: In Reach we see the return of the Elites, who look to have received plenty of attention from an AI perspective. What did you want to achieve with them, and how does the behaviour change between different Elite types and between different difficulty levels?
Chris Opdahl

The main player-experience goal for the Elites was to make them scary. We wanted the player to always feel a little tinge of pressure when they saw or heard an Elite. We planned to accomplish this by making the Elites quick and aggressive, so that the player always knew an Elite was just moments away from their position and, potentially, their death. We also made them chase after a retreating player more aggressively. If the Mission Designer gave the Elites the ability to move into the player's space, they would often chase after the player pretty quickly when the player fell back into cover.
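One plausible shape for the chase decision described here, written with invented names: the Elite only presses forward when the player is retreating, the designer has granted it access to the player's space, and contact is recent, so the pressure lands while the player is still exposed.

```cpp
struct PursuitContext {
    bool  playerRetreating;     // player broke contact and fell back
    bool  canEnterPlayerSpace;  // designer-granted territory permission
    float secondsSinceContact;  // time since the Elite last saw the player
};

enum class ElitePlan { HoldPosition, AdvanceOnPlayer };

ElitePlan ChoosePlan(const PursuitContext& ctx)
{
    // Chase quickly: the pressure window is short, so the threshold is small.
    const float kChaseDelaySeconds = 2.0f;
    if (ctx.playerRetreating &&
        ctx.canEnterPlayerSpace &&
        ctx.secondsSinceContact < kChaseDelaySeconds)
        return ElitePlan::AdvanceOnPlayer;
    return ElitePlan::HoldPosition;
}
```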

That aggressiveness had a nice bonus side effect we also wanted to achieve: a smaller number of Covenant became dangerous to the player. We found that when the Covenant were not aggressive, we needed to keep adding more and more enemies into a space to reach the desired difficulty, but when we made them more aggressive they could pressure a retreating player and make the encounter more difficult without our having to simply add more enemies.

This also helped the encounter challenges feel "fair" rather than cheap. If a player is chased into their cover location by an Elite and killed, they can make fairly simple changes to their plan and tackle the encounter again with (hopefully) more success. That "rethink the plan, then execute" loop is a very powerful tool for creating a sense of fairness, even in the face of extreme challenge.

Max Dyckhoff

One of the very early things we did for the AI was push more of the decision-making processes out into data, so that they could be more readily modified and iterated upon by the character designer. A big part of this was the firing-point evaluation code, which describes where a character would like to stand. Very quickly, Chris Opdahl was making significant modifications to the Elites' movement, with beneficial effects.
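To illustrate what pushing decision making "out into data" can mean in practice, the sketch below scores firing points with a weighted sum whose weights live in per-character tuning data, so a designer could retune movement without touching engine code. The specific scoring terms and structure are invented for this example.

```cpp
#include <limits>
#include <vector>

// Authored per character type (e.g. Elite) in external tuning data,
// so iteration doesn't require an engine change.
struct FiringPointWeights {
    float cover;
    float proximityToTarget;
    float flanking;
};

struct FiringPoint {
    float coverScore;      // how well this point shields the character
    float proximityScore;  // closeness to the desired engagement range
    float flankingScore;   // how far around the player's flank it sits
};

// Pick the highest-scoring candidate under the designer's weights.
int BestFiringPoint(const std::vector<FiringPoint>& points,
                    const FiringPointWeights& w)
{
    int best = -1;
    float bestScore = -std::numeric_limits<float>::infinity();
    for (int i = 0; i < static_cast<int>(points.size()); ++i) {
        float score = w.cover * points[i].coverScore
                    + w.proximityToTarget * points[i].proximityScore
                    + w.flanking * points[i].flankingScore;
        if (score > bestScore) { bestScore = score; best = i; }
    }
    return best; // -1 when there are no candidate points
}
```

Raising the flanking weight in data, for instance, would immediately make a character type circle the player more, which is the kind of rapid iteration the quote describes.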