E3: Post-Natal Discussion
Microsoft's Alex Kipman explains the technology.
The sensor itself has a lot of magic built in. It wouldn't be interesting for us to go to our developers and say, 'Hey, you can create all these brand new, awesome experiences but you need to do a lot of processing outside of the game.'
So we have a custom chip that we put in the sensor itself. The chip we designed does the majority of the processing for you, so as a game designer you can think of the sensor as a normal input device - something that's essentially free for you to use.
Designers have 100 per cent of the resources of the console and this device is just another input device they can use. It's a fancy, cool, awesome device, but essentially you can treat it as free from the platform's perspective, because all of the magic - all of the processing - happens sensor-side.
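To make that "just another input device" idea concrete, here is a minimal sketch of how a game loop might consume per-frame skeleton data, assuming a hypothetical polling interface; the SkeletonFrame, Joint and pollSensor names are illustrative, not a real Natal API.

```cpp
#include <array>
#include <cstdio>

// Hypothetical per-joint sample. All the heavy depth and skeleton
// processing is assumed to happen on the sensor's own chip, so the
// console only ever reads finished results.
struct Joint {
    float x, y, z;  // position in sensor space (metres)
    bool  tracked;  // whether the joint was found this frame
};

// One frame of tracking data: 48 joints, as described above.
struct SkeletonFrame {
    std::array<Joint, 48> joints;
};

// Stand-in for the real device driver; here it just fabricates a
// single tracked joint so the example runs end to end.
bool pollSensor(SkeletonFrame& out) {
    out = SkeletonFrame{};
    out.joints[0] = {0.0f, 1.6f, 2.0f, true};  // index 0 assumed = head
    return true;
}

int main() {
    SkeletonFrame frame;
    // Polled like a gamepad: the console spends no cycles computing
    // the skeleton, it only consumes the data.
    if (pollSensor(frame) && frame.joints[0].tracked) {
        const Joint& head = frame.joints[0];
        std::printf("head at (%.2f, %.2f, %.2f)\n", head.x, head.y, head.z);
    }
    return 0;
}
```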
Burnout is almost a bad example because it's an old game that wasn't designed for Natal. I would say Tomb Raider would need to be designed for Natal from the get-go. I've presented this to most of our third-party developers and from the creator's perspective you start thinking of brand new game mechanics, brand new ways of interacting with the game.
So the Tomb Raider team would come up with all of their different game mechanics and represent them with different Natal experiences. Lara does a lot of headstands, and I wouldn't expect people in their living room to be doing headstands. As a game designer you'd have to come up with a natural gesture that maps to mounting into a headstand.
How can I make a user in their living room feel like Lara Croft without being as fit as Lara Croft? Because none of us are! This is the thing that really excites the game designers we've been talking to, both first-party and third-party. They look at this as a brand new set of paint colours and paintbrushes they can use to paint brand new experiences.
Game designers will have to come up with what is natural. I can think of several different options. You could say, hey, do this to accelerate [mimes pushing a steering wheel forwards] or this [pushes his shoulders forwards], or this to brake [pulls backwards].
And remember I'm tracking 48 joints individually, so there are so many combinations. I just gave you a few I thought of off the top of my head, but game designers could come up with anything. For all I care you could use your head to go forward - it's not very natural, but you could use any number of things as a game designer.
We're not making any predictions about the gestures; we think that's very constraining for game designers. We're saying, we're tracking 48 joints per frame in real time - use the combination of those things to create a rich vocabulary of gestures that allows you to create brand new experiences.
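As one illustration of building a gesture from those tracked joints, here is a sketch of the lean-forward accelerate and lean-back brake idea mentioned above; the coordinate convention, joint choice and threshold are assumptions for the example, not anything Natal specifies.

```cpp
// Sketch of the lean-to-accelerate / lean-to-brake gesture described
// above. The coordinate convention (z = distance from the sensor)
// and the 10cm threshold are arbitrary assumptions.
struct Vec3 { float x, y, z; };

enum class DriveInput { Accelerate, Brake, Coast };

DriveInput classifyLean(const Vec3& shoulderCentre, const Vec3& hipCentre) {
    const float kLeanThreshold = 0.10f;           // metres
    float lean = hipCentre.z - shoulderCentre.z;  // positive = leaning in
    if (lean >  kLeanThreshold) return DriveInput::Accelerate;
    if (lean < -kLeanThreshold) return DriveInput::Brake;
    return DriveInput::Coast;
}
```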
By the way, our system is able to understand these compound gestures in real-time, so you can really live up to this whole "all you need is life experience" idea. You teach the machine to understand the users as opposed to teaching the users to understand the machine.
You do that because there is no single gesture for any action - there will be several gestures for a single action, and as a game designer you can manage all of these things and essentially graft them onto your new experience as game mechanics. So you can have really simple, fun, "jump in" experiences.
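One plausible way to realise that several-gestures-per-action idea is a plain lookup table from recognised gestures to game actions, sketched below; every gesture and action name here is hypothetical.

```cpp
#include <string>
#include <unordered_map>

enum class Action { Accelerate, Brake, Jump };

// Several recognised gestures resolve to the same action, so players
// can use whichever movement feels natural to them. All names are
// made up for illustration.
const std::unordered_map<std::string, Action> kGestureToAction = {
    {"lean_forward",   Action::Accelerate},
    {"push_wheel_out", Action::Accelerate},  // alternative accelerate
    {"lean_back",      Action::Brake},
    {"pull_wheel_in",  Action::Brake},       // alternative brake
    {"hop",            Action::Jump},
};

// Map whatever gesture the recogniser reported this frame to an action.
bool actionFor(const std::string& gesture, Action& out) {
    auto it = kGestureToAction.find(gesture);
    if (it == kGestureToAction.end()) return false;
    out = it->second;
    return true;
}
```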
Alex Kipman is project director of Project Natal.