One of the games I’m looking forward to the most but will ultimately be unable to play is L.A. Noire. (I lack a PS3 at this point, and the game won’t be coming out on PC. Not that my computer could handle it, but still, it would be nice.) What I’m looking forward to the most from this title, however, is the technology behind it and where it will go in the future, not only in game design, but in entertainment production in general.
For those of you out there who don’t know what the new technology is even called, RockStar has provided a behind-the-scenes video of it on their YouTube Channel. Here is their introduction to the technique known as MotionScan.
If you’re done drooling over the pretty graphics or freaking out at the fact that this game dives head-first into the Uncanny Valley, then let me present you with a theory of mine.
When this game comes out, everyone—and I mean everyone—will be talking about this new MotionScan technology to anyone who hasn’t heard about it yet. Depending on how RockStar plays its cards, there may even be special segments about the technology on various morning show programs or technology news sources. The future most commonly projected for this technology will be its use in the film industry. Why have actors sit in a make-up chair and have a dot matrix applied to their faces when that same make-up artist could just doll them up for a room with 32 cameras aimed at them from all angles?
There are two reasons why.
The first reason is that video games still carry a stigma right now when compared to movies. On the film side of the fence, video games still have a little way left to go before they are seen as interactive stories in every sense of the word. They may be there with L.A. Noire, but we won’t know until the game comes out. Writing for a video game is also different from writing a film. As displayed in the video, RockStar went to a lot of trouble to include scripted fail states in their game that the player may or may not see. Valve did the same thing with Portal 2, recording a fail state script that a lot of people will never hear unless they stop what they are doing and turn around to listen. (Oh yeah, SPOILER ALERT with that link.) On the video game side, this kind of interactivity in the story is central to what makes a game engaging and immersive to begin with. It makes an otherwise passive experience active. However, at the most basic level, both film and video games are media for storytelling, and it’s unfortunate that there are people on both sides who fail to see this.
The second reason is the film industry’s bias when it comes to new technology. Remember when Lucas said he wanted the Star Wars prequels to be all digital in format because that’s what the future of film was going to be? These days, you’ll be hard-pressed to find a theatre that doesn’t have at least one digital projector offering a digital format of a film at some point in its operating season. Now you have James Cameron and Peter Jackson promoting increased frames-per-second speeds as the next step in presenting a richer film image. Cameron also pioneered a real-time rendering system that was used on Avatar, which probably hasn’t been used since that film’s production. The potential applications for the technology are still there, but cost will always be an issue.
The same can be said for familiarity. Motion capture animation has seen its use increase tremendously in the last several years. Production companies have become familiar with how it works, along with the benefits and limitations of the technology. It’s advanced to the point where all Andy Serkis has to do is have a make-up artist draw a grid on his face. The computer does most of the work, with a post-production team doing the clean-up so it looks good for Peter Jackson. L.A. Noire is also using motion capture to animate and render the body performances of its actors. But I seriously doubt the Robert Zemeckis animation studio responsible for Mars Needs Moms is going to use MotionScan for its next mo-cap film. Why? Because as of right now, they are not familiar enough with the MotionScan technology to know how to use it. So far, only one product has used it, and it’s a video game. A video game that had to build the tech in order to produce its most immersive gameplay element yet.
That's not a mic. That's a camera... for his face.
Still, the technology is here, and its potential applications are being drawn out in the minds of creative individuals as I type this. It will be interesting to see where MotionScan goes, what it will be used for, and whether it will be applied to other media formats like film or television. If it goes anywhere at all, that is.
The most likely scenario is that MotionScan doesn’t leave the realm of video games. With that industry becoming more and more like the film industry in how it produces its cinematics and stories, MotionScan is probably the closest thing to making an actor feel comfortable with the idea of a player controlling their character. Actors are already familiar with unusual camera set-ups and being filmed in unconventional ways for a movie. MotionScan offers the same familiarity while giving game developers the ability to turn the actor’s head slightly if the eye line doesn’t match up exactly, all without compromising the actor’s performance. Once automated lip sync has been applied (GMod, anyone?), the localization process for games that use MotionScan could be made easier. Disney used a similar technique with Kingdom Hearts to save the cost of reanimating most of the mouths in spoken cut scenes. Who is to say RockStar isn’t doing the same thing at this very moment so that Cole Phelps’s mouth actually matches the Italian coming out of it?
Eventually, there will be a production where someone comments on how it would be easier to just film an actor already in make-up in front of a bank of cameras and download that performance onto a computer; how it will save time and processing power spent rendering photo-realistic alterations; and how it may actually be cheaper than paying Weta so many millions of dollars just to create the character, since there aren’t any major changes to the actor’s physical features other than making them look like they are 10 years old. But that won’t happen for another five years…