It's been a long time since we've heard anything about the new performance capture tech developed and implemented on L.A. Noire by Sydney developer Team Bondi. We first got a small hint of the tech from a Depth Analysis job advertisement posted in late 2009, but a recent press release from the Team Bondi-related company reveals a lot more about the groundbreaking capture technology.
(Press Release)
LOS ANGELES & SYDNEY - (Business Wire) Depth Analysis, a Sydney-based technology company, announces today the development of MotionScan – a revolutionary new system poised to forever change the face of high definition 3D motion-capture and actor performances in the Video Game and Motion Picture industries.
MotionScan is the groundbreaking 3D motion-capture system powering Rockstar Games’ uniquely styled detective game ‘L.A. NOIRE’, developed by Team Bondi. MotionScan not only streamlines post-production processing time, and therefore budgets, as compared to traditional methods, but also promises a radical impact on the quality of performances studios can expect to deliver in their game and movie franchises.
“Traditional motion capture could never bring to life the subtle nuances of the chaotic criminal underworld of ‘L.A. Noire’ in the same way as MotionScan,” said Brendan McNamara, Founder & Director of Team Bondi. “MotionScan allows me to immerse audiences in the most minute details of L.A. Noire’s interactive experience, where the emotional performances of the actors allow the story to unfold in a brand new way. Through this revolutionary technology, we’re able to deliver audiences a truly unique and revolutionary game.”
Oliver Bao, Head of Research at Depth Analysis said, “2010 continues the trend of high production values in both triple-A video games and blockbuster movie releases. Audiences now expect detailed CGI actors and realistic performances that pop onscreen with any game or VFX movie they see, and we developed MotionScan technology with this in mind. We have focused on making high fidelity performance capture of actors for games and films affordable, accessible and easy. The end results are cinematic, interactive, and engaging performances like you’ve never seen before.”
Unlike anything currently in the market, MotionScan uses 32 High Definition cameras to capture true-to-life three-dimensional performances at up to 30 frames per second. Capable of capturing up to 50 minutes of final footage and processing up to 20 minutes of facial animation automatically per day, the technology revolutionizes traditional motion-capture and post-production animation. MotionScan records every emotional detail, mannerism, and facial nuance accurately frame by frame as 3D models. No markers or phosphorescent paint need to be applied to the actors at the time of recording, and no manpower is required to clean up data and animate the finer details by hand after the shoot. For directors and cinematographers, an additional advantage of MotionScan is the ability to view an actor’s performance from any angle and re-light it in any way from one take, without the need for multiple camera and lighting setups that quickly drain production time and budgets. Depth Analysis hopes to redefine audience expectations of what is possible in a video game, at the cinema and at home by opening up new possibilities to established and future film and game studios.
body scanner eh?
So it's a setup with lots of cameras, with depth analysis creating a 3D scanner that records motion. Each frame is a model, so the markers are probably derived from the 3D scans. This would mean the performance only needs to be recorded once; it can be interpreted differently at will in 'post'.
Sounds like it would require some beefy rigs to work with, though; 50 minutes of scanned geometry at a usable resolution is pretty huge. Still, in the long run it's a better place for bottlenecks, with tech improving over time. The 20 minutes of facial animation is maybe just higher resolution?
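For a rough sense of "pretty huge": the press release only gives 50 minutes and 30 fps, so the mesh density and bytes-per-vertex below are purely my assumptions, but the order of magnitude is interesting either way.

```python
# Back-of-envelope storage estimate for per-frame scanned geometry.
# Only FPS and MINUTES come from the press release; the rest is guesswork.

FPS = 30                  # capture rate, per the press release
MINUTES = 50              # final footage per day, per the press release
VERTS_PER_FRAME = 50_000  # assumed mesh density for a full scan
BYTES_PER_VERT = 12       # 3 x 32-bit floats (x, y, z), positions only

frames = MINUTES * 60 * FPS
total_bytes = frames * VERTS_PER_FRAME * BYTES_PER_VERT
print(f"{frames} frames, ~{total_bytes / 1e9:.1f} GB of raw vertex data")
```

Even with these modest assumptions (no normals, no texture, no per-frame connectivity) that's tens of gigabytes per day, so heavy compression or some parameterized representation seems likely.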
Maybe the markers are generated by processing the scans: if someone selects a pile of verts and defines it as a head, a program could run through all the footage and just record that group's position and rotation over time. You'd just get a moving point afterwards, but it might be tricky to interpret the geometry like that.
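That "select verts, track them over time" idea could be sketched as below. Everything here is hypothetical (the per-frame arrays, the `track_region` helper); it just tracks the centroid of a vertex group across frames, which gives the moving point. Recovering rotation as well would need something extra, like aligning each frame's group against a reference pose.

```python
import numpy as np

def track_region(frames, vert_ids):
    """Hypothetical marker extraction: given per-frame vertex arrays
    (each an (N, 3) array of positions) and the indices of a
    hand-selected vertex group (e.g. 'the head'), return the group's
    centroid in every frame - a single moving point."""
    return np.array([frame[vert_ids].mean(axis=0) for frame in frames])

# toy example: a 4-vertex mesh drifting +1 unit in x each frame
frames = [np.zeros((4, 3)) + [float(t), 0.0, 0.0] for t in range(3)]
path = track_region(frames, [0, 1, 2])
print(path[:, 0])  # x coordinate per frame: [0. 1. 2.]
```

Since every frame is a full scan, the vertex group only has to be selected once on a reference frame, as long as vertex correspondence holds between frames, which is itself a big "if" for raw scan data.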
Just thinking out loud, feel free to add or correct.