We reconstruct a controllable model of a person from a large photo collection that captures his or her persona, i.e., physical appearance and behavior. The ability to operate on unstructured photo collections enables modeling a huge number of people, including celebrities and other well-photographed people, without requiring them to be scanned. Moreover, we show the ability to drive or puppeteer the captured person B using any other video of a different person A. In this scenario, B acts out the role of person A but retains his or her own personality and character. Our system is based on a novel combination of 3D face reconstruction, tracking, alignment, and multi-texture modeling, applied to the puppeteering problem. We demonstrate convincing results on a large variety of celebrities derived from Internet imagery and video. Continue reading “Total Moving Face Reconstruction”
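The puppeteering idea described above — driving person B's face model with expressions tracked from person A — is commonly realized with blendshape transfer. The sketch below is purely illustrative and is not the paper's actual method: the mesh, blendshape names, and weights are all made-up toy data, and a real system would track the per-frame weights from A's video rather than hard-code them.

```python
# Hypothetical sketch of blendshape-based puppeteering: person B's face
# mesh is deformed by expression weights tracked from person A's video,
# so B performs A's expressions while keeping B's own appearance.
# All data here is a toy stand-in, not the paper's actual model.

def apply_expression(neutral, blendshapes, weights):
    """Deform B's neutral mesh by a weighted sum of B's blendshape deltas."""
    out = list(neutral)
    for shape, w in zip(blendshapes, weights):
        for i, delta in enumerate(shape):
            out[i] += w * delta
    return out

# Toy 1-D "mesh" for person B: three vertex coordinates.
b_neutral = [0.0, 1.0, 2.0]
# Two illustrative expression blendshapes for B (e.g. smile, brow raise),
# stored as per-vertex deltas from the neutral mesh.
b_shapes = [[0.5, 0.0, 0.0], [0.0, 0.3, 0.1]]
# Per-frame expression weights, as if tracked from person A's video.
a_weights_per_frame = [[0.0, 0.0], [1.0, 0.5]]

frames = [apply_expression(b_neutral, b_shapes, w) for w in a_weights_per_frame]
print(frames)  # frame 0 is neutral; frame 1 mixes both expressions
```

Because the geometry and textures stay B's while only the weights come from A, B retains his or her own identity while acting out A's performance.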
Doug Griffin demonstrates Faceshift expression tracking with a depth sensor made by PrimeSense. Apple quietly bought PrimeSense in 2013.
Prior to being acquired by Apple, Zurich-based real-time motion capture firm Faceshift worked with game and animation studios on technology designed to quickly and accurately capture facial expressions using 3D sensors, including Faceshift Studio software with plugins for Maya and Unity. The company was also working toward consumer-facing software like a Skype plugin that would support real-time avatars for video chat. Continue reading “Apple buys Faceshift”