FaceRig


FaceRig enables anyone to live-puppet a fully 3D figure or a sprite-based 2.5D character. It uses your webcam to map your facial movements and speech onto the avatar – with a growing list of additional modules supporting Intel® RealSense™ cameras and the Leap Motion™ Controller (for hand control). Expressions and idle animations can be triggered by mouse or keyboard, and an app for mobile is on the way.


FaceRig can render QuickTime video or an image sequence, or broadcast a live puppet session as a virtual webcam for Skype or streaming. A number of fun avatars are available. Custom props and rigged models can also be imported, as well as models from Live2D with an add-on. Renders are watermarked even in the Pro version.

FaceRig $14.99
Webcam-based tracking, fully featured for non-commercial home use.

FaceRig Pro Upgrade $64.99
Can be used by people who make significant ad-based revenue from the places where they showcase their creations.

IRFaceRig Free
Free for everyone; it works only with the Intel® RealSense™ SDK and the Intel® RealSense™ Camera on systems with Intel® CPUs.

FaceRig support for Leap Motion™ Controller Free
Combines the Leap Motion™ Controller (for hand tracking) with another regular camera (for face tracking).

FaceRig Live2D Module $3.99
Brings the amazing Live2D technology to FaceRig, enabling hand-drawn avatars that move and behave as if they were 3D while keeping all the aspects that make hand-drawn 2D avatars special. Use the seven models provided, or create your own with the Live2D Cubism Editor.

Under Development:

FaceRig Studio – targeted at businesses; it will also enable numeric mo-cap tracking, intended for use with professional software.

FaceRig Mobile for iOS and Android – currently in development.

FaceRig for Mac and/or Linux – in the development plan; it will most likely come to life after the release of the mobile version.

FaceRig website
FaceRig on Steam

Total Moving Face Reconstruction

We reconstruct a controllable model of a person from a large photo collection that captures his or her persona, i.e., physical appearance and behavior. The ability to operate on unstructured photo collections enables modeling a huge number of people, including celebrities and other well-photographed people, without requiring them to be scanned. Moreover, we show the ability to drive or puppeteer the captured person B using any other video of a different person A. In this scenario, B acts out the role of person A, but retains his/her own personality and character. Our system is based on a novel combination of 3D face reconstruction, tracking, alignment, and multi-texture modeling, applied to the puppeteering problem. We demonstrate convincing results on a large variety of celebrities derived from Internet imagery and video.