Author Archives: wetcircuit

FFmpegOut by keijiro

FFmpegOut is a Unity plugin for offline rendering that records frames rendered in Unity and exports them to a video file, using FFmpeg as the video encoder. An update has added ProRes 4444 and lossless H.264 varieties for perfect colorspace matching with video graphics.

 

The main scope of FFmpegOut is to reduce rendering time when using Unity for pre-rendering. It greatly reduces the amount of file I/O compared to exporting raw image sequences, so it can be an effective solution when file I/O bandwidth is the most significant bottleneck. On the other hand, FFmpegOut is not optimized for real-time capturing, so it is not recommended for interactive applications.
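The bandwidth claim is easy to sanity-check with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions (1080p RGBA frames at 30 fps, a typical H.264 bit rate), not FFmpegOut measurements:

```python
# Rough illustration of why encoding in-process reduces file I/O.
# All figures are illustrative assumptions, not FFmpegOut measurements.

width, height, bytes_per_pixel = 1920, 1080, 4  # 1080p RGBA
fps = 30

# Writing raw frames to disk as an image sequence:
raw_bytes_per_sec = width * height * bytes_per_pixel * fps
print(f"Raw frames:   {raw_bytes_per_sec / 1e6:.0f} MB/s")   # ~249 MB/s

# Piping frames through an encoder instead (20 Mbit/s assumed):
encoded_mbps = 20
encoded_bytes_per_sec = encoded_mbps * 1e6 / 8
print(f"H.264 stream: {encoded_bytes_per_sec / 1e6:.1f} MB/s")  # 2.5 MB/s
```

Roughly two orders of magnitude less data hits the disk, which is why encoding inline pays off whenever storage throughput is the bottleneck.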

Encoding Presets

  • H.264 Default (MP4) – Highly optimized encoder with a moderate quality and a mid-level bit rate. Recommended for general use.
  • H.264 Lossless 420 (MP4) – Not actually lossless but the quality is high enough for most use cases. Recommended for pre-rendering use.
  • H.264 Lossless 444 (MP4) – The highest quality preset. Most software can’t decode videos encoded with this preset (e.g. Premiere crashes when importing them).
  • ProRes 422 (QuickTime) – ProRes is an intra-frame codec that is gradually being phased out but still widely used in video editing. The ProRes encoder in FFmpeg is not aggressively optimized, so it tends to be slower than the other codecs.
  • ProRes 4444 (QuickTime) – The only preset that supports an alpha channel. Use this when you need an alpha channel for compositing in editing software (Premiere, After Effects, etc.).
  • VP8 (WebM) – Very low bit rate encoding, optimized for web browser use.
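For a feel of what each preset maps to on the FFmpeg side, here is a sketch of approximately equivalent ffmpeg command lines. These flags are standard FFmpeg options, but they are my approximation, not FFmpegOut's exact internal option strings:

```python
import shlex

# Approximate ffmpeg flags per preset (illustrative equivalents only,
# not FFmpegOut's exact internal option strings).
PRESETS = {
    "H.264 Default":      ["-c:v", "libx264", "-pix_fmt", "yuv420p"],
    "H.264 Lossless 420": ["-c:v", "libx264", "-qp", "0", "-pix_fmt", "yuv420p"],
    "H.264 Lossless 444": ["-c:v", "libx264", "-qp", "0", "-pix_fmt", "yuv444p"],
    "ProRes 422":         ["-c:v", "prores_ks", "-profile:v", "2"],
    "ProRes 4444":        ["-c:v", "prores_ks", "-profile:v", "4444",
                           "-pix_fmt", "yuva444p10le"],  # the 'a' plane is alpha
    "VP8":                ["-c:v", "libvpx", "-b:v", "1M"],
}

def ffmpeg_command(preset, src="frames_%04d.png", dst="out.mov"):
    """Assemble a full ffmpeg invocation for a given preset."""
    return ["ffmpeg", "-i", src, *PRESETS[preset], dst]

print(shlex.join(ffmpeg_command("ProRes 4444")))
```

Note that only the ProRes 4444 entry carries an alpha-capable pixel format (`yuva444p10le`), matching the list above.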

https://github.com/keijiro/FFmpegOut

HMHS BRITANNIC SINKS – REAL TIME DOCUMENTARY

Animated in Unreal Engine 4, this real-time documentary of the sinking of the HMHS Britannic, sister ship of the Titanic, combines voice-over actors and historian commentary with game-engine animation.

Faceware’s Facial Mocap for iClone

Realtime for iClone is a facial tracking application powered by Faceware Technologies that communicates solely with Reallusion’s iClone 7. Realtime for iClone helps Reallusion empower indie developers and professional studios with tools for real-time facial motion capture and recording. Fast, accurate, and markerless–all from a PC webcam.

Apparently this requires two purchases, in addition to iClone 7 and Character Creator 2:

  • iClone Facial Mocap Plug-in for Faceware ($599)
  • Faceware Realtime for iClone ($599)

Add one more purchase: 3DXchange 7 Pipeline, if you want to export the figures or animations out of the iClone universe.

No one ever claimed iClone or its attractive Character Creator is cheap, but unsurprisingly many software bundles can be purchased through Reallusion for a constantly fluctuating discount:
https://mocap.reallusion.com/iClone-faceware-mocap/default.html

ARKit maps your expressions to Blendshapes

Apple’s ARKit contains instructions for mapping facial expressions to blendshapes when using their face recognition technology on the iPhone X.

A Unity blog post ARKit Face Tracking on iPhone X states that Unity will be releasing a sample scene where ARKit is used to animate a 3D head, although that demo scene is not yet available.

I’ve compiled a list of ARKit’s blendshapes from the Apple Developer website:

Left Eye

  • eyeBlinkLeft
  • eyeLookDownLeft
  • eyeLookInLeft
  • eyeLookOutLeft
  • eyeLookUpLeft
  • eyeSquintLeft
  • eyeWideLeft

Right Eye

  • eyeBlinkRight
  • eyeLookDownRight
  • eyeLookInRight
  • eyeLookOutRight
  • eyeLookUpRight
  • eyeSquintRight
  • eyeWideRight

Mouth and Jaw

  • jawForward
  • jawLeft
  • jawRight
  • jawOpen
  • mouthClose
  • mouthFunnel
  • mouthPucker
  • mouthLeft
  • mouthRight
  • mouthSmileLeft
  • mouthSmileRight
  • mouthFrownLeft
  • mouthFrownRight
  • mouthDimpleLeft
  • mouthDimpleRight
  • mouthStretchLeft
  • mouthStretchRight
  • mouthRollLower
  • mouthRollUpper
  • mouthShrugLower
  • mouthShrugUpper
  • mouthPressLeft
  • mouthPressRight
  • mouthLowerDownLeft
  • mouthLowerDownRight
  • mouthUpperUpLeft
  • mouthUpperUpRight

Eyebrows, Cheeks, and Nose

  • browDownLeft
  • browDownRight
  • browInnerUp
  • browOuterUpLeft
  • browOuterUpRight
  • cheekPuff
  • cheekSquintLeft
  • cheekSquintRight
  • noseSneerLeft
  • noseSneerRight

Apple Developer: Creating Face-Based AR Experiences

Each key in this dictionary (an ARFaceAnchor.BlendShapeLocation constant) represents one of many specific facial features recognized by ARKit. The corresponding value for each key is a floating point number indicating the current position of that feature relative to its neutral configuration, ranging from 0.0 (neutral) to 1.0 (maximum movement).

You can use blend shape coefficients to animate a 2D or 3D character in ways that follow the user’s facial expressions. ARKit provides many blend shape coefficients, resulting in a detailed model of a facial expression; however, you can use as many or as few of the coefficients as you desire to create a visual effect. For example, you might animate a simple cartoon character using only the jawOpen, eyeBlinkLeft, and eyeBlinkRight coefficients. A professional 3D artist could create a detailed character model rigged for realistic animation using a larger set, or the entire set, of coefficients.
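The "simple cartoon character" case above can be sketched in a few lines. This toy rig is a hypothetical illustration (the pixel sizes and the 0.5 blink threshold are made-up values); it only assumes what the documentation states, that each coefficient is a float from 0.0 (neutral) to 1.0 (maximum movement):

```python
def animate_cartoon(blend_shapes):
    """Drive a toy 2D cartoon from three ARKit coefficients.

    `blend_shapes` mimics ARKit's dictionary: blend-shape name -> float
    in [0.0, 1.0], where 0.0 is the neutral pose. The pixel sizes and
    blink threshold are made-up values for a hypothetical sprite rig.
    """
    jaw   = blend_shapes.get("jawOpen", 0.0)
    left  = blend_shapes.get("eyeBlinkLeft", 0.0)
    right = blend_shapes.get("eyeBlinkRight", 0.0)
    return {
        "mouth_open_px": round(40 * jaw),  # mouth opens up to 40 px
        "left_eye_open":  left < 0.5,      # treat blink >= 0.5 as closed
        "right_eye_open": right < 0.5,
    }

pose = animate_cartoon({"jawOpen": 0.5, "eyeBlinkLeft": 0.9,
                        "eyeBlinkRight": 0.1})
# -> {'mouth_open_px': 20, 'left_eye_open': False, 'right_eye_open': True}
```

A detailed rig would do the same thing with more (or all) of the coefficients listed above, each one driving a corresponding blendshape on the character mesh.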

You can also use blend shape coefficients to record a specific facial expression and reuse it later. The ARFaceGeometry init(blendShapes:) initializer creates a detailed 3D mesh from a dictionary equivalent to this property’s value; the serialized form of a blend shapes dictionary is more portable than that of the face mesh those coefficients describe.
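Since a captured expression is just a name-to-coefficient dictionary, the "record and reuse" round trip is trivial to serialize. A minimal sketch (the coefficient values here are made up for illustration):

```python
import json

# A captured expression is just a name -> coefficient dictionary,
# so it serializes trivially. Values below are made up for illustration.
expression = {"jawOpen": 0.35, "mouthSmileLeft": 0.8, "mouthSmileRight": 0.8}

saved = json.dumps(expression)    # tiny, portable string to store on disk
restored = json.loads(saved)      # later: feed back into the face rig
assert restored == expression     # coefficients round-trip exactly
```

This is why Apple notes the dictionary form is more portable than the mesh: a few dozen floats versus thousands of vertices.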