Jon Ingold – Sparkling Dialogue: A Masterclass

Game conversations are all too often static and stagey: there’s none of the messy, fun and characterful back and forth that makes a conversation sparkle. Using Ink and assisted by Sally Beaumont, Jon Ingold (Heaven’s Vault, 80 Days) will create conversations that are dynamic, contextual and full of moments of connection.

A playable version of Jon’s Blade Runner scene is available here:
https://assets.inklestudios.com/file/…

And the source code is here:
https://assets.inklestudios.com/file/…

AdventureX is the only convention dedicated to narrative-driven games.

Unity’s “plan of intent” Roadmap for Animated Storytelling

According to Highlights from Unity’s Film and Animation Summit at Unite LA, Mathieu Muller, Sr. Technical Product Manager for Film and TV, shared Unity’s product roadmap and plan of intent for animated storytelling. This talk is suspiciously missing from YouTube.

However, many interesting features were teased during the general Product Roadmap, including video streaming and genlock coming in Unity 2019.

The full roadmap presentation is below.

Design Constraints in Narrative Exploration Games

Nels Anderson is a game designer at Campo Santo. In this talk, Nels covers some of the hurdles the team had to overcome in creating Firewatch. Firewatch follows Henry, a fire lookout in Shoshone National Forest in 1989, the year after the Yellowstone fires of 1988. A month into his new job, strange things begin happening to both him and his supervisor, Delilah, which connect to a mystery from years before.

Preliminary Poetics of Procedural Generation

“Procedural generation has a long history in games and art, but we currently lack an effective vocabulary for talking about what it means when we use different kinds of procedural generation.”

Isaac Karth’s talk proposes a vocabulary for discussing procedural generators along four axes: Complexity, Form, Locus, and Variation. He also applies the aesthetic categories of Apollonian and Dionysian order.

Full post and transcript at the Procedural Generation Tumblr.

Automated Staging for Virtual Cinematography

“While the topic of virtual cinematography has essentially focused on the problem of computing the best viewpoint in a virtual environment given a number of objects placed beforehand, the question of how to place the objects in the environment with relation to the camera (referred to as staging in the film industry) has received little attention. This paper first proposes a staging language for both characters and cameras that extends existing cinematography languages with multiple cameras and character staging. Second, the paper proposes techniques to operationalize and solve staging specifications given a 3D virtual environment. The novelty holds in the idea of exploring how to position the characters and the cameras simultaneously while maintaining a number of spatial relationships specific to cinematography. We demonstrate the relevance of our approach through a number of simple and complex examples.”

Associated paper: https://hal.inria.fr/hal-01883808v1

Unity CineCast live demo

“CineCast allows you to direct the story.”

Adam Myhill and professional gamer Stephanie Harvey gave a sneak peek of CineCast, an AI camera tool under development at Unity and the French AI institute INRIA (Institut National de Recherche en Informatique et en Automatique).

After a confusing live demo of a multi-player assault game, Myhill and Harvey explained details of the new technology, which uses a story manager AI to selectively “live edit” various CineMachine cameras into a single visual feed – effectively an automated broadcast video-editing truck that “anticipates” the best angles.

Harvey: “It’s like having a film production crew inside the game, working for me to get the best shots. I went fully-automatic a couple of times, but I was also able to focus on specific shots that I wanted.”

Myhill: “We delay CineCast by three seconds, but don’t delay CineMachine, so the cameras can see into the future by 3 seconds, and we can cut and show things just before they happen.”
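The trick Myhill describes can be sketched as a delay buffer: frames leave the broadcast three seconds after they enter, so an event the story manager observes “now” is still in the future of the outgoing stream, and a cut can be scheduled just before it airs. The sketch below is purely illustrative; the class and method names are hypothetical and do not reflect Unity’s CineCast or Cinemachine APIs.

```python
from collections import deque

DELAY = 3.0  # broadcast delay in seconds (the figure Myhill quotes)

class LookaheadEditor:
    """Toy model of delayed-feed editing: frames enter in real time but
    are emitted DELAY seconds later, leaving a window in which cuts can
    be scheduled ahead of events that have not yet aired."""

    def __init__(self):
        self.pending = deque()  # (timestamp, camera_id), in arrival order
        self.cuts = []          # scheduled (cut_time, camera_id)

    def ingest(self, t, camera_id):
        """A frame from the currently selected camera at game time t."""
        self.pending.append((t, camera_id))

    def notice_event(self, t, best_camera, lead=0.5):
        """The story manager spots an interesting event at time t and
        schedules a cut slightly *before* it, inside the delay window."""
        self.cuts.append((t - lead, best_camera))

    def emit(self, now):
        """Release every frame older than DELAY, applying scheduled cuts."""
        out = []
        while self.pending and now - self.pending[0][0] >= DELAY:
            t, cam = self.pending.popleft()
            for cut_t, cut_cam in self.cuts:
                if cut_t <= t:
                    cam = cut_cam  # most recent applicable cut wins
            out.append((t, cam))
        return out

editor = LookaheadEditor()
for i in range(6):                    # frames at t = 0..5 from a wide shot
    editor.ingest(float(i), "wide")
editor.notice_event(4.0, "hero_cam")  # event seen at t=4; cut lands at t=3.5
frames = editor.emit(now=8.0)         # frames through t=5 have cleared the delay
# frames at t < 3.5 stay on "wide"; from t=4 onward they air on "hero_cam"
```

Because the cut is registered before the affected frames are released, the output appears to anticipate the action, which is exactly the “seeing into the future” effect Myhill describes.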