IDEA Workshop on Immersive Media

Part of
On The Lot 2019
Produced by the Advanced Imaging Society


Wednesday, December 18, 2019

1:30pm – 5:00pm
Google, Playa Vista, CA

IDEA is proud to be producing a special seminar on new developments in advanced imaging and the new Immersive Technology Media Format (ITMF) recently introduced by the Alliance. The seminar is part of the annual two-day “On the Lot” conference by the Advanced Imaging Society.

Our media experiences have been evolving from traditional flat images through stereo 3D, 360-degree video, VR, mixed reality, and beyond. In this seminar, the Immersive Digital Experiences Alliance (IDEA) will look into the near-term future of immersive media, including the introduction of light field imaging. You will learn about the technology for capturing and displaying light field images, and about the new Immersive Technology Media Format (ITMF), designed to facilitate interoperability between immersive devices.

You can sign up for just Day 2 (December 18) to participate in the IDEA Seminar, or you can enroll for the full two-day event for an exclusive preview of the technologies and creative opportunities shaping our businesses in 2020.

Register here: https://www.eventbrite.com/e/on-the-lot-2019-tickets-81780813615

PROGRAM – Wednesday, December 18, 2019

1:30          Overview: Immersive Media & Interchange Standards
– Pete Ludé, Chairperson, IDEA

The concept of “immersive media” is deceptively simple: using digital technologies to replicate real-life experiences. But implementing immersive media for storytelling and entertainment requires a rich set of advanced technology tools, along with the media interchange formats and workflows to support them.

2:00          Capturing Light Field Images
– Ryan Damm, Visby

Light field camera arrays make it possible to capture objects (including actors) and environments with a lifelike fidelity exceeding that of today’s volumetric imaging systems. With a true light field, it’s possible to re-focus an image in post-production, or to eliminate the focal plane entirely by presenting the image on a light field display.
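(Illustrative aside, not part of the session materials: one classic way to re-focus from a camera array is synthetic-aperture “shift-and-add,” where each view is translated in proportion to its camera’s baseline and a chosen focal depth, then all views are averaged. The Python/NumPy sketch below uses hypothetical names, integer-pixel shifts, and no calibration, for brevity; a real pipeline would use calibrated geometry and sub-pixel interpolation.)

    import numpy as np

    def refocus(views, alpha):
        # views: list of (image, (dx, dy)) pairs, where (dx, dy) is the
        # camera's offset in the array plane relative to a reference camera.
        # alpha selects the synthetic focal plane: 0 leaves the views
        # unshifted (focus stays wherever the raw views already align);
        # other values pull the focal plane nearer or farther.
        acc = None
        for image, (dx, dy) in views:
            # Shift each view in proportion to its baseline.
            shifted = np.roll(image, (round(alpha * dy), round(alpha * dx)),
                              axis=(0, 1))
            acc = shifted.astype(np.float64) if acc is None else acc + shifted
        # Points on the chosen focal plane align across views and stay sharp;
        # everything else averages out into defocus blur.
        return acc / len(views)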

2:30          The Media- and Application-Aware Network
– DJ Lal, PhD, and Dell Wolfensparger, Charter Communications

Future media, delivered in new “3D-native” formats to head-mounted and holographic displays, will require commercial networks to combine high bandwidth, low-latency server placement, and specific types of storage and compute resources, allocated based on an awareness of the media and of the Quality of Experience expected by the end user. This session will explain the motivation for and benefits of this architecture. An exploratory, interactive (near real-time) experience with a ray-traced cinematic render will also be shown.

3:00          Break

3:15          KEYNOTE: Welcome to Light Fields
– Paul Debevec, Senior Scientist, Google

Paul Debevec will discuss the technology and production processes behind “Welcome to Light Fields,” the first downloadable virtual reality experience based on light field capture techniques, which allow the visual appearance of an explorable volume of space to be recorded and reprojected photorealistically in VR, enabling full 6DOF head movement. The light field technique differs from conventional approaches such as 3D modeling and photogrammetry; Debevec will discuss the theory and application of the technique.

Debevec will also discuss the Light Stage computational illumination and facial scanning systems, which use geodesic spheres of inward-pointing LED lights. These systems have been used to create digital actor effects in movies such as Avatar, The Curious Case of Benjamin Button, and Gravity, and more recently to create photoreal digital actors based on real people in movies such as Furious 7, Blade Runner 2049, and Ready Player One. The lighting reproduction process of light stages allows omnidirectional lighting environments captured from the real world to be accurately reproduced in a studio, and has recently been extended with multispectral capabilities that enable LED lighting to accurately mimic the color rendition properties of daylight, incandescent, and mixed lighting environments.

The team has also used its full-body light stage, in conjunction with natural language processing and automultiscopic video projection, to record and project interactive conversations with survivors of the World War II Holocaust.
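(Background note, not part of the session abstract: the lighting reproduction described above rests on the linearity of light transport, a standard result from Debevec’s reflectance-field work. If B_i is an image of the subject lit by the i-th LED alone, the subject’s appearance under an arbitrary environment is

    I = \sum_{i=1}^{N} w_i B_i,

where each weight w_i is the color and intensity of the captured HDR environment sampled in the direction of LED i.)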

BIO
Paul is a Senior Staff Engineer at Google on the Daydream team, and Adjunct Research Professor of Computer Science in the Viterbi School of Engineering at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies.

Debevec’s computer graphics research has been recognized with ACM SIGGRAPH’s first Significant New Researcher Award in 2001 for “Creative and Innovative Work in the Field of Image-Based Modeling and Rendering”; a Scientific and Engineering Academy Award in 2010, with Tim Hawkins, John Monos, and Mark Sagar, for “the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures”; and the SMPTE Progress Medal in 2017 in recognition of his achievements and ongoing work in pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination, and their effective commercial application in numerous Hollywood films. In 2014, he was profiled in The New Yorker magazine’s article “Pixel Perfect: The Scientist Behind the Digital Cloning of Actors” by Margaret Talbot. In 2019, Paul received a second Academy Award for Scientific and Technical Achievement for the invention of the Polarized Spherical Gradient Illumination facial appearance capture method, along with Xueming Yu, Wan-Chun Alex Ma, and Timothy Hawkins.

3:45          Introducing the Immersive Technology Media Format (ITMF)
– Arianne Hinds, PhD, Architect, Otoy, Inc.

In October 2019, the Immersive Digital Experiences Alliance published the first public draft of the Immersive Technology Media Format (ITMF). The ITMF consists of a suite of specifications describing a new media format and asset container for representing ray-traceable media, for both existing and envisioned applications, including gaming, VR, AR, and emerging holographic displays. A unique aspect of ITMF is that it is display-agnostic, meaning it can serve as the basis for production, mezzanine, and distribution formats for a variety of displays, not limited to legacy 2D screen formats. This presentation will share the motivation for choosing the ITMF as a starting point for IDEA’s work, and elaborate on the roadmap for where IDEA will take ITMF in the near future.

4:15          Advances in Display Technology         
– Jon Karafin, Light Field Lab

New display technology can now reproduce an entire light field – that is, present light rays to the viewer in a manner that accurately reproduces real-life imagery (within defined bounds). In this session, you’ll learn how light field displays make this possible, and how they differ from other volumetric and so-called holographic displays.
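(Background note, not from the session abstract: a light field display approximates the plenoptic function, which assigns a radiance to every ray of light. Over the surface of a display it is commonly parameterized by position and direction,

    L = L(x, y, \theta, \phi),

so that viewers at different positions intercept different rays and perceive accurate parallax and depth cues without glasses or headgear.)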

4:45          Wrap up

Book Your Pass

Save the dates – December 17 and 18 – and register today! Seating is strictly limited.

Through the generosity of our sponsors, registration is just $59.00 for one day or $99.00 for both days. More information will follow soon, but to guarantee your place, register today!

Register here: https://www.eventbrite.com/e/on-the-lot-2019-tickets-81780813615