General Questions
What is IDEA?
IDEA is the Immersive Digital Experiences Alliance – a group of like-minded technology, infrastructure and creative innovators working to facilitate the development of an end-to-end ecosystem for the capture, distribution, and display of immersive media.
Why is IDEA being created?
IDEA was formed to create a suite of specifications and tools to facilitate the interchange of next generation immersive media. We believe such specifications need to be:
- Royalty-free and Open Source
- Built on established technologies already embraced by content creators for representing complex immersive images and environments
- Not constrained by legacy raster-based approaches
- Extensible – in order to allow continued improvements and advancements
- Based on real-life requirements and priorities across the entire ecosystem, including content creators, technology providers and network operators. No one group has all the answers.
How is IDEA’s vision different from that of other standards currently in use or under development?
Currently, there is no existing standard or formal specification that addresses the wide variety of display formats available now and in the near future — from XR headsets to advanced light field panels. This is the gap that IDEA intends to fill with its media format. IDEA’s specifications will also provide interfaces to proven content creation technologies, including 3D modelling tools. Further, IDEA’s specifications will support immersive media distribution over commercial networks, which currently is not supported by legacy standards or specifications.
Will IDEA be developing formal accredited standards?
No. We acknowledge and appreciate the important work done by accredited standards organizations, but the specifications developed by IDEA will not (at least initially) be accredited by formal standards bodies. Rather, IDEA will develop interoperable industry specifications that meet the immediate needs of the rapidly growing immersive ecosystem. Where appropriate, IDEA may contribute its specifications to formal standards organizations. IDEA will focus on:
- Gathering the marketplace and technical requirements needed to define and support immersive media.
- Identifying where interchange, mezzanine, streaming, archive or other formats may be needed in support of immersive media applications, and drafting these specifications.
- Facilitating interoperability testing and demonstrations, in order to gain feedback from all segments of the ecosystem.
- Producing educational events and materials to help the community understand the opportunities and challenges of next-generation immersive media, and to exchange ideas.
- Providing a unique forum for exchanging information, one where filmmakers can work with equipment manufacturers, network operators, technologists and visionaries to influence the best course for adoption of immersive media.
What is “Immersive Media”?
Ultimately, it’s the Holodeck. That is, a media experience that is not constrained by a fixed image size, restricted to one viewing position, or encumbered by distracting limitations in image fidelity.
Is Immersive Media created only by computer generated graphics?
Not at all. IDEA’s plans anticipate wide adoption of photographically-captured images, objects and environments. These real-life photographic captures may be stand-alone, such as for a sporting event or concert, or may be seamlessly combined with certain computer generated image elements.
How are immersive scenes photographically captured?
Light field cameras or camera arrays can be used for live-action immersive cinematography. Well-established photogrammetry techniques can be used to capture still images, such as environments, rooms or background plates. And light stages can be used to capture a volumetric image file of a person or object. IDEA will be working with technologists and content creators to identify the tools and workflows appropriate for each.
This sounds futuristic. Does technology exist today that supports this type of Immersive Media?
Almost. Virtual and Augmented reality head-mounted displays provide some of the features, but are limited. New multi-focal displays can provide much more realistic stereoscopic (3D) imaging by allowing your eyes to focus near or far. And new light-field displays—also known as holographic displays — are now being developed in research labs around the globe. IDEA intends to complete its work on immersive media standards in time for the commercial launch of advanced displays, networks and renderers that will enable the user experience.
Technical Questions
So is IDEA developing image format specifications for these Light Field Displays?
Yes — the IDEA format specifications are intended to support light field displays, as the “highest common denominator” of the immersive experience. But we recognize that such displays are a few years away from hitting the market. Therefore the specifications will also provide near-term benefits on today’s display technology, including VR headsets and stereoscopic displays. Our key principle is that the IDEA interchange formats are display agnostic.
What do you mean by Display Agnostic?
The ITMF (Immersive Technology Media Format) and future IDEA specifications will represent the image as a full three-dimensional environment, including complex geometry, textures, multiple focal points and viewpoint-dependent lighting and texture. This is all necessary for support of light field displays. But a subset of this data will provide the best possible experience on a VR headset (including six degrees of freedom), and on other displays — such as AR, stereoscopic and even traditional televisions and mobile devices. The ITMF stream will be rendered via a “smart network” into a format appropriate to the given device.
Display agnostic also means that the format is open to all manufacturers. The same ITMF format is intended to support displays using different technologies, and from different manufacturers, in order to avoid incompatibility issues.
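The display-agnostic idea above can be sketched in a few lines of code. This is a purely illustrative model, assuming a master scene description from which each display class consumes only the subset it can use; none of these names or profiles come from an IDEA specification.

```python
# Hypothetical sketch of display-agnostic delivery: one master scene
# description, and each display class renders only the subset it needs.
# All names and profiles here are illustrative, not from any IDEA spec.

FULL_SCENE = {
    "geometry": True,                 # complex 3D geometry
    "textures": True,
    "view_dependent_shading": True,   # viewpoint-dependent lighting/texture
    "angular_samples": True,          # multiple focal points / angular data
}

# Which parts of the master scene each display class consumes (illustrative).
DISPLAY_PROFILES = {
    "light_field":     {"geometry", "textures", "view_dependent_shading", "angular_samples"},
    "vr_headset_6dof": {"geometry", "textures", "view_dependent_shading"},
    "stereoscopic_tv": {"geometry", "textures"},
}

def subset_for(display: str) -> dict:
    """Return the portion of the master scene a given display would render."""
    wanted = DISPLAY_PROFILES[display]
    return {k: v for k, v in FULL_SCENE.items() if k in wanted}
```

The point of the sketch is that the light field display is the "highest common denominator": it consumes everything, while lesser displays simply ignore the data they cannot use.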
What is a light field display?
The light field describes the amount of energy, in visible wavelengths, traveling through every point and location in space. We see objects not simply because they exist, but because of their surface properties (e.g., a shiny or matte finish) and the way light reflects off them and reaches our eyes. Surfaces reflect light in countless directions, not just toward your current viewpoint. This is why, in the light, you can see around objects and perceive effects such as transparency and reflection; and why, in the dark, objects are still there even though you cannot see them. A Light Field Video Display (LFVD) projects objects that are indistinguishable from the real thing.
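In the research literature, this "energy through every point in every direction" is often formalized as the plenoptic function — a standard formulation from light field research, not an IDEA-specific definition:

```latex
L(x,\, y,\, z,\, \theta,\, \phi)
```

Here $(x, y, z)$ is a point in space and $(\theta, \phi)$ the direction of travel of a ray through that point; time and wavelength can be added as further dimensions.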
A true LFVD must provide sufficient density to stimulate the eye into perceiving a real-world scene, providing for:
- Binocular disparity without external accessories, head-mounted eyewear, or other peripherals;
- Accurate motion parallax, occlusion and opacity throughout a determined viewing volume simultaneously for any number of viewers;
- Visual focus through synchronous vergence, accommodation and miosis of the eye for all perceived rays of light; and
- Converging energy wave propagation of sufficient density and resolution to exceed the visual acuity of the eye.
What is the difference between a light field and a volumetric display?
A Light Field display converges radial bundles of light in free-space from the display surface to create a holographic object. The converged bundles must incorporate multiple angular (θ, φ) color and intensity values for each single surface coordinate or feature. This allows for the projection of scene information that typically changes depending on the viewing angle and location, just like the real world. This (θ, φ) independence provides for the things that make light fields truly life-like, including reflections, refractions, etc. This is the core of a light field and the element that allows it to define how rays of light travel through space.
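The per-point angular dependence described above can be sketched as a data structure. This is an illustrative toy, not an IDEA or ITMF format: each surface coordinate holds multiple (θ, φ)-indexed colour/intensity samples, so the value seen depends on the viewing direction, unlike a 2D pixel's single value.

```python
# Illustrative sketch (not an IDEA data format): one holographic surface
# point carries many (theta, phi)-indexed colour/intensity samples, so what
# is seen depends on the viewing direction.

# key = (theta, phi) viewing angles in degrees; value = (r, g, b, intensity).
surface_point = {
    (0.0, 0.0):   (200, 30, 30, 1.0),   # seen head-on
    (15.0, 0.0):  (220, 90, 40, 0.8),   # a specular highlight appears
    (30.0, 10.0): (180, 25, 25, 0.6),   # highlight gone at this angle
}

def sample(point: dict, theta: float, phi: float):
    """Return the nearest stored angular sample for a viewing direction."""
    nearest = min(point, key=lambda a: (a[0] - theta) ** 2 + (a[1] - phi) ** 2)
    return point[nearest]
```

A conventional display would store only one value per pixel; the angular index is what lets reflections and refractions shift as the viewer moves.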
A volumetric display is similar to a traditional 2D display, but has the ability to show 2D pixels within a volume, similar in concept to a point cloud. There are many ways this is achieved, including time-sequential volumetric slices, laser ionization, or high-speed rotating elements. However, none of these approaches exhibits holographic attributes: they are limited to a small volume, are often dangerous to operate, produce transparent imagery, are constrained in resolution and refresh rate, and cannot handle occlusion.
Synthetic 3D and volumetric images are now commonplace in computer games, movies and many other applications. How is what you’re doing different?
Most applications supporting 3D views are not holographic and require a specific display (e.g., stereoscopic or VR/AR) that relies on a 2D right eye / left eye view to create the illusion of depth. Sometimes these displays are augmented with eye tracking or movement tracking in hopes of presenting images appropriate to the viewing angle for the application. However, these approaches suffer from limited resolution, a narrow field of view, poor optical quality, a lack of opacity handling (for AR), problematic motion latency, and the inability of the eye to truly focus freely about the volume.
A light field creates the full spray of light within the viewing volume that allows the eye to focus on the objects presented. No gear is required. With a light field display you can see behind objects when you move your head. Parallax is maintained, reflections and refractions behave correctly, and the brain concludes that objects are “real.” To achieve this holographic realism, each pixel must contain scene information that changes depending on the viewing angle and location— just like the real world.
What specifications will IDEA be working on?
We plan to start with an interchange format that will enable high-quality conveyance of complex image scenes to an immersive display, including six degrees-of-freedom (6DoF) for viewing. This will be called the Immersive Technology Media Format (ITMF). ITMF will be based on a scene graph, which is a well-established data structure used in advanced computer animation and computer games. Other specifications may include network streaming for ITMF and transcoding ITMF for specific displays, archiving and/or applications.
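"Six degrees of freedom" means the viewer can move along three positional axes and rotate about three more. A minimal sketch of such a viewing pose, with illustrative field names (nothing here is drawn from the ITMF specification):

```python
# Minimal sketch of a six-degrees-of-freedom (6DoF) viewing pose: three
# translational axes plus three rotational axes. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    x: float = 0.0      # left/right translation (metres)
    y: float = 0.0      # up/down translation
    z: float = 0.0      # forward/back translation
    yaw: float = 0.0    # turning the head left/right (degrees)
    pitch: float = 0.0  # looking up/down (degrees)
    roll: float = 0.0   # tilting the head (degrees)

# A viewer stepping half a metre forward and glancing 20 degrees to the left:
viewer = Pose6DoF(z=0.5, yaw=-20.0)
```

A 3DoF system (such as basic 360 video) tracks only the three rotations; 6DoF additionally lets the viewer translate through the scene, which is why the format must carry full scene geometry rather than a single panorama.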
Will these Immersive Experiences be distributed to the home? How?
Lower quality experiences such as 360 video are already being delivered to the home from the network. However, in order to stream fully immersive experiences, we envision that the network itself can play a role in processing and computation. With the availability of light-field displays in the future, the network can be prepared to deliver delightful immersive experiences across a multitude of displays.
IDEA will not only develop specifications for the Immersive Technology Media Format, but it will also develop specifications to distribute ITMF over commercial networks, leveraging state-of-the-art IP networks (e.g., Cable 10G, WiFi 6, and 5G) for their speed, low latency, and in-network compute functionalities.
What is a Scene Graph, and why is it helpful?
Scene graphs are used to structure a collection of nodes in a hierarchical “tree”. The node data can be based on vector graphics, point clouds, voxel maps or many other input sources. Live photographic images can be mapped into a scene graph as well, using techniques originally drawn from photogrammetry and lumigraphs. We know this sounds rather complicated, which is why IDEA will be producing educational seminars to provide background and information about these established techniques, and about how the ITMF format can leverage current practices.
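A toy example makes the idea concrete. This is a generic scene graph sketch, not the ORBX or ITMF node model; the node names and payload labels are invented for illustration.

```python
# A toy scene graph (illustrative only, not the ORBX/ITMF node model):
# nodes form a hierarchical tree, and each node's payload could come from
# meshes, point clouds, voxel maps, or photographic captures.
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    name: str
    payload: str = ""                       # e.g. "mesh", "point_cloud"
    children: list = field(default_factory=list)

    def add(self, child: "SceneNode") -> "SceneNode":
        self.children.append(child)
        return child

    def walk(self, depth: int = 0):
        """Yield (depth, name) pairs in depth-first order, as a renderer might."""
        yield depth, self.name
        for c in self.children:
            yield from c.walk(depth + 1)

root = SceneNode("scene")
room = root.add(SceneNode("room", "photogrammetry_mesh"))
room.add(SceneNode("table", "mesh"))
room.add(SceneNode("actor", "volumetric_capture"))
```

Because the tree mixes payload types freely, a photographically captured room and a computer-generated prop can sit side by side under one parent, which is the property that makes scene graphs attractive for immersive interchange.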
How is audio being handled?
Sound is an extremely important part of the immersive experience. IDEA will leverage existing standards to represent a multi-dimensional soundfield associated with the image space, using technologies such as higher-order ambisonics. This will be accompanied by object-based audio files with associated metadata, which will allow the positioning of selected sounds in relation to the viewing position. The ITMF framework includes the necessary position data to render this combination of ambisonic and object audio files to reinforce the immersive experience.
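The object-audio half of that combination can be sketched simply: each sound object carries a position, and the renderer re-evaluates it against the current viewing position. The function names and the simple inverse-distance gain below are illustrative assumptions, not part of ITMF or any ambisonics standard.

```python
# Hedged sketch of the object-audio idea: sound objects carry positions that
# are re-evaluated against the viewer. The inverse-distance gain model is an
# illustrative simplification, not drawn from ITMF.
from math import sqrt

def relative_position(obj_pos, listener_pos):
    """Vector from the listener to a sound object, used to pan the source."""
    return tuple(o - l for o, l in zip(obj_pos, listener_pos))

def distance_gain(obj_pos, listener_pos, ref=1.0):
    """Simple 1/distance attenuation, clamped at the reference distance."""
    d = sqrt(sum(c * c for c in relative_position(obj_pos, listener_pos)))
    return min(1.0, ref / d) if d > 0 else 1.0

# A sound object 3 m ahead and 4 m to the right of a viewer at the origin:
rel = relative_position((4.0, 0.0, 3.0), (0.0, 0.0, 0.0))   # (4.0, 0.0, 3.0)
gain = distance_gain((4.0, 0.0, 3.0), (0.0, 0.0, 0.0))      # 1/5 = 0.2
```

The ambisonic bed, by contrast, encodes the whole surrounding soundfield at once and needs only to be rotated with the viewer's head, which is why the two techniques complement each other.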
How will IDEA promote established technologies already embraced by content creators?
The ITMF specification is based on the ORBX format, which was launched five years ago by Otoy, is now supported in over a dozen software systems used in 3D animation, and has already been used in many productions, including major Hollywood films. Starting with the ORBX scene graph format, IDEA will provide extensions that expand the capabilities of ORBX for light field photographic camera arrays, live events and other applications. IDEA will preserve backwards compatibility with the existing ORBX format.
How will these requirements specifications be created?
IDEA is off to a good start with the contributions from Otoy on the ORBX format, and inputs from other members. IDEA is forming Working Groups to focus on specific deliverables. The Working Groups will consist of all IDEA members with expertise and interest in the particular area.