6DOF VR Video from VR180 Cameras

Practical 6DOF VR video on Quest 2 / WebVR

Lifecast Incorporated
Feb 5, 2022
A fisheye image and its depth map overlaid.

6DOF is part of the evolution of VR video. At Lifecast, we are making it possible to create 6DOF VR video with any VR180 camera. The videos can be watched in 6DOF on a Quest 2 using WebVR. In this post we explain the basic ideas and show some examples.

What is 6DOF?

6DOF means 6 degrees of freedom. Current-generation 3D VR video formats like VR180 and omnidirectional stereo (aka stereo 360) are 3DOF (3 degrees of freedom): the view responds when the user rotates their head, but not when they move it side to side. This can cause motion sickness, because the user’s inner ear perceives motion that their eyes don’t see. With 6DOF, the view responds correctly to all motion because the player is actually rendering a proper 3D scene, not just re-projecting the original images, which makes it more immersive and comfortable. 3DOF formats also produce incorrect stereoscopic views in VR when the user rolls their head or looks up or down from the horizon, which can cause eye strain. 6DOF fixes these issues as well, but introduces new problems.
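To make the distinction concrete, here is a minimal TypeScript sketch (our illustration, not Lifecast player code; the names are hypothetical) of how a player turns the headset pose into a virtual camera: a 3DOF player consumes only the head rotation, while a 6DOF player also applies the head translation.

```typescript
// Hypothetical sketch: how a video player responds to the headset pose.
// A 6DOF-capable headset reports both rotation and translation each frame.
interface HeadPose {
  rotation: [number, number, number, number]; // orientation quaternion (x, y, z, w)
  position: [number, number, number];         // meters of head translation
}

// 3DOF playback (VR180 / stereo 360): only rotation drives the view.
// Translating your head changes nothing on screen, which is the
// inner-ear / eye mismatch described above.
function cameraPose3DOF(pose: HeadPose): HeadPose {
  return { rotation: pose.rotation, position: [0, 0, 0] };
}

// 6DOF playback: rotation and translation both drive the view,
// because the player is rendering a real 3D scene.
function cameraPose6DOF(pose: HeadPose): HeadPose {
  return { rotation: pose.rotation, position: pose.position };
}
```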

Creating and watching 6DOF is hard

Making 6DOF VR videos is harder than 3DOF because it requires something like a depth-map, photogrammetry, volumetric, or light-field reconstruction of every frame of video. There is a lot of research on this topic in academia and big tech, but few practical solutions are available to VR filmmakers and viewers. Some 6DOF experiments require expensive camera rigs (e.g., 20 or more cameras) and vast computing resources to process the video. Even after a 6DOF video is processed, it might only be watchable on a PC with a high-end GPU, while most of us have mobile VR headsets.

At Lifecast, we are making more practical tools for 6DOF VR video. For creators, we make software that converts any VR180 video to 6DOF, so you can use the VR180 camera you already have. For viewers, we have a 6DOF video player that runs on the Oculus Quest 2 in the Quest Browser, or in any other VR-enabled browser or device.

Balancing tradeoffs for VR video

What’s the best quality 3D we can deliver while still being able to run it on a Quest 2, and capture it with a reasonable camera? Is it possible to edit this new kind of video with existing tools? Do the images look sharp in VR, and can the video play over the internet without annoying buffering? We developed a new 6DOF VR video (and photo) format which optimizes for all of these considerations.

180 vs 360

We are focusing on a 180-degree field of view, and making 6DOF with existing VR180 cameras. Going this route instead of 360-degree 6DOF is a tradeoff, but we think it’s a good one: better image quality in VR, faster processing, less buffering, and compatibility with a wide variety of VR180 cameras. We are seeing particularly good results with footage from the new Canon R5 camera with a dual-fisheye lens.

Better occlusions in 6DOF with 2 layers

Some existing 6DOF content is similar to a 360 photo in equirectangular projection (a way of wrapping a rectangular image around a sphere), but with an added depth map. This is a step in the right direction, but on its own this approach makes it difficult to avoid visual artifacts like streaky lines at the edges of objects in VR. We experimented with this for a while and figured out how to make the streaky lines go away, but that revealed a new problem: in 6DOF, the user can move to look behind objects, into regions the camera never saw (an “occlusion”). Filling in this missing data is hard, but people expect it as soon as they get the freedom of 6DOF. At first, we tried to fill in the occlusions in realtime in the video player (here’s an example of one of our early films which does this).
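For intuition, here is a rough TypeScript sketch (our own illustration; `equirectToPoint` is a hypothetical helper) of the equirectangular-plus-depth idea: each pixel is pushed out along its viewing ray by the stored depth. When neighboring pixels sit on opposite sides of a depth edge, the triangles connecting them stretch across a large 3D distance, which is exactly the streaky-line artifact at object boundaries.

```typescript
// Sketch: unproject an equirectangular pixel plus its depth into a 3D point.
// u, v are normalized image coordinates in [0, 1]; depth is in meters.
function equirectToPoint(u: number, v: number, depth: number): [number, number, number] {
  const lon = (u - 0.5) * 2.0 * Math.PI; // longitude: -pi .. +pi
  const lat = (0.5 - v) * Math.PI;       // latitude:  -pi/2 .. +pi/2
  // Unit direction on the sphere, scaled by the stored depth.
  const x = Math.cos(lat) * Math.sin(lon);
  const y = Math.sin(lat);
  const z = -Math.cos(lat) * Math.cos(lon);
  return [x * depth, y * depth, z * depth];
}

// If a mesh connects neighboring pixels, a triangle whose vertices straddle
// a depth edge (say 1 m vs 10 m) spans a huge distance in 3D -- that
// stretched triangle is what shows up as a streaky line at object edges.
```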

Recently we made a step forward in visual quality for occlusions with a new 180-degree 6DOF format with 2 layers, one for the foreground and one for the background. The background layer is pre-rendered and stored in the video rather than computing it in realtime in the player, which means we can use more advanced algorithms or compositing tricks to produce a better result, and it still runs on a Quest 2 in WebVR.
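Conceptually, the player just draws the background layer first and the foreground layer in front of it, so wherever the foreground is carved away, the pre-rendered background shows through. Here is a hedged three.js-flavored sketch of that idea (Lifecast’s actual player and shaders may differ; `buildLayerGeometry` and the video URL are placeholders):

```typescript
import * as THREE from 'three';

// Hypothetical helper: a real implementation would build a 180-degree mesh
// whose vertices are displaced by the layer's depth quadrant, and whose UVs
// address that layer's color quadrant in the packed frame (see the layout
// sketch below). Stubbed with plain hemispheres so the sketch stays short.
function buildLayerGeometry(layer: 'foreground' | 'background'): THREE.BufferGeometry {
  const radius = layer === 'foreground' ? 1.0 : 1.05;
  return new THREE.SphereGeometry(radius, 64, 32, 0, Math.PI);
}

const video = document.createElement('video');
video.src = 'example_2layer_6dof.mp4'; // placeholder URL
const videoTexture = new THREE.VideoTexture(video);

const scene = new THREE.Scene();

// Background layer: drawn first, fills in the regions the foreground hides.
const background = new THREE.Mesh(
  buildLayerGeometry('background'),
  new THREE.MeshBasicMaterial({ map: videoTexture })
);
background.renderOrder = 0;

// Foreground layer: drawn second and sitting closer to the viewer,
// so it occludes the background except where it has been cut away.
const foreground = new THREE.Mesh(
  buildLayerGeometry('foreground'),
  new THREE.MeshBasicMaterial({ map: videoTexture })
);
foreground.renderOrder = 1;

scene.add(background, foreground);
```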

A synthetic 2-layer scene

Here’s an example of a synthetic rendered 6DOF scene: visit this link in the Quest 2 browser to see it in VR. This is what the 2-layer encoding looks like:

The circles cover a 180 degree field of view. The top half stores the foreground layer, and the bottom half stores the background layer. The left side is the color, and the right side is the depth map. Notice that in the background layer, the 3 spheres have been hidden, but their shadows remain. We don’t need to hide the walls because there is no way to look behind them.
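To make this layout concrete, here is a small TypeScript sketch of how a decoder could address the packed frame. The quadrant arrangement follows the description above; the equidistant fisheye model for the 180-degree circles is an assumption of this sketch, not a published spec, and the helper names are ours.

```typescript
// Sketch of addressing the packed 2-layer frame described above.
type Layer = 'foreground' | 'background';
type Channel = 'color' | 'depth';

// UV sub-rectangle [uMin, vMin, uMax, vMax] of one quadrant,
// with v = 0 at the top of the frame.
function quadrantUV(layer: Layer, channel: Channel): [number, number, number, number] {
  const uMin = channel === 'color' ? 0.0 : 0.5;    // left half = color, right half = depth
  const vMin = layer === 'foreground' ? 0.0 : 0.5; // top half = foreground, bottom = background
  return [uMin, vMin, uMin + 0.5, vMin + 0.5];
}

// Unproject a pixel inside one 180-degree circle given its decoded depth.
// x, y are coordinates in [-1, 1] within the circle; an equidistant fisheye
// mapping is assumed here for illustration.
function fisheyeToPoint(x: number, y: number, depth: number): [number, number, number] | null {
  const r = Math.hypot(x, y);
  if (r > 1.0) return null;           // outside the 180-degree circle
  const theta = r * (Math.PI / 2);    // angle away from the optical axis
  const phi = Math.atan2(y, x);
  const dir: [number, number, number] = [
    Math.sin(theta) * Math.cos(phi),
    Math.sin(theta) * Math.sin(phi),
    -Math.cos(theta),                 // optical axis along -Z
  ];
  return [dir[0] * depth, dir[1] * depth, dir[2] * depth];
}
```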

For this synthetic scene, we know the depth map perfectly, and we cheated to make the background layer by hiding the 3 spheres. But what about images from the real world?

2-layer 6DOF from real-world images

Our software uses neural nets and image processing to figure out the depth of the foreground and background, and to paint in the holes in 3D. Here’s an example you can view in VR, shot on a Canon R5 (photo by Thomas Hübner). Notice that you can look behind the lantern chain; behind it, the second layer fills in the occlusion. Here’s what the 6DOF 2-layer representation looks like:

This software to convert VR180 to 6DOF is available now at lifecastvr.com. We’re excited to see what you make!

Connect with the Lifecast community on Facebook

About Lifecast

Lifecast was founded in 2021 by Forrest Briggs and Mateusz Berezecki. Forrest has a Ph.D. in machine learning, and previously worked on 3D VR cameras at Facebook, and perception systems for self-driving cars and robots at Lyft and Google X. Mateusz was an engineer at Facebook for over 7 years and worked on the 3D VR camera team, where he went on-set with VR film crews.
