Virtual reality makes it possible for you to explore new worlds, visit faraway places and famous museums, or even sit in the front row of a concert or sporting event in another country. But achieving high-quality, stereoscopic 360 video capture for virtual reality has required complex, custom high-end camera systems and hours of manual work in post-production.
In 2015, we introduced Jump, Google’s platform for VR video capture, to empower a wide range of creators to make great VR videos. Today, we’re publishing “Jump: Virtual Reality Video”, a research paper (with many authors!) that shares what we’ve learned. We’ll also present it at SIGGRAPH Asia in December.
With Jump, we built an omnidirectional stereo (ODS) video system. ODS provides a seamless projection that is both panoramic (360) and stereoscopic (3D), allowing the viewer to look in any direction. ODS can be stored in the same format as traditional video, making it ideal for post-production, streaming, and playback on mobile devices. Although the ODS projection model has been around for some time, producing VR video using ODS presents a number of challenges.
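To give a rough sense of how ODS works, the mapping from a panorama pixel to a ray can be sketched in a few lines. The axis conventions, the eye-sign convention, and the 6.4 cm interpupillary distance below are assumptions made for this sketch, not values taken from the paper.

```python
import numpy as np

IPD = 0.064       # assumed interpupillary distance in meters
R = IPD / 2.0     # radius of the ODS viewing circle

def ods_ray(u, v, width, height, eye):
    """Map an equirectangular ODS pixel (u, v) to a ray (origin, direction).

    eye is +1 or -1 to select one eye or the other; the sign convention,
    like the y-up axis convention, is an assumption of this sketch.
    """
    theta = (u / width) * 2.0 * np.pi - np.pi    # azimuth in [-pi, pi)
    phi = np.pi / 2.0 - (v / height) * np.pi     # elevation in [-pi/2, pi/2]

    # Ray direction on the unit sphere.
    direction = np.array([
        np.cos(phi) * np.sin(theta),
        np.sin(phi),
        np.cos(phi) * np.cos(theta),
    ])

    # The ray origin sits on the viewing circle so that the ray is tangent
    # to it; the two eyes use opposite tangent points.
    origin = eye * R * np.array([np.cos(theta), 0.0, -np.sin(theta)])
    return origin, direction
```

Every column of the panorama thus encodes a slightly different viewpoint for each eye, which is what produces the stereo effect while keeping a standard video layout.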
First, ODS was not originally designed for VR. Playback in a head-mounted display (HMD) introduces distortions, which can make it hard for our brains to fuse the images seen in the left and right eyes. We carefully analyzed these distortions to determine practical limits on distance and viewing angle and to ensure comfortable playback in HMDs.
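One way to get intuition for where such limits come from is to compare the vergence angle an ODS panorama implies for a point at distance d with the vergence a viewer would experience looking at the real point. The snippet below is a simplified numerical illustration of that idea, not the analysis from the paper; the 3.2 cm circle radius is an assumed value.

```python
import numpy as np

r = 0.032  # assumed radius of the ODS viewing circle, in meters

def ods_vergence(d):
    # The two ODS rays through a point at distance d are tangent to the
    # viewing circle, so they meet at an angle of 2 * arcsin(r / d).
    return 2.0 * np.arcsin(r / d)

def natural_vergence(d):
    # Two eyes at +/- r converging on a point straight ahead at distance d.
    return 2.0 * np.arctan(r / d)

for d in [0.2, 0.5, 1.0, 2.0, 5.0]:
    err_deg = np.degrees(ods_vergence(d) - natural_vergence(d))
    print(f"d = {d:3.1f} m: vergence error ~ {err_deg:.3f} deg")
```

The discrepancy grows quickly as objects approach the camera, which is one reason comfortable playback requires a minimum distance to nearby content.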
There was also no practical system for producing ODS video. To develop the Jump camera rig, we analyzed design-space parameters such as the number of cameras, their field of view, and the rig size to find a “sweet spot” design that can be built with off-the-shelf cameras (it’s a sweet 16). These efforts are the basis for the GoPro Odyssey, the first Jump rig.
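As a very rough feasibility check (ignoring rig radius, lens distortion, and vertical field of view, and not the design-space analysis from the paper), one can ask how wide each camera’s horizontal field of view must be so that every direction around the ring is seen by at least two adjacent cameras, which view interpolation needs:

```python
def min_hfov_deg(n_cameras, coverage=2):
    # With n cameras spaced evenly around a ring, each direction must fall
    # inside `coverage` cameras, so each camera must cover that many times
    # the angular spacing of 360 / n degrees.
    return coverage * 360.0 / n_cameras

for n in (8, 12, 16, 24):
    print(f"{n:2d} cameras -> each needs >= {min_hfov_deg(n):5.1f} deg horizontal FOV")
```

With 16 cameras the spacing is 22.5°, so two-camera coverage needs only about 45° of horizontal field of view, comfortably within what off-the-shelf wide-angle action cameras provide; the real trade-offs in the paper also involve rig size and stereo quality.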
Lastly, producing seamless stitches from multiple cameras is very challenging. We developed an algorithm that automatically stitches seamless high-quality ODS video by performing view interpolation based on a new approach for temporally coherent optical flow. This algorithm is the core of Jump Assembler, which has processed millions of frames of professionally produced VR video.
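To give a flavor of what flow-based view interpolation looks like in general (a generic warp-and-blend sketch, not the actual Jump Assembler implementation), the function below synthesizes a view between two adjacent cameras. Here flow_ab and flow_ba are assumed to be dense optical flow fields between the two images, and alpha picks the in-between viewpoint.

```python
import numpy as np
import cv2

def interpolate_view(image_a, image_b, flow_ab, flow_ba, alpha):
    """Synthesize a view a fraction `alpha` of the way from camera a to camera b."""
    h, w = image_a.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    grid = np.stack([grid_x, grid_y], axis=-1).astype(np.float32)

    # Backward-warp each source image part of the way toward the virtual
    # viewpoint (a standard approximation that assumes the flow is smooth).
    map_a = (grid - alpha * flow_ab).astype(np.float32)
    map_b = (grid - (1.0 - alpha) * flow_ba).astype(np.float32)
    warped_a = cv2.remap(image_a, map_a[..., 0], map_a[..., 1], cv2.INTER_LINEAR)
    warped_b = cv2.remap(image_b, map_b[..., 0], map_b[..., 1], cv2.INTER_LINEAR)

    # Cross-fade the two warped images; a production stitcher must also
    # handle occlusions, exposure differences, and seam placement.
    return (1.0 - alpha) * warped_a.astype(np.float32) + alpha * warped_b.astype(np.float32)
```

Roughly speaking, such in-between views are needed because each column of the ODS panorama corresponds to a viewpoint that falls between two physical cameras on the rig.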
Here are a few animations that show how Jump Assembler works:
At the core of our view interpolation algorithm is a new temporally coherent optical flow algorithm. Optical flow computes how each camera’s image transforms into its neighbor’s image, allowing us to synthesize any viewpoint in between.
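As a toy illustration of why temporal coherence matters (using OpenCV’s Farnebäck flow and a simple exponential filter, not the algorithm described in the paper), one can damp frame-to-frame flicker by blending each frame’s flow estimate with the previous one:

```python
import cv2

def smoothed_flow_sequence(frames_a, frames_b, blend=0.7):
    """frames_a and frames_b are corresponding grayscale frames from two adjacent cameras."""
    prev_flow = None
    flows = []
    for a, b in zip(frames_a, frames_b):
        flow = cv2.calcOpticalFlowFarneback(
            a, b, None, pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        # Reuse part of the previous frame's flow so that noisy per-frame
        # estimates do not make the stitched video shimmer over time.
        if prev_flow is not None:
            flow = blend * flow + (1.0 - blend) * prev_flow
        flows.append(flow)
        prev_flow = flow
    return flows
```

Without some form of temporal consistency, small differences in per-frame flow estimates show up as visible wobble in the stitched video.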
For more information on Jump, visit https://vr.google.com/jump/. To read our SIGGRAPH Asia 2016 technical paper about the development of Jump, visit https://research.google.com/pubs/pub45617.html