Virtual Reality

How does Google Jump work? Seeing is believing!

Virtual Reality (VR) – something that blows your mind when it comes to ‘reality’. Since Google foresaw the demand for virtual reality, they introduced Google Cardboard at Google I/O 2014. As at every year’s I/O, Google instigated a smart leap in the ever-growing field of technology. This year’s I/O brought Google Jump!

This blog post answers what Google Jump is and how it makes ‘seeing is believing’ real.

So, what does Jump enable? Jump lets any creator capture the world in VR video – video you can step into and experience as if you are actually there, and of course, make available to everyone.

How Does Jump Work?

Jump includes three parts:

  • Camera Geometry: Captures the video.
  • Assembler: Turns the raw footage into VR video.
  • Player: Helps to view the VR video with your smartphone and cardboard.

Fig.1 Jump Overview

So far so good. Okay, let’s have a glance at those three parts.

1. Camera Geometry

Using a very specialized geometry, 16 camera modules are mounted in a circular fashion. Critically, this geometry specifies the size of the rig, the number and placement of the cameras, the field of view (FOV), the relative overlap between the cameras, etc. Google plans to share the geometry with every ‘DIY’ enthusiast, as it did with Google Cardboard. Alternatively, you can get a Jump-ready 360-degree camera rig made professionally by GoPro, with whom Google has already partnered. Either way, you will have to wait until late June.
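To get a feel for what such a circular geometry involves, here is a small sketch that places 16 outward-facing cameras on a ring and computes how much adjacent fields of view overlap. The rig radius and FOV values are illustrative assumptions, not Jump’s published specs:

```python
import math

def ring_camera_layout(num_cameras=16, rig_radius_cm=14.0):
    """Place cameras evenly on a ring, each facing outward.
    Radius and count are illustrative, not Jump's actual numbers."""
    cameras = []
    for i in range(num_cameras):
        angle = 2 * math.pi * i / num_cameras
        cameras.append({
            "index": i,
            "x_cm": rig_radius_cm * math.cos(angle),
            "y_cm": rig_radius_cm * math.sin(angle),
            "yaw_deg": math.degrees(angle),  # outward-facing direction
        })
    return cameras

def angular_overlap_deg(num_cameras=16, fov_deg=120.0):
    """How much of the horizontal FOV adjacent cameras share."""
    spacing = 360.0 / num_cameras   # 22.5 degrees between neighbours
    return fov_deg - spacing

layout = ring_camera_layout()
print(len(layout), angular_overlap_deg())  # 16 cameras, 97.5 degree overlap
```

The point of the wide overlap is that every direction around the ring is seen by several cameras at once, which is what makes the depth estimation in the assembler step possible.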

Fig.2 Camera rig made by GoPro
2. Assembler

“This is where the Google magic really begins,” says Clay Bavor, VP of Product Management at Google. Even after rough alignment, global color correction, and exposure compensation, the raw camera data may still have 3D alignment glitches.

By understanding the underlying structure of the current scene, the assembler fixes these 3D alignment glitches.

Fig.3 3D alignment glitch after global color correction and exposure compensation by the camera rig

The 3D alignment works by knowing the depth of each object in the current scene, which also allows the assembler to create all of the in-between interpolated viewpoints – this is what gives the scene its convincing 3D effect.

Fig.4 Camera rig captures thousands of viewpoints

Thus, in short, the assembler takes the 16 different video feeds, synthesizes thousands of in-between viewpoints everywhere along the circumference, and combines them into the final high-resolution, depth-corrected, stereoscopic VR video.
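The idea of synthesizing in-between viewpoints from depth can be sketched in a toy form: pixels are shifted by a fraction of their disparity, so nearby objects (large disparity) move more between views than distant ones. This is a hypothetical stand-in for the assembler, whose real algorithm Google has not published in detail:

```python
import numpy as np

def interpolate_viewpoint(left_row, disparity, t):
    """Toy depth-based view interpolation on one image row.
    t in [0, 1] slides from the left camera's view (t=0)
    toward the right camera's view (t=1)."""
    width = left_row.shape[0]
    out = np.zeros_like(left_row)
    for x in range(width):
        # Nearer pixels (larger disparity) shift farther between views.
        new_x = int(round(x - t * disparity[x]))
        if 0 <= new_x < width:
            out[new_x] = left_row[x]
    return out

row = np.arange(8, dtype=float)             # dummy pixel intensities 0..7
disp = np.array([0, 0, 0, 4, 4, 0, 0, 0])   # a "near" object at x = 3..4
mid = interpolate_viewpoint(row, disp, 0.5)  # halfway viewpoint
```

In the halfway view, the near object has shifted left by half its disparity, leaving a hole where it used to be – real systems fill such occlusion holes using the overlapping neighbouring cameras.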

Fig.5 Depth corrected imagery

Yet more happy news from Google – they will be making this assembling power available to selected creators worldwide by this June.

3. Player

The one prominent question – where can people watch these brilliantly crafted videos? Here is the answer – our video-browsing companion for years: YouTube! By June, YouTube will support Jump, so you can make your videos available to everyone around the world.

For the time being, you can check out basic non-stereoscopic 360-degree content on YouTube.

In short, all you need is the YouTube app, a smartphone compatible with Google Cardboard, and of course the Cardboard itself for the visual treat. Near things look near, far things look far, you can look around, and feel like you are there!

Some interesting links:

  • Examples of non-stereoscopic 360-degree content available on YouTube. Catch it with Google Cardboard.