Google Expeditions.

While on a long-term contract at educational publisher HMH, one of my major projects was for Google Expeditions, a stereoscopic 360 VR tool that provides immersive panoramas and virtual tours for schools and students. Our project centred on annotated stills of medieval life and the Vikings.

The crew chipped off to some downright epic locations, such as Swords Castle, Glendalough and Kerry County Museum for the shoots.  The camera they used was a Google Jump, a rig of 16 GoPros mashed together on a quantum level, which supplies you with 8192 x 8192 stereo 360 images.  The main technical limit of the camera is that the top and bottom of the image are blurry strips, as the rig has no lenses pointing up or down.  It’s effectively a cylinder, not a sphere.

One other thing, and it’s an easy mistake to make – this is stereo 360 footage, not true VR.  VR proper is where the environment is built in real time with tools like Unity or Unreal Engine, hence the name: virtual reality.

This is the 360 version…

What it is:

So then, Google Expeditions is a learning tool for teachers, schools and students, with over 800 VR and AR tours and experiences.  You need something like a Vive or Oculus Rift to get the richest experience, although a Samsung Gear VR, or even a Google Cardboard viewer, will do the job.

Strap on your goggles and you’re good to go.  All of this was shot and rendered in stereo 3D, which meant double the fun/pain when it came to compositing and grading.  I had done just one stereo 3D job back in 2011, and it was stressful at best.  But this was different.  The tech has vastly improved since, but the process for stereo 3D work is still a pain in the ass, because stereo 360 itself is still so new.

We had several utterly intrepid shoots across the country, in locations like Swords Castle, Glendalough and a studio/warehouse in Kerry, with a condescension of actors and a troop of medieval/Viking re-enactors.  We dotted these guys around the scenes, shot some jaw-dropping material with the Jump camera, and then had Google’s software stitch it all together into a top/bottom stereo 3D 360 image.  These all look pretty ludicrous when flattened out for PS or AE, but viewed through an Oculus Rift or similar, it’s something totally different.

What I did:

A large amount of PS compositing of stills to build crowds out of three or four actors; masking and painting out items in a way that wouldn’t break the stereo 3D images; compositing Element 3D models into the scenes; adding mist, fog, replacement skies, cows, smoke, fire, general death – all that sort of thing.

Project workflow:

The latest version of AE now has 360 plug-ins as standard, which goes a long way to getting the basics done. Usually you’ll get the stills in as .EXRs, which are massive and resource-intensive. Export them from PS as .PNGs and you’ll be in a better place. If you don’t have a beast of a workstation (like the Gangster of Love, more on that below), you might want to use proxies for your footage and stills.
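If you’ve got a pile of EXRs to convert, Photoshop’s scripting can do the grunt work. Here’s a rough sketch, not gospel – it assumes a folder of 32-bit .exr stills and just accepts Photoshop’s default HDR-to-8-bit conversion; the dialog prompt and naming are mine:

    #target photoshop
    // Batch convert a folder of .exr stills to 8-bit PNGs.
    app.displayDialogs = DialogModes.NO; // take defaults, skip the HDR toning dialog

    var src = Folder.selectDialog("Pick the folder of .exr stills");
    var files = src.getFiles("*.exr");

    for (var i = 0; i < files.length; i++) {
        var doc = app.open(files[i]);
        doc.flatten();
        doc.bitsPerChannel = BitsPerChannelType.EIGHT; // PNG can't hold float data
        var name = decodeURI(files[i].name).replace(/\.exr$/i, ".png");
        doc.saveAs(new File(src + "/" + name), new PNGSaveOptions(), true);
        doc.close(SaveOptions.DONOTSAVECHANGES);
    }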

For some reason I ran into a ton of problems with the .mp4 footage from the sieges – no idea why.

This is the workflow that worked for me on my high-end workstation, the Gangster of Love, fully equipped with a Titan X GPU, 64GB of RAM and dual 3.1GHz Xeons.  And even with all that, holy shiiiiiiiiiiit, I could’ve done with even more power under the hood.  The main thing to have (big fuck-off GPU aside – you need a GTX 980 Ti at minimum) is a large cache drive.  I swapped mine halfway through from a 120GB SSD to a 500GB SSD, and it made a huge difference to AE’s performance.

1.  Prep your footage/stills

First of all, I used PS to mask out whatever portion of the stills needed it, starting on the top/left eye and using paths, as they’re editable and salvageable.  Then alt-drag the path down over the bottom/right eye and adjust as needed.  Take the time to get this right here, as it will save a lot of trauma in the long run.  Clone-stamping out elements needs to be done in a similar manner, usually using the copied path and a clone source from the right eye view for the right eye, instead of copy-pasting the whole left eye patch, as that looks odd in stereo 3D (not that you can tell at this stage).

Take the patched-up stereo 3D image (top and bottom) into a new comp, half the height of the image.
Centre it up on the top part so all you can see is the top image and export.
Repeat for the bottom part.
Make sure to name them something like Image_01, Image_02 so that when you re-import them, AE sees them as a two-frame image sequence.
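All that comp-shuffling can also be scripted in PS if you’ve got a lot of stills to split. A minimal sketch, assuming the active document is the patched-up top/bottom stereo image – the helper function and output naming are just mine:

    #target photoshop
    // Split the active top/bottom stereo still into two half-height PNGs,
    // named so AE will import them as a 2-frame image sequence.
    app.preferences.rulerUnits = Units.PIXELS; // crop bounds in pixels

    var doc = app.activeDocument;
    var w = doc.width.as("px");
    var h = doc.height.as("px");
    var out = Folder.selectDialog("Pick an output folder");

    function exportHalf(top, name) {
        var dup = doc.duplicate(name, true);  // flattened duplicate
        dup.crop([0, top, w, top + h / 2]);   // keep one eye's half
        dup.saveAs(new File(out + "/" + name + ".png"), new PNGSaveOptions(), true);
        dup.close(SaveOptions.DONOTSAVECHANGES);
    }

    exportHalf(0, "Image_01");      // top half = left eye
    exportHalf(h / 2, "Image_02");  // bottom half = right eye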

2.  Create a fake camera move

New comp with your two-frame image sequence.
Make it 3 seconds long.
Time remap the layer and drag the second keyframe out to the end of the comp.
Turn on frame blending for the layer (hit the switch TWICE, for Pixel Motion) and for the comp.
Now you have a 3-second sequence with a fake, but legit and usable, camera move going from the left eye view to the right.
When you track this camera move, it gives you the camera info you need for both the left and right eyes.
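For what it’s worth, that setup can be scripted too. A rough sketch, assuming the two-frame sequence is the selected layer in the active 3-second comp:

    // After Effects ExtendScript – fakes the left-to-right-eye camera move.
    var comp = app.project.activeItem;    // the active 3-second comp
    var layer = comp.selectedLayers[0];   // the 2-frame image sequence

    comp.frameBlending = true;            // comp-level frame blending
    layer.timeRemapEnabled = true;        // AE adds start and end keyframes
    layer.outPoint = comp.duration;       // let the layer run the full 3 seconds

    var remap = layer.property("ADBE Time Remapping");
    remap.setValueAtTime(comp.duration, remap.keyValue(2)); // right eye at the end
    remap.removeKey(2);                   // drop the original second keyframe
    layer.frameBlendingType = FrameBlendingType.PIXEL_MOTION; // the "hit it twice" mode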

For now, keep the playhead at frame 0 – this is the left eye view. Because we made a fake camera move between the two eye views, the final frame is the right eye view, so keep the playhead at 0 and do EVERYTHING on the left eye at frame 0.

3.  Convert your comp into a VR comp

Open the VR Comp Editor window.
Hit Add 3D Edit, select your comp, set the comp width to whatever it is (usually 8192) and the ratio to 1:1.
AE now runs some scripts and builds a folder full of new comps. Don’t touch them. Just don’t. No need whatsoever. The only three that will concern you are the VR Edit comp (put Element 3D and other 3D elements here), the Precombined comp (colour grading and particles) and the VR Final Output comp (the final export – yep, really…).

4.  Track the camera

With that VR window still open, hit Properties and apply the 3D Camera Tracker. Let it do its thing – this is processor-dependent, so the more GHz you’ve got, the better.
Tracking points appear when it’s done, so select a few and, with the 3D Camera Tracker effect selected, hit Create Camera.
Then, in the VR properties window, hit Track Scene and the new tracked camera links to the VR camera. Progress!

5.  Edit VR // VR Precomp Combined // VR Final Output

The VR Edit comp is where you put Element 3D stuff.
You MUST use the buttons on the VR Comp Editor panel to jump back and forth between the edit comp and the final comp. It uses scripts and heavy expressions to update the final output comp.
Grading and fire, smoke, masking, etc. go in the second-last composition, the VR Precomp Combined comp. There’s nothing to stop you using the VR Edit comp for grading and so on, but this worked better for me.
Keep checking your work through the Oculus – the headset seems to brighten everything up a lot.

?

Profit.

Now render out a still when you’re happy with everything. This will be your left eye image.

IN THEORY THIS WORKS –

6.  Getting your Element 3D models, smoke, fire, etc. to work in the right eye view

Open a new project and import your left eye image and the old project.

New comp – this will be the stereo 3D comp – so it’s 8192 x 8192.
Drag in your rendered still and put it on top, then put the VR Final Output comp underneath – you’ll have two identical images.
To make it stereo 3D, you’ll need to move the solved camera a little to the left using expressions:
Add a null layer, add a Slider Control effect to it and rename it “distance” or something.
Drag the timeline to a different part of the AE workspace so you can see two timelines at the same time.
Make sure your VR Edit comp and new stereo 3D comp timelines are both visible.
Separate the XYZ dimensions on the camera’s Position.
Delete the Y and Z keyframes.
Alt-click the X value of the camera – it changes to transform.xPosition – then type + and pick-whip it to your slider control.
Changing the slider moves the camera left and right, and you can see it in real time in your new final stereo 3D comp.
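For reference, the expression that ends up on the camera’s X Position looks something like this (assuming the null is named “distance” and the effect keeps its default “Slider Control” name):

    // AE expression on the camera's separated X Position:
    transform.xPosition + thisComp.layer("distance").effect("Slider Control")("Slider")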

HOWEVER

Sometimes you won’t need to do this at all – if you’ve built the scene properly, the 3D elements generally line up correctly in both eyes. Not always, but often enough.

7.  Exporting / Rendering / Basic Troubleshooting

Render it all out, top and bottom.

Keep checking with your headset to make sure everything overlaps correctly – having a Rift or Vive connected to your machine via HDMI makes this a lot faster and easier to finalise than something like Cardboard or Gear VR, where you need to export the image and get it onto your phone.

When you get blurs in things like grass or sand, it usually means you’ve reused the same pattern, clone brush or mask on both the left and right eyes, when in fact they need to be slightly different – that’s where it becomes a huge, glorious clusterfuck.  No doubt I missed some tiny detail that would’ve saved a ton of pain.

And this is a flat/2D version for the portfolio…