The Basement

The Website is Down Adventure Game starts off with you locked in your parents' basement. For the sake of accuracy, and for my own personal entertainment, I decided to use my parents' actual basement as the model. However, shockingly, "Weinberg family basement" turns up nothing on TurboSquid! Neither does "fake stained-glass plastic candelabra" or "busted olive green fondue set circa 1975". Modeling this stuff out by hand would take a decade if I were good at modeling, and I suck at modeling! So I thought, "What if I just did the most obvious thing possible and showed the player a picture of the real basement from wherever they go?"
My parents’ basement.
In a nutshell

The idea was to capture a 360° panoramic image from every point in the basement and then display that image when the player is in the corresponding spot in the virtual world. Basically, it's like what would happen if Google Street View took a picture every quarter of an inch instead of every fifteen feet. As the player moves around they see a constantly updated view of what the room actually looks like from wherever they are and in whatever direction they are looking. Note that I'm not extracting texture data and mapping it onto 3D objects. The visuals are literally photographs of the actual location, at a high enough density to allow smooth navigation.
Practically speaking, there are of course far too many locations in the room to literally take a picture from each one. So I first restricted the player's movement to a set of line segments along a carefully chosen path. This movement restriction is justified by filling the extra space with static "stuff": boxes and general basement junk which you would not be able to walk through anyway. I then used a 2-inch interval for camera positions along each of the line segments. This gave enough images to allow for decent frame interpolation (bringing the effective spacing down to ¼", i.e. roughly 8× as many frames; see below) without being too time consuming to record. Principal photography of the 187 camera positions took about ten hours in total.
I ran the resulting image sequence through a post-processing workflow to extract the 3D camera path, stabilize and interpolate the images, and generate placeholder objects for collisions and shadows.
Here is a brief video showing the various parts of the workflow, which I describe in detail below.
Details!

Capturing the Panoramas

Each panoramic photo in this project is composed of 4 individual full-frame fisheye images taken at 90° angles to each other and then stitched together into an equirectangular projection. You can see the fisheye images at the top and the corresponding equirectangular projection on the bottom of the following image. To record the images I used a Nikon D5500 DSLR with a full-frame fisheye lens and a Nodal Ninja panoramic tripod head. The "pano-head" allows for accurate rotations to be made around the lens's nodal point, which is important to avoid parallax distortion in the output.
Fisheye to Equirectangular
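If you're wondering what "equirectangular" actually means: the projection just maps longitude linearly to the image's x axis and latitude to its y axis, which is why the output is exactly twice as wide as it is tall. Here's a minimal sketch of that mapping, purely for illustration (it's not code from any of the tools in the pipeline):

```csharp
using System;

static class Equirect
{
    // Map a unit view direction to (u, v) in [0,1]² on a 2:1 equirectangular image.
    // Longitude (yaw) maps linearly to u, latitude (pitch) maps linearly to v.
    public static (double u, double v) DirectionToUV(double x, double y, double z)
    {
        double longitude = Math.Atan2(x, z);                      // -π .. π around the vertical axis
        double latitude  = Math.Asin(Math.Clamp(y, -1.0, 1.0));   // -π/2 .. π/2 up/down

        double u = 0.5 + longitude / (2.0 * Math.PI);
        double v = 0.5 - latitude / Math.PI;                      // v = 0 is straight up
        return (u, v);
    }
}
```

Each of the four fisheye shots fills in its own slice of that rectangle, and the stitcher blends the overlaps.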
I considered using one of the all-in-one panoramic cameras on the market but decided against it for a number of reasons. Compared to a similarly priced DSLR they have lower resolution and a narrower exposure range. The affordable ones also lack some critical features like aperture lock and RAW recording. Plus, a full 360° field of view means that there is nowhere for the operator to hide! I would have had to leave the room for every exposure or take multiple shots anyway, which negates the point of having an all-in-one solution. With the pano-head I was able to generally just move around behind the camera for each shot, which was not a bad workflow.
Since I was essentially shooting a stop-motion animation I had to deal with the changes in lighting caused by the movement of the sun. I addressed this by spreading the shooting out over a number of days, shooting for a few hours at the same time each day. I also used a window box covered with diffusion gels outside the window to even out the light and avoid direct sunlight shafts.
In order to reduce the footprint of the tripod and minimize the amount of post-processing needed to remove it, I used a custom-built wooden spreader. This brought the legs of the tripod into a narrow 12-inch radius. It also made it possible to simply push the tripod along the floor without picking it up and resetting it every time, which would have been time-consuming and error-prone.
Here look at some stuff.
Pano Head with Nikon | Ultraviolet Positioning Marks
Find a handy friend to make you a triangle | Tripod on custom spreader
The most difficult part of the initial photography was getting an accurate position and orientation of the camera for each panorama. For the effect to work the images need to be taken from very evenly spaced positions along the paths and the camera orientation needs to stay consistent. The placement issue was solved by drawing out a path on the floor using ultraviolet ink which is invisible on camera but which can be illuminated with a blacklight for positioning. I marked off two inch intervals along the paths and was able to position the tripod very precisely using those marks.
The camera orientation was a more difficult problem. I attempted to use both a compass and a laser pointer attached to the body of the camera to keep it aligned to a consistent direction. This gave a rough orientation at best since even a small rotation of the tripod was very obvious when the images were viewed in succession. I eventually solved this problem very effectively using the stabilization feature of my match-moving software which I will discuss below.
Once the images were captured I stitched them together using PTGui Pro to generate 187 4K (4096x2048) PNG images. These images were brought into Premiere as an image sequence for a rough alignment to prep them for the following steps.
InterpolationWith a 2" separation between images there is a detectable stutter to the movement when played back. It feels like an old-timey movie. To remedy this I processed the sequence with an optical flow plugin for After Effects called
Twixtor. Designed to generate slow-motion from full frame rate video it did an excellent job of creating new frames in between my original images.
Color Grading

The next post-production step was to take the interpolated images and apply a color grade in DaVinci Resolve. This allowed me to give the scene a more dramatic, film-like look. The basic workflow is to find a white point and black point in the scene, do a color balance, and then apply color modifications. I also was able to smooth out some of the erratic lighting caused by the stop-motion nature of the image capture.
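Resolve does all of this interactively, but the white point / black point step boils down to a simple per-channel remap. A rough sketch of the idea (purely illustrative, not anything exported from Resolve):

```csharp
using System;

static class Levels
{
    // Remap a channel value so blackPoint lands on 0 and whitePoint lands on 1,
    // then apply a gamma tweak for the midtones. Inputs outside the range clamp.
    public static double Adjust(double value, double blackPoint, double whitePoint, double gamma = 1.0)
    {
        double normalized = (value - blackPoint) / (whitePoint - blackPoint);
        normalized = Math.Clamp(normalized, 0.0, 1.0);
        return Math.Pow(normalized, 1.0 / gamma);   // gamma > 1 lifts the midtones
    }
}
```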
Match Moving

Match moving software is magic. Using only a sequence of images as input it can calculate the actual 3D position of objects in the scene as well as the movement and orientation of the camera which filmed it. Modern match moving software works by analyzing general features in the images themselves and does not require any special trackers or locators. I used SynthEyes, which has built-in support for equirectangular (panoramic) images.
Stabilization

First the images were imported into SynthEyes as a sequence and automatically stabilized. This stabilization step by itself was worth the price of admission. The resulting images were far better aligned and more stable than anything I achieved with my manual measurement approaches. After stabilization I used a keyframed rotoscope mask to designate areas of the scene which should not be used for feature tracking. These areas were mainly the shadow of the camera and some lens flares which moved across the view.
Interpolated, color graded and stabilized. (full rez)
Tracking

Tracking was run using SynthEyes to automatically find hundreds of trackable image features and to "solve" for their 3D locations. This also computed the camera movement as a path with an orientation at each point. Lastly a coordinate system was defined using three positions picked on the floor of the basement. This gives SynthEyes the information it needs to correctly orient the 3D scene and makes working with the data downstream more convenient.
Match Moving: Tracking features with SynthEyes. (full rez)
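SynthEyes handles the coordinate-system setup itself, but conceptually it's just building an orthonormal frame from three points: one becomes the origin, one fixes an axis, and the third pins down the floor plane. A sketch of that construction (illustrative, not SynthEyes code):

```csharp
using UnityEngine;

static class FloorFrame
{
    // Build a floor-aligned frame from three tracked points on the floor:
    // 'origin' becomes (0,0,0), 'xPoint' defines the +X direction and
    // 'planePoint' is any third point that pins down the floor plane.
    public static Matrix4x4 FromThreePoints(Vector3 origin, Vector3 xPoint, Vector3 planePoint)
    {
        Vector3 xAxis = (xPoint - origin).normalized;
        Vector3 up = Vector3.Cross(xAxis, planePoint - origin).normalized; // floor normal (sign depends on point order)
        Vector3 zAxis = Vector3.Cross(xAxis, up).normalized;               // completes the frame

        // World-from-floor transform; invert it to bring tracked points into floor space.
        return Matrix4x4.TRS(origin, Quaternion.LookRotation(zAxis, up), Vector3.one);
    }
}
```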
Compare the hand-plotted diagram I used to lay out the camera path on the floor of the basement with the 3D camera path SynthEyes found by analyzing the images:
Camera path as plotted by hand
Camera path found by feature tracking in SynthEyes
At this point the camera path was exported into a text file which I processed with OpenSCAD to generate the movement barrier object mentioned above. The barrier was exported as an FBX object for use in the Unity scene later.
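The OpenSCAD step is nothing exotic. Roughly, the idea is to sweep a corridor along the path and subtract it from a solid block, leaving walls the player can't get past. Here's a sketch of one way to do that conversion; the file names and dimensions are placeholders, not the exact values used in the project:

```csharp
using System;
using System.Globalization;
using System.IO;
using System.Linq;

class PathToScad
{
    // Reads "x y" path positions (one per line) and writes an OpenSCAD script that
    // carves a narrow corridor along the path out of a solid slab. The leftover
    // solid is the movement barrier. Names and sizes are placeholder values.
    static void Main()
    {
        string Num(double v) => v.ToString("F2", CultureInfo.InvariantCulture);

        var pts = File.ReadLines("camera_path.txt")
            .Select(l => l.Split(' ', StringSplitOptions.RemoveEmptyEntries))
            .Select(p => (x: double.Parse(p[0], CultureInfo.InvariantCulture),
                          y: double.Parse(p[1], CultureInfo.InvariantCulture)))
            .ToList();

        using var scad = new StreamWriter("barrier.scad");
        scad.WriteLine("difference() {");
        scad.WriteLine("  translate([-200, -200, 0]) cube([400, 400, 72]);  // slab covering the room");
        scad.WriteLine("  union() {");
        for (int i = 0; i + 1 < pts.Count; i++)   // sweep a rounded corridor between samples
        {
            scad.WriteLine("    hull() {");
            scad.WriteLine($"      translate([{Num(pts[i].x)}, {Num(pts[i].y)}, 0]) cylinder(h = 72, r = 6);");
            scad.WriteLine($"      translate([{Num(pts[i + 1].x)}, {Num(pts[i + 1].y)}, 0]) cylinder(h = 72, r = 6);");
            scad.WriteLine("    }");
        }
        scad.WriteLine("  }");
        scad.WriteLine("}");
    }
}
```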
Making it Playable

I used Unity as my game engine. To assemble the prototype I first brought in the 3D barrier object and gave it a mesh collider to keep the player locked into the correct view path. An FPS character was created with a very narrow capsule collider which fit just into this path.
For the panoramic background I created a large sphere around the camera and assigned a material using a non-culling shader so the inside faces of the sphere get drawn. The core of the system is a script which determines the correct image to display on the sphere for the current player position. This script takes the camera path from SynthEyes and, based on the known number of locations along the total path, calculates the index of the image to show for the current position. That image is then copied to a render texture which is applied to the sphere material.
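Stripped way down, that script looks something like the following sketch. It's not the actual game code: the path here is treated as one flat list of points and every frame is preloaded into a Texture2D array, which you wouldn't really do with this many 4K images.

```csharp
using UnityEngine;

// Picks the panorama frame that matches the player's position along the path
// and blits it onto the background sphere. Simplified sketch for illustration.
public class PanoBackground : MonoBehaviour
{
    public Transform player;            // the FPS character
    public Vector3[] pathPoints;        // camera path samples exported from SynthEyes
    public Texture2D[] frames;          // one panorama per path sample (after interpolation)
    public Renderer sphereRenderer;     // big inside-out sphere around the camera
    public RenderTexture target;        // render texture assigned to the sphere material

    void Update()
    {
        // Find the path sample closest to the player and show its panorama.
        int best = 0;
        float bestDist = float.MaxValue;
        for (int i = 0; i < pathPoints.Length; i++)
        {
            float d = (player.position - pathPoints[i]).sqrMagnitude;
            if (d < bestDist) { bestDist = d; best = i; }
        }

        Graphics.Blit(frames[best], target);
        sphereRenderer.material.mainTexture = target;
    }
}
```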
So now the player can walk around a virtual replica of my parents' basement. Exhilarating!
Next Steps

Next Steps! Get it? Oh man. So yeah, unfortunately the thrill of just walking around the room with nothing else to do wears off pretty quickly (like my hilarious blog commentary, I'm sure). In order to make this into a game it's necessary to have some interactive elements. In my next post I'll get into how I made 3D objects interact with the fake background and how I got shadows on it. Also, I can tell you a bedtime story about WebM to DXT extraction! I know you can't wait, but you have to.
-josh