True Orthovideo for Geographic Immersive Virtual Environments


If you wondered why postdoctoral researcher Tom Pingel and his undergraduate assistant Willie McBride were messing around with a big balloon on campus a couple of weeks ago, the following may enlighten you: “True Orthovideo for Geographic Immersive Virtual Environments” by Thomas J. Pingel and William McBride:

As part of the wider Automation and Visualization in Geographic Immersive Virtual Environments project, we seek to embed data derived from audio and video sources into our models. Part of this involves the extraction of three-dimensional geodata from the video stream, but we also seek to overlay the video directly onto the model using image texturing techniques. X3D directly supports video texturing, which was one strong reason for its adoption as the model’s native language.
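
As a minimal sketch of what that texturing looks like in practice, the following Python assembles an X3D Shape whose Appearance uses the standard MovieTexture node. The video filename and the simple quad geometry are placeholders of ours for illustration, not the project’s actual model.

```python
def movie_texture_shape(video_url: str) -> str:
    """Return an X3D Shape whose surface plays a video via MovieTexture.

    MovieTexture is a standard X3D node; the quad geometry and the
    video filename used below are placeholders for illustration only.
    """
    return f"""<Shape>
  <Appearance>
    <MovieTexture url='"{video_url}"' loop='true'/>
  </Appearance>
  <IndexedFaceSet coordIndex='0 1 2 3 -1'>
    <Coordinate point='0 0 0, 10 0 0, 10 10 0, 0 10 0'/>
  </IndexedFaceSet>
</Shape>"""


print(movie_texture_shape("flight01.mp4"))  # hypothetical clip from a flight
```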

While we are interested in incorporating ground-based video streams into the model, we are particularly inspired by Grassroots Mapping, an organization dedicated to low-cost, readily deployable tethered aerial photography from balloons and kites. Our colleagues at UCSB, Alan Glennon and Kitty Currier, are working on kite photography; we have elected to base our platform on the balloon. We use a Mylar balloon constructed from emergency blankets and lifted by helium. Solar balloons offer the potential to capture video without the need for helium at all.

We use the Android-based Motorola Droid as our sensing platform because it allows us to record audio and video and to capture movement through its onboard GPS, digital compass, and accelerometer. The overhead imagery we collect makes object positions somewhat easier to capture than a ground-based video feed does, and we often launch a second Droid because we are also interested in extracting 3D objects via stereoscopic vision. A Wi-Fi link to the Droid allows us to stream the video and location data as they are captured. We typically launch the balloons to a height of 50 to 100 meters. Given the limited range of heights available to us, we are also planning to fly a GoPro camera with a fisheye lens so that we can record a much larger field of view.
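
For the stereoscopic side, the sketch below uses OpenCV’s block matcher to recover a disparity map, and from it depth, from a synchronized pair of frames. This is our illustration rather than the project’s code: the filenames, focal length, and baseline are assumed values, and the pair is assumed to be rectified.

```python
import cv2
import numpy as np

# Frames captured at the same instant by the two Droids; the filenames are
# placeholders, and the pair is assumed to be rectified (rows aligned).
left = cv2.imread("droid_a_frame.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("droid_b_frame.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: disparity (the pixel shift between the two views)
# is inversely proportional to depth. StereoBM returns fixed-point values
# scaled by 16, hence the division.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# From similar triangles: Z = f * b / d, where f is the focal length in
# pixels and b the baseline between the cameras in meters. Both values
# below are assumptions; a real pair would need calibration.
f_px, baseline_m = 700.0, 1.0
depth_m = f_px * baseline_m / np.maximum(disparity, 0.1)  # clamp to avoid /0
```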

As with all other aerial imagery, the video frames must be orthorectified before they can be projected onto a map. While coarse terrain correction is standard in orthophoto production, the sharp discontinuities in ground surfaces derived from lidar present interesting challenges for high-resolution orthoimagery creation. Our goal is to develop a methodology that allows us to merge video streams with lidar-derived ground surfaces based on the position and orientation data captured directly from the sensor, rather than through feature identification and the placement of targets in the image. For a static-image example of the type of problem we are working on, see J. Kersting’s animation of a fused true orthophoto and lidar at the University of Calgary.
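
As a minimal sketch of the geometry involved, the Python below (our illustration, not the project’s actual pipeline) back-projects each cell of a lidar-derived surface grid into a single video frame using an assumed pinhole camera model; the pose inputs stand in for the GPS and compass/accelerometer readings described above.

```python
import numpy as np

def drape_frame(frame, dsm, origin, cell, cam_pos, R, K):
    """Back-project one video frame onto a lidar-derived surface grid.

    frame   : (H, W, 3) video frame
    dsm     : (rows, cols) lidar surface elevations, north-up grid
    origin  : (x0, y0) world coordinate of the grid's upper-left corner
    cell    : ground sample distance in meters
    cam_pos : (3,) camera position, e.g. from GPS
    R       : (3, 3) world-to-camera rotation, e.g. from compass/accelerometer
    K       : (3, 3) pinhole intrinsic matrix
    """
    rows, cols = dsm.shape
    # World coordinates of every surface cell center.
    xs = origin[0] + (np.arange(cols) + 0.5) * cell
    ys = origin[1] - (np.arange(rows) + 0.5) * cell
    X, Y = np.meshgrid(xs, ys)
    pts = np.stack([X, Y, dsm], axis=-1).reshape(-1, 3)

    # Rotate into the camera frame and project through the pinhole model.
    cam = (pts - cam_pos) @ R.T
    uvw = cam @ K.T
    z = np.where(uvw[:, 2] > 0, uvw[:, 2], np.nan)  # drop points behind camera
    u, v = uvw[:, 0] / z, uvw[:, 1] / z

    h, w = frame.shape[:2]
    valid = np.isfinite(u) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Color each visible cell from the frame; cells the frame never saw stay black.
    ortho = np.zeros((rows, cols, 3), dtype=frame.dtype)
    ortho.reshape(-1, 3)[valid] = frame[v[valid].astype(int), u[valid].astype(int)]
    return ortho
```

Note that this simple version ignores lens distortion and, crucially, occlusion: hiding surfaces the camera could not actually see (for example, by z-buffering the lidar surface from the camera’s viewpoint) is what distinguishes a true orthoimage from a conventional one.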

Editor’s note: Tom also posted two video extracts on YouTube here. Many thanks to Professor Keith Clarke for photographing this historic event!

[Photo: Tom and Willie filling the Mylar balloon with helium for the first test flight]
[Photo: Installing the Motorola Droid]
[Photo: Liftoff!]
[Photo: Fishing rods used for control!]
[Photo: Image taken from the first flight next to the Physical Sciences building on the UCSB campus]
