Imagine, if you will, a long hotdog-shaped object lying on the Earth - about 100 feet wide, 100 feet tall, and about a mile long.

Now imagine that hotdog-shaped object is lying on the ground in Google Earth - and it is textured with real ground photography taken from the center of the hotdog.

This is the Immersive Hotdog - the source imagery can come from a video camera collection system such as Immersive Media, or from a standard immersive photography collection system such as iPIX.

These collections have existed in stand-alone viewers for years; however, we haven't yet visualized this hotdog in a truly 3D environment, and I think we should.

So I started a little proof of concept experiment on Christmas Eve.

I didn’t have a fancy Immersive Media video camera, but I did have an iPIX kit which I borrowed to begin exploring this problem.

So, on Christmas Eve I walked down the boardwalk in Manasquan, NJ and took a series of iPIX shots every few houses, and brought along my trusty new QSTAR Solar GPS Bluetooth data logger to capture the precise location of each shot.

I then returned home and synced the GPS log from the QSTAR logger up to the iPIX fisheye photos using the awesome Google Picture Sync (GPicSync) application.
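The sync step works by matching each photo's EXIF timestamp against the time-stamped GPS track. I don't know GPicSync's exact internals, but the core idea can be sketched like this - linearly interpolating between the two log fixes that bracket the photo's timestamp (the function name and data layout here are my own, for illustration):

```python
from bisect import bisect_left
from datetime import datetime

def interpolate_fix(track, photo_time):
    """Estimate a photo's position from a time-ordered GPS track.

    `track` is a list of (datetime, lat, lon) tuples sorted by time.
    We find the two fixes that bracket the photo's timestamp and
    linearly interpolate between them; timestamps outside the track
    clamp to the nearest endpoint.
    """
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    if i == 0:
        return track[0][1], track[0][2]
    if i == len(track):
        return track[-1][1], track[-1][2]
    (t0, lat0, lon0), (t1, lat1, lon1) = track[i - 1], track[i]
    frac = (photo_time - t0).total_seconds() / (t1 - t0).total_seconds()
    return lat0 + frac * (lat1 - lat0), lon0 + frac * (lon1 - lon0)
```

For a walk down a boardwalk this is plenty accurate, since consecutive log fixes are only seconds and a few meters apart.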

Here are the results:

(144 geotagged raw iPIX fisheye shots, which you can download with full GPS EXIF data here.)

(Hint: use the "Download Album" option on the left side to download all the images directly into your own Picasa.)

(Because Google's PicasaWeb supports EXIF data, you can also view all of these images on a Google Map here.)

Each iPIX shot consists of 2 fisheye photos (180º opposed to one another), which we need to stitch into one equirectangular image.
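Conceptually, the stitch works by tracing each pixel of the equirectangular output back to one of the two fisheye source images. Here is a minimal sketch of that mapping, assuming an ideal equidistant fisheye lens model (iPIX's actual lens calibration will differ, and the back-image mirroring convention depends on how the camera was flipped):

```python
import math

def equirect_to_fisheye(u, v, out_w, out_h, fish_cx, fish_cy, fish_radius):
    """Map one equirectangular output pixel (u, v) back to a source
    coordinate on one of two opposed 180-degree fisheye images.

    Assumes an ideal equidistant fisheye (radius proportional to the
    angle off the optical axis). Returns ('front' or 'back', x, y).
    """
    lon = (u / out_w) * 2 * math.pi - math.pi      # -pi .. pi
    lat = math.pi / 2 - (v / out_h) * math.pi      # pi/2 .. -pi/2
    # Unit direction vector for this pixel.
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    if z >= 0:
        which, axis_z = "front", z
    else:
        which, axis_z = "back", -z
        x = -x  # mirror so the back image keeps its left/right sense
    theta = math.acos(max(-1.0, min(1.0, axis_z)))  # angle off optical axis
    r = fish_radius * theta / (math.pi / 2)         # equidistant: r ~ theta
    phi = math.atan2(y, x)
    return which, fish_cx + r * math.cos(phi), fish_cy + r * math.sin(phi)
```

In practice the iPIX software (or a tool like a panorama stitcher) does this with proper lens calibration and blending along the seam, but the geometry above is the heart of it.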

Here’s the first problem - each picture is taken at a slightly different time from its corresponding shot, so the GPS position wanders a little bit in between.

That’s actually a good thing, because we can average the two positions for the pair and find an averaged, perhaps more accurate position somewhere “between” both shots to apply to the single equirectangular image we get when we stitch the photos together.

For example, the 2 shots “exploded” in Google Earth above are only slightly “off” from each other in latitude and longitude, but we would average their positions and apply the result to the image below. (The iPIX software currently does NOT offer this option…so the first challenge is to average both positions from the source images and write the average to the EXIF of the resulting image.)
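The averaging itself is trivial for points a few meters apart - a plain arithmetic mean of the two fixes is fine at this scale. Writing the result into the stitched image's EXIF could be scripted with the exiftool command-line program; here's a rough sketch (the function names are my own, and this assumes exiftool is installed):

```python
import subprocess

def average_position(fix_a, fix_b):
    """Midpoint of two nearby GPS fixes, each a (lat, lon) pair in
    decimal degrees. An arithmetic mean is fine for shots taken a few
    meters apart; a spherical mean would only matter at large scales."""
    return ((fix_a[0] + fix_b[0]) / 2.0, (fix_a[1] + fix_b[1]) / 2.0)

def write_gps_exif(path, lat, lon):
    """Stamp the averaged position into an image's EXIF GPS tags using
    the exiftool CLI. The Ref tags select the hemisphere, so the
    coordinate values themselves are written as absolute degrees."""
    subprocess.run(
        [
            "exiftool",
            f"-GPSLatitude={abs(lat)}",
            f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
            f"-GPSLongitude={abs(lon)}",
            f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
            path,
        ],
        check=True,
    )
```

So the batch flow would be: stitch each fisheye pair, call `average_position` on the pair's EXIF coordinates, and tag the stitched output with the result.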

To do this we would need a way to automate the PhotoOverlay tool (it does have a batch-process capability), but we also need to be able to define the rotation angle so that the spheres line up properly on the Earth.

Also, many of the spheres I create with the PhotoOverlay tool are way too large - i.e., the FOV Near setting is too big.

Now, the Immersive Hotdog is limited in Google Earth because, as of right now, we are restricted to using multiple sphere objects, which ends up looking more like a pea pod than a hotdog.