robots and cartography


Category Archives: Moon


During my internship at NASA in 2009, I helped produce an elevation model and image mosaic from Orbit 33 of Apollo 15. This mosaic was later burned into Google Earth's Moon mode. Earlier this week, it appears people found an image of a man walking in the region of the Moon I stitched together. Here are links to articles about this supposed extraterrestrial at The Nation, News.com.au, AOL.com, and Examiner.com. Thank you to LROC's Jeff Plescia for bringing this to my attention.

I quickly traced this section of the image mosaic back to AS15-M-1151, a metric camera image from Apollo 15 that was scanned into digital form sometime in 2008 by ASU. What is shown in Google Earth is a reprojection of the image onto a DEM created by Ames Stereo Pipeline using said image. The whole strip of images was then mosaicked together using ASP's geoblend utility. So this man could, in principle, have been created by an error in ASP's projection code. Below is the man in the moon from the raw, unprojected form of the Apollo Metric image. The little man is perfectly intact, so the projection code is off the hook.

Unfortunately, if you look at the next image in the film reel, AS15-M-1152, the man is gone. This is also true for 1153 and 1154. After that, the Apollo command module was no longer overlooking the area. The metric camera takes a picture roughly every 30 seconds, so maybe the guy (who must be something like 100 meters tall) just high-tailed it.

These images come from film that had been in storage for 40 years. They were lightly dusted and then scanned. Unfortunately, a lot of lint and hair still made it into the scans that we used for the mosaic. So much so that Ara Nefian at IRG developed the Bayes EM correlator for ASP to work around those artifacts. Thus, this little man in the image was very likely some hair or dust on the film. In fact, if you search around the little man in image 1151 (in the top left corner of the image, just off an extension of a ray from the big crater) you'll find a few more pieces of lint. Those lint pieces are also visible in Google Moon. However, it is still pretty awesome to find out that others have developed a conspiracy theory around your own work. Hopefully it won't turn into weird house calls like it did for friends of mine over the whole hidden-nuclear-base-on-Mars idea.

Update: You can find the Bad Astronomer's own debunking of this man here. The cool bit is that he tried to find the artifact in LRO and Lunar Orbiter imagery. He then links to a forum where someone identifies that the dust was actually in the optics of the camera or on the scanner bed, which is why the man and other pieces of lint can be seen at roughly the same pixel location in consecutive frames.

Since I last wrote, we've hired a new full-time employee. His name is Scott, and we assigned him the task of learning ASP and LROC. The first utilities he'll be contributing back to ASP are lronacjitreg and lronac2mosaic.py. The first is very similar to the similarly named ISIS utility designed for HiRISE. The second, lronac2mosaic.py, uses the first tool and can take LRO-NAC EDR imagery and make a non-projected image mosaic. What lronac2mosaic.py does internally is 'noproj' and then 'handmos' the images together. There is an offset between the images due to model imperfections; finding the correct offset so the combined images are seamless is the task of lronacjitreg. All of this is just a streamlined version of what I wrote in a past blog post.

Previously, users and our team only had the option of running all 4 stereo combinations of the 4 LRO-NAC input files through ASP and then gluing the results together afterwards. Now, with lronac2mosaic, we can feed whole LRO-NAC observations into ASP and receive the full DTM in one go, as sketched below. No messy mosaicking of 4 files.
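As a hedged sketch (file names and the exact output naming are illustrative; check lronac2mosaic.py --help for the real interface), the new workflow looks something like this:

    # Stitch each observation's LE/RE pair into one unprojected mosaic.
    # Internally this runs noproj and then handmos, with lronacjitreg
    # solving for the seam offset between the two CCDs.
    lronac2mosaic.py M123456789LE.IMG M123456789RE.IMG
    lronac2mosaic.py M987654321LE.IMG M987654321RE.IMG

    # Feed the two stitched observations to ASP as a single stereo pair
    # (renaming lronac2mosaic.py's outputs to obs1/obs2 for brevity).
    stereo obs1.mosaic.cub obs2.mosaic.cub run/run
    point2dem run/run-PC.tif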

I've used Scott's program successfully to recreate most DTMs that ASU has made via SOCET SET. Using my home server, I've been able to recreate 77 of their DTMs to date. We've been fixing bugs as we hit them. One of the biggest was in our search range guessing code: ASP had a bad habit of ignoring elevation maxima in the image because it thought those interest point measurements were noise. Now we should have a better track record of getting measurements for the entire image. The next release of ASP will have the fruits of that labor.

One of the major criticisms I'm expecting from the large dump of LRO-NAC DTMs we plan to deliver next year is the question of how well our DTMs are placed in comparison to LOLA. Another engineer we have on staff, Oleg, has just the solution for this. He has developed an iterative closest point (ICP) program called pc_align, which will be in the next release. It is built on top of ETH Zurich's Autonomous Systems Lab's libpointmatcher and has the ability to take DTMs and align them to other DTMs or LIDAR data. This enables us to create well-aligned products whose height values agree with LOLA to within tens of meters. Our rough testing shows us having a CE90 of 4 meters against LOLA after performing our corrections.
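A minimal sketch of what that looks like, assuming LOLA shot points in CSV form (option names follow the ASP manual; file names are illustrative):

    # Solve for the rigid transform that best aligns our DEM to LOLA,
    # then write out the aligned copy of the DEM's point cloud.
    pc_align --max-displacement 200 --datum D_MOON \
        --save-inv-transformed-reference-points    \
        run-DEM.tif lola_points.csv -o align/run

    # Re-grid the aligned cloud back into a DEM.
    point2dem align/run-trans_reference.tif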

We're not ready for the big production run yet. One problem still plaguing our process is that we can see the CCD boundaries in our output DTMs. We believe most of this problem is due to the fact that the angle between the lines of sight of the left and right CCDs changes with every observation. ISIS, however, has only one number programmed into it: the value provided by the FK. Scott is actively developing an automated system to determine this angle and to make a custom FK for every LRO-NAC observation. The second problem we're tracking is that areas of high slope are missing from our DEMs. This is partially because we didn't use Bayes EM for our test runs, but it also seems like our disparity filtering is overly aggressive or just wrong. We'll get on to that. That's all for now!

Ames Stereo Pipeline is currently a candidate for NASA's Software of the Year award. We needed a pretty graphic and decided that a cool and possibly realistic rendering of the Moon would fit the bill. This is a little more difficult than simple hill shading because the Moon has a specular component to it; hill shading can be interpreted as rendering only the diffuse component of the Phong model. An interesting example of the Moon's specular component is this picture taken with a Hasselblad during Apollo 17.

Below are videos of my results, where the Sun's projected coordinates sweep from 90 W longitude to 90 E. Both views show map-projected imagery, so these aren't true perspective shots. The difference between the videos is the observer's altitude above the surface: the lower the altitude, the more of the specular component can be seen.

I'm using nothing but Apollo Metric imagery for this example. The DEM source was our product for LMMP. The albedo source was the Apollo Metric albedo map that Dr. Ara Nefian produced, which will eventually be in NASA's PDS. The photometric model was the Lunar-Lambertian model as described in McEwen's paper. Shadows were not rendered because that seemed harder than I could accomplish in 24 hours.
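For reference, the Lunar-Lambertian reflectance, as I remember McEwen's formulation, blends the lunar (Lommel-Seeliger) and Lambertian terms with a phase-angle dependent weight L(alpha); mu_0 and mu are the cosines of the incidence and emission angles and A is the albedo:

    % Lunar-Lambert photometric model (after McEwen); L(alpha) -> 1 is
    % purely lunar, L(alpha) -> 0 is purely Lambertian (diffuse).
    R = A \left[ 2\,L(\alpha)\,\frac{\mu_0}{\mu_0 + \mu}
               + \bigl(1 - L(\alpha)\bigr)\,\mu_0 \right]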

Mark Rosiek from USGS Astrogeology expressed some doubt about my noproj recipe for LRO-NAC. This is completely reasonable, because if everything were perfect there would be no offset between the LE and RE CCDs after the noproj step. He requested 2 DEM samples of Tsiolkovsky Crater so he could compare against USGS work. I decided I'd try the same during free time throughout the day. Unfortunately, I couldn't find a trusted reference DEM to compare against. I can't find ASU's result for this area on their RDR webpage, and it is impossible to use the LMMP Portal. Can you even download the actual elevation values from LMMP? No, I don't want your color render or greyscale!

The next best idea is to simply difference the two DEMs I created and compare them. I didn't bundle adjust these cameras at all, so their placement relative to each other is off by some ~150 meters. After ICPing them together and then differencing them, I can produce an error map. The gradient, or shape, of the error can help clue me in to how badly my fiddling with the ideal camera went. Below is the result between the stereo pairs M143778723-M143785506 and M167363261-M167370048.
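With current ASP builds, that ICP-then-difference step can be sketched roughly as follows (file names are illustrative, and geodiff's interface may vary by release):

    # Align DEM B onto DEM A with ICP, then difference the two.
    pc_align --max-displacement 300 --save-transformed-source-points \
        demA.tif demB.tif -o icp/run
    point2dem icp/run-trans_source.tif
    geodiff --absolute demA.tif icp/run-trans_source-DEM.tif -o error_map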

You can definitely see the CCD boundary in this error map, and you can see it in the hillshade as well. It's about a 1 meter jump when looking at a single DEM. The error map seems to agree with this and shows a peak error of 4 meters at the CCD boundary location.

So what is the cause of these errors? Well, we have two sources: (1) the projection into the ideal camera model, and (2) bad spacecraft ephemeris. I'd argue that most of the difference between these two DEMs comes from the spacecraft ephemeris; that's the top priority in my book to correct. However, when I look at the disparity map for these stereo pairs, there is a definite jump at the CCD boundary. The cause of that would be an improper model of the angle between the LE and RE cameras in ISIS. This should be expected: they're currently not modeling the change in camera angles with respect to temperature. Also, when looking at the vertical disparity, one side is always brighter than the other, which suggests that I didn't quite have the correct values for input to handmos.

Trying to fix all of this now might be a mistake on my part. I know that ASU will eventually produce new SPICE data that contains corrections from LOLA and a USGS study. I also imagine that eventually the work of E. J. Speyerer et al. will be implemented in ISIS.

Earlier this year I found out that my first proposal got funded. I've had directed funding before thanks to NASA's cryosphere program, and I've been a Co-I on numerous other funded proposals with co-workers and friends at USGS. However, my LASER proposal to perform mass DEM production from LROC-NA imagery was something I wrote as PI, and it was openly competed. Now that it is accepted, I have two years to follow through, and I'd like to share the whole process here on the blog.

The first problem I'm going to write about has bugged me for a while: each LROC-NA observation is actually 2 files, which makes using ASP awkward. In the past I've been telling people to perform 4 runs with stereo, doing the combinations of LE-LE, LE-RE, RE-LE, and RE-RE. However, UofA had the same problem with HiRISE, which comes down in 20 different files. They worked out an ISIS process chain that would, among other things, noproj those files together to make a single observation. I don't know why such a step doesn't exist for LROC-NA, but today I'll show you what I've come up with.

If you try right now to noproj the RE cube file to the LE cube file, you'll find that the program errors out because an ideal camera model for LROC has not been defined. Definitions of ideal noproj cameras for all ISIS cameras live in $ISIS3DATA/base/applications/noprojInstruments003.pvl. Looking at that file, we see that there are 5 elements that can be defined for the ideal camera model: TransY, ItransS, TransX, ItransL, and DetectorSamples. DetectorSamples is easy; it's the width of the final output image. The Trans* variables are measured in millimeters and define a focal plane offset from the original camera model we are using. I plan to noproj with the LE file as the match file, so Trans* references a shift from the original position of the LE sensors. The Itrans* variables are the pixel equivalents of the millimeter measurements; ISIS uses them to run the math in the other direction. If the Trans* and Itrans* variables don't agree, funny errors will occur where the CCDs noproj correctly but the triangulation in ASP is completely bonkers. Relating the two is easy: just use the pixel pitch, which for LROC-NA is 7 micrometers per pixel.
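For example, the conversion is nothing more than a divide by the pixel pitch (the TransY value here is hypothetical; only the arithmetic matters):

    TRANS_Y=-2.548                           # hypothetical TransY, in mm
    echo "scale=3; $TRANS_Y / 0.007" | bc    # ItransS in pixels => -364.000
    # Sign conventions depend on the sensor layout, so check both directions.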

So how do we decide what to set the TransY and TransX values to? If those values are left at zero, the LE image will be centered on the new image and the RE will be butted up beside it but falling off the edge of the new image. A good initial guess would be to set TransY to a shift of half the CCD width. A better idea, I thought, was to look at the FK kernel and identify the angle difference between the two cameras, then use the instantaneous field of view to convert that to a pixel offset between the two CCD origins. Use the pixel pitch to convert to millimeters, and dividing by two gives the shift we want. Below are the final noproj settings I used for LROC.
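In shape, the entry I appended to noprojInstruments003.pvl looks like the following; the group name and every number here are placeholders, to be derived from the FK and IFOV as described above:

    cat >> $ISIS3DATA/base/applications/noprojInstruments003.pvl <<'EOF'
      Group = "LRO/NACL"             # hypothetical instrument id
        TransY  = -2.548             # placeholder, mm
        ItransS = -364.0             # placeholder, TransY / 0.007 mm per px
        TransX  = 0.0                # placeholder, mm
        ItransL = 0.0                # placeholder, px
        DetectorSamples = 10000      # placeholder output width
      End_Group
    EOF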

Alas, the LE and RE imagery does not perfectly align. In the HiRISE processing world we would use hijitreg to determine a mean pixel offset, but there is no generic form of hijitreg that we can use for LROC-NA. There is the coreg application, but in all my tests this program failed to find any correct match points between the images. I've tried two other methods successfully. First, I manually measured the offset using Qtie. Warning: sending these images to Qtie requires twiddling with how serial numbers are made for ideal cameras. This is something I should document at some point, as it would allow bundle adjusting ideal cameras like fully formed HiRISE and LROC images. My other solution was the example correlate program in Vision Workbench. I did the following to make processing quick (5 minute run time).

That creates a disparity TIF file. The average of the valid pixels (those whose third channel is 1) can then be used to solve for the mean translation, and that translation can be applied during handmos using the outsample and outline options, as in the sketch that follows. Ideally this would all be one program like hijitreg, but that is for another time. Below is the final result.
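A hedged sketch of applying the measured translation (the offsets are hypothetical; handmos parameter names follow the ISIS documentation):

    MEAN_DX=364    # hypothetical mean horizontal disparity, px
    MEAN_DY=2      # hypothetical mean vertical disparity, px
    # Paste the RE image into the LE-based mosaic, shifted by the mean
    # translation measured from the disparity file.
    handmos FROM=re.noproj.cub MOSAIC=observation.cub \
        OUTSAMPLE=$((1 + MEAN_DX)) OUTLINE=$((1 + MEAN_DY)) OUTBAND=1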

Hijitreg actually does more than just solve for the translation between CCDs on HiRISE: it correlates a line of pixels between the CCDs in the hope of determining the jitter of the spacecraft. I can do the same!

From the above plots, it doesn't look like there is much jitter, or else the data is not in a form where I could detect it. The only interesting bit here is that the offset in the vertical direction changes with line number. I think this might be disparity due to terrain. The imagery I used for my testing was M123514622 and M123521405, which happened to be focused on the wall of Slipher crater. The NA cameras are angled 0.106 degrees apart in the vertical direction. Ideal stereo geometry would be 30 or even 15 degrees, but 0.106 degrees should still produce some disparity given the massive elevation change into the crater. I wouldn't recommend using this for 3D reconstruction, but it would explain the vertical offset signal. The horizontal signal has less amplitude but does seem like it might be seeing spacecraft jitter. However, it looks aliased, making it impossible to determine the frequency.

Anyways, making a noproj version of the LROC-NA observation is a huge boon for processing with Ames Stereo Pipeline. Using affine epipolar alignment, no map projection, and simple parabola subpixel, it is possible to make a beautiful DEM/DTM in 2.5 hours. 70% of that time was spent in triangulation, because ISIS is single threaded; that part would be faster with the application parallel_stereo (renamed from stereo_mpi in ASP 2.2.2).
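Concretely, those settings amount to a stereo.default along these lines (real ASP option names; everything else left at its default):

    cat > stereo.default <<'EOF'
    alignment-method affineepipolar    # affine epipolar alignment
    subpixel-mode 1                    # simple parabola subpixel refinement
    EOF
    # Inputs are the two noproj'd observations, not map projected.
    stereo obs1.mosaic.cub obs2.mosaic.cub run/run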

Ross Beyer, my co-worker at NASA, gave a Google Tech Talk earlier this month. It is now available online and you can watch it above. I don't want to give any spoilers, but a certain stereo something is mentioned in this talk.

I've previously written a tutorial on performing a bundle adjustment, jigsaw in ISIS lingo, using CTX. Let's try something a little more complex with LRO-NAC. If you are not already aware, LRO-NAC is actually two separate optical assemblies that are joined by the spacecraft bus. Ideally this spacecraft hardware would be solid as a rock and wouldn't bend. Unfortunately that's not the case, so what would have been a bundle adjustment between 2 cameras is now one between 4 in the case of LRO-NAC.

LRO-NAC Data Preparation

Picking a stereo pair has always been the most difficult part of the job for me. I honestly don’t know how to find stereo pairs. So instead I’m just going to recreate something ASU did in SOCET SET. If you are not aware, ASU has processed about 100 DEMs with LRO-NAC. I’m going to recreate their Moore F Crater DEM using M125713813 and M125720601. You can download the raw data via PDS.

I picked this stereo pair blindly, and it ended up being extremely difficult to register correctly. To help out, I added a fifth image to the bundle adjustment to bring another orbit's worth of spacecraft ephemeris into the measurements. That image was M110383422LE; I picked it because it overlapped 2 of my previous images and had the same lighting conditions.

Once you have downloaded all the images, prep the files in ISIS with the following:
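Roughly, the prep is the standard LRO-NAC ingestion chain; a hedged sketch (real ISIS program names, illustrative globs):

    parallel lronac2isis from={} to={.}.cub ::: *.IMG
    parallel spiceinit from={} ::: *.cub
    parallel lronaccal from={} to={.}.cal.cub ::: *.cub
    parallel lronacecho from={} to={.}.echo.cub ::: *.cal.cub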

* The ‘parallel’ command is GNU Parallel, a non-standard system utility that I love dearly.

Attaching a custom Elevation Source

In a previous article, I showed that map projecting against the WAC Global DTM gave much better results than working against the sparse LOLA data that ISIS map projects against by default. So I'm going to do that now: once we have a successful jigsaw result, we'll map-project. If you forgot how to attach the WAC Global DTM, here are the commands I used. My image pair was at 37.3N and 185E, so the tile WAC_GLD100_E300N2250_100M.IMG from this link was what I needed.
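Roughly (a hedged reconstruction; the 1737400 m offset and the parameter names are from memory, so double check them against the ISIS documentation):

    pds2isis from=WAC_GLD100_E300N2250_100M.IMG to=wac_dtm.cub
    # GLD100 stores heights relative to the 1737.4 km reference sphere;
    # convert to absolute radii before prepping the shape model.
    algebra from=wac_dtm.cub to=wac_radius.cub operator=unary A=1 C=1737400
    demprep from=wac_radius.cub to=wac_dem.cub
    # Attach the custom shape model to every image.
    parallel spiceinit from={} shape=user model=wac_dem.cub ::: *.echo.cub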

I had to spend a lot of time in Qnet for this stereo pair. About half of my automatic matches didn't work. (Possibly my search range was too small?) To correct this, I had to manually edit the output control network in Qnet. I filtered the control points down to the ones marked 'Ignored' and then manually aligned the measurements. You just need to get the measures close to the right spot by clicking on the same feature; afterwards, you can click the 'Register' button to get subpixel-accurate alignment.

Autoseed and AutoReg also have a bad habit of only matching LE images to LE, and RE images to RE. It is a good idea to run along the edge of an LE image and try to find common points that are visible in both RE images; these will be the few control points that have more than 2 control measures. I also had to do this between my 5th image and the 2 images it overlapped. This problem was likely because the overlap between these 3 images didn't meet the MinimumThickness requirement in Autoseed.def.
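For reference, the control network itself was seeded with what I presume is the standard ISIS chain, something like the following (def files and network names are illustrative):

    parallel footprintinit from={} ::: *.echo.cub
    ls *.echo.cub > cube.lis
    findimageoverlaps fromlist=cube.lis overlaplist=overlaps.lis
    autoseed fromlist=cube.lis overlaplist=overlaps.lis \
        deffile=autoseed.def onet=seeded.net networkid=MooreF \
        pointid='moore????' description="Moore F crater"
    pointreg fromlist=cube.lis cnet=seeded.net onet=registered.net \
        deffile=autoreg.def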

Normally at this point I would recommend that you start picking out ground control points. I tried that. Then I went a little insane. I'm better now. Instead, we're going to do crazy stuff in the next section.

The reason we're not using GCPs is that the only source to align against is the WAC image mosaic. I tried this, and I could easily get a bundle adjustment result that aligned well with the image mosaic. However, the output DEM would always show a massive error between my height results and those of LOLA and the WAC Global DTM. After adding the 5th image and clamping the trust of my GCPs to 50 meters in radius, I got a result that matched LOLA and the WAC DTM in an okay manner but didn't really align at all with the WAC image mosaic. Things made sense once I compared the image mosaic to a hillshade of the WAC DTM.

WAC’s Image mosaic doesn’t agree with WAC’s Global DTM! Ghaaaa! Madness! I was so incredibly disappointed. I didn’t know what to do at this point, so I just dropped the ground control points.

Bundle Adjusting

Alrighty then, time to start bundle adjusting with jigsaw. On the first run, things will likely blow up because bad measurements still exist in the control network. My solution was to first have jigsaw solve for only a few camera parameters and then check the output control network for the features with the highest error. In Qnet, you can filter for control measures with errors higher than a certain number of pixels. After fixing those measurements manually, I resaved the control network as my input network and redid the jigsaw.

Each time I redid jigsaw, I let it solve for more parameters. My pattern was roughly the following:
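(These are real jigsaw options, but the exact sequence shown is a hedged guess at the kind of progression meant.)

    # Pass 1: pointing angles only, radius fixed.
    jigsaw fromlist=cube.lis cnet=edited1.net onet=jig1.net update=no \
        radius=no camsolve=angles
    # Pass 2: add twist.
    jigsaw fromlist=cube.lis cnet=edited2.net onet=jig2.net update=no \
        radius=no camsolve=angles twist=yes
    # Pass 3: solve for radius, angular velocities, spacecraft position.
    jigsaw fromlist=cube.lis cnet=edited3.net onet=jig3.net update=no \
        radius=yes camsolve=velocities twist=yes spsolve=position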

After iterating many times, I eventually produced a control network and jigsaw result with a sigma0 of 0.34 pixels. However, if we write this camera solution to file and then make a DEM, we see worse alignment with LOLA than if we hadn't bundle adjusted at all.

So how can we get a good bundle adjustment result that aligns well with LOLA? One idea would be to pick GCPs directly against LOLA using their gridded data product. I know folks at the USGS who have done exactly this. However, I don't see how they can determine correct alignment, so I'm going to nix that idea. Another idea would be to have jigsaw not solve for radius (radius=no) and instead use only the radius values provided by LOLA. This keeps the problem in the ballpark, but the resulting jigsaw solution will have a sigma0 around 15 pixels! Not solving for radius essentially ties our hands behind our back in the case of LRO-NAC, where our radius source is so incredibly low-res.

What we need instead is to constrain, or rubber-band, our control points to have radius values similar to LOLA's. But how do we determine the correct lat/lon at which to sample the LOLA radius for each control point? My solution was to not define that. I did two things: (1) I changed all my control points from 'free' to 'constrained' with sigmas of 200, 200, 50. This can be performed with cneteditor. (2) I then wrote and ran the following bash script.
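(A hedged reconstruction of the script: each pass runs jigsaw, then cnetnewradii re-samples every control point's radius from the LOLA DEM at the point's newly adjusted lat/lon. cnetnewradii's exact parameter names are from memory, so verify them against your ISIS version.)

    #!/bin/bash
    cnet=constrained.net
    for i in $(seq 1 10); do
        # Bundle adjust with the constrained control points.
        jigsaw fromlist=cube.lis cnet=$cnet onet=jig_$i.net update=no \
            radius=yes twist=yes camsolve=angles spsolve=position
        # Pull new radii from LOLA at the adjusted control point locations.
        cnetnewradii cnet=jig_$i.net onet=radii_$i.net dem=lola_dem.cub
        cnet=radii_$i.net
    done
    # Once converged, rerun jigsaw with update=yes to write the new
    # ephemeris to the cubes.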

Your mind should be blown at this point: I've essentially written an iterative closest point algorithm using the ISIS utilities jigsaw and cnetnewradii. This process only works because we have a clean control network that we vetted with the previous jigsaw runs. It also works because our LOLA data here is not flat and has plenty of texture, and because LRO-NAC started close to the correct solution. Images with horrible attitude data, like Apollo's, would just blow up under this technique.

On each iteration, the sigma0 shouldn't change, or if it does, the change should be extremely small. That's because a translation or scale change of the control points and camera locations has no effect on the reprojection error. What should shrink on each iteration is the standard deviation of the radius correction for each control point, as listed in bundleout.txt. The sparse 3D model of the problem in jigsaw is slowly fitting itself to the LOLA DEM, constrained only by the camera geometry. The standard deviation of the radius correction will never go to zero and will eventually plateau, because there will always be some error against LOLA: large portions of its measurements are interpolations between orbits.

Disclaimer: I’ve only tried this technique for this LRO-NAC stereo pair. I don’t know if it will work for others. However this was the only technique I could come up with that would produce results that best aligned with LOLA. I couldn’t get anything good to come out of using WAC Image Mosaic GCPs.

Map projecting for Stereo

Now that we have an updated ephemeris, we're ready to map project in order to speed up stereo. Be sure not to accidentally spiceinit your files, which would wipe your updated ephemeris. Notice that I'm map projecting everything at a low resolution; this is because I'm in a hurry for this demo. In a production environment for LRO-NAC, we would likely map project at 1 MPP or 0.5 MPP. This step and ASP's triangulation are usually the longest parts of this entire process.
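A hedged sketch of this step (cam2map parameter names are real; the resolution and file names are illustrative):

    # Project the first image, then force the rest onto the same grid.
    cam2map from=M125713813LE.echo.cub to=M125713813LE.map.cub \
        pixres=mpp resolution=5
    parallel cam2map from={} to={.}.map.cub map=M125713813LE.map.cub \
        matchmap=yes ::: M125713813RE.echo.cub M125720601*.echo.cub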

Running ASP

Nothing special here; I'm just processing the LE-LE, RE-RE, and LE-RE combinations. The RE-LE stereo pair wouldn't resolve anything at 5 m/px. Afterwards, I gdalbuildvrt the outputs together. The massive seams separating the individual DEMs that make up this stereo pair may concern you, but that effect would diminish had I map projected my inputs at a higher resolution. The reason is that correlation is much like convolution, in that we erode the input image by half a kernel size along the image border. That border becomes perceptibly smaller when the image is of higher resolution.
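A hedged sketch of those runs and the merge (file names are illustrative):

    stereo M125713813LE.map.cub M125720601LE.map.cub lele/run
    stereo M125713813RE.map.cub M125720601RE.map.cub rere/run
    stereo M125713813LE.map.cub M125720601RE.map.cub lere/run
    for d in lele rere lere; do point2dem $d/run-PC.tif; done
    # Merge the three DEMs into a single virtual mosaic.
    gdalbuildvrt moore_f_dem.vrt \
        lele/run-DEM.tif rere/run-DEM.tif lere/run-DEM.tif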

Conclusion and Comparison

Now the moment you've been waiting for: the point where I have to prove I'm not crazy. I have three versions of this stereo pair. There is the result produced by ASU using SOCET SET, and then there is ASP with and without bundle adjustment. The look of ASP's results will be a little shoddy because I map projected my input at 5 m/px, which prevented me from getting the most detail out. It did, however, save me a lot of processing time while I experimented with this method.

Here are the hillshades for those 3 datasets overlaid on a hillshade of the LRO-WAC Global DTM, which is 100 m/px. There's not much to tell, but you can at least see that everyone seems to have decent horizontal alignment.

Here are the difference maps between those 3 datasets and the LRO-WAC Global DTM.

Here are the difference maps between those 3 datasets and the LOLA gridded data product, which I resampled to 100 m/px. I think this shows quite plainly the improvement bundle adjustment made to the output DEM.

Just for giggles, I also did a difference between the ASU DEM and the bundle adjusted ASP DEM. They're pretty close to each other. There seems to be a simple tilt between the two, or perhaps the ASP DEM is just shifted to a slightly lower latitude. Which is more correct is anyone's guess for now.

Ideally, at this point I would compare the ASU DEM and the bundle adjusted ASP DEM against the raw LOLA shot points. But this is where my curiosity ran out and where I move on to the next personal project. Maybe someone else would be interested in a more rigorous review.