Large-area mapping at high
resolution underwater continues to be limited by sensor-level environmental
constraints and by the mismatch between available navigation and sensor accuracy.
In this paper, advances are presented that exploit aspects of the sensing
modality, as well as consistency and redundancy within local sensor measurements,
to build high-resolution optical and acoustic maps that form a consistent
representation of the environment. This work is presented in the context
of real-world data acquired using autonomous underwater vehicles (AUVs)
and remotely operated vehicles (ROVs) working in diverse applications including
shallow water coral reef surveys with the Seabed AUV, a forensic survey
of the RMS Titanic in the North Atlantic at a depth of 4100 m using the
Hercules ROV, and a survey of the TAG hydrothermal vent area in the mid-Atlantic
at a depth of 3600 m using the Jason II ROV. Specifically, the focus is
on the related problems of structure from motion from underwater optical
imagery, assuming pose-instrumented, calibrated cameras. General wide-baseline
solutions are presented for these problems based on the extension of techniques
from the simultaneous localization and mapping (SLAM), photogrammetry, and
computer vision communities. It is also examined how such techniques
can be extended to the very different sensing modality and scale associated
with multi-beam bathymetric mapping. For both the optical and acoustic mapping
cases, it is shown how consistency within the maps can be used not only
to improve the global maps themselves, but also to refine navigation estimates.