
In commemoration of the completion of the Panama Canal Expansion Project, and in tribute to its upcoming official opening on June 26, we present a series of before-and-after satellite photos highlighting the Expansion Project and showcasing this engineering marvel.

Work on the Panama Canal Expansion took nearly 9 years to complete, starting in September 2007, at a cost of US$5.2 billion. By adding a third set of locks on both the Pacific and Caribbean sides of the canal, dredging the existing navigation channel, adding a new approach channel on the Caribbean side and a new 6.1 km access channel on the Pacific side, and raising the Gatun Lake maximum operating level, the Expansion doubles the capacity of the Canal and significantly increases the size of vessels that can transit the Canal.

Below we present before and after satellite images of the newly expanded Canal, provide an overview of the Expansion Project, show a rare nearly-cloudless image from Landsat-5, and even include one of the earliest Landsat images of the Panama Canal acquired by Landsat-1 on March 18, 1973.

Satellite views of the Pacific Ocean entrance to the Panama Canal, before (left; Landsat-7 on November 20, 2002) and after (right; Landsat-8 on June 11, 2016) the Expansion Project. Note the addition of the third set of locks, the three sets of water reutilization basins immediately adjacent to the new locks, and the new access channel that now bypasses Miraflores Lake.

Satellite views of the Caribbean Sea entrance to the Panama Canal, before (left; Landsat-7 on May 28, 2002) and after (right; Landsat-8 on February 20, 2016) the Expansion Project. Note the addition of the third set of locks, the three sets of water reutilization basins immediately adjacent to the new locks, and the new approach channel.

Do you work with HICO imagery? Are you planning a project using HICO? Or perhaps you’re just interested in exploring where HICO will be acquiring imagery in the coming days?

If so, be sure to check out the ISS Orbit tool on the HICO website at Oregon State University. This tool allows you to interactively visualize HICO ground track locations using Google Earth.

The tool shows predicted HICO ground tracks in selected 1- or 3-day intervals up to six months in the future. However, even though orbital files are updated regularly, because of uncertainties in future ISS orbit specifics, the prediction is most accurate 2-3 days into the future and declines thereafter. So be cautious when planning fieldwork or image acquisitions for any extended time period.

For more information on ISS orbits characteristics, visit the NASA Space Station Orbit tutorial.

The ground tracks are displayed only for local daylight hours, and illustrate the nominal ground track (shown in teal above) as well as the full width available using HICO’s pointing capabilities (shown in grey above). Users have the option of also displaying the place names and locations of scheduled target areas for both ascending and descending orbits. Additionally, as the zoom level is increased, yellow dots appear in the visualization indicating the predicted time and date the ISS will pass over that location.

The HICO ISS Orbit tool requires the Google Earth plugin, which is available in Chrome, Firefox and IE (note that IE users may need to add the oregonstate.edu website to Compatibility View in the tool settings).

Let’s look at an example. Say you’re interested in exploring when HICO will be available to acquire imagery of Melbourne Harbor from April 5-11. Stepping through the ISS orbits for those dates with the tool reveals that Melbourne Harbor can be acquired on April 5 @ 22:26 and 5:45 GMT, April 6 @ 4:56 GMT, and April 9 @ 4:05 GMT.

ISS Orbit tool: HICO – Melbourne Harbor 5-April-2014

ISS Orbit tool: HICO – Melbourne Harbor 6-April-2014

ISS Orbit tool: HICO – Melbourne Harbor 9-April-2014

Now let’s extend this example to see if Hyperion data is also available for Melbourne Harbor for the same dates. To do so, you will need to utilize COVE, a similar tool (best in Chrome or Firefox) with robust capabilities for visualizing ground tracks of numerous Earth observing satellites (but unfortunately not HICO or any other instruments on the ISS). Visit our earlier post for an overview of COVE’s capabilities.

Using COVE, it can be seen that Hyperion data is available for acquisition of Melbourne Harbor on April 9 @ 23:16 GMT. This closely coincident acquisition opportunity might provide some interesting data for comparing hyperspectral analysis techniques using HICO and Hyperion.

COVE tool: Hyperion – Melbourne Harbor 9-April-2014

So be sure to check out both the COVE and HICO ISS Orbit tools when planning your next mission.

About HICO (http://hico.coas.oregonstate.edu/): “The Hyperspectral Imager for the Coastal Ocean (HICO™) is an imaging spectrometer based on the PHILLS airborne imaging spectrometers. HICO is the first spaceborne imaging spectrometer designed to sample the coastal ocean. HICO samples selected coastal regions at 90 m with full spectral coverage (380 to 960 nm sampled at 5.7 nm) and a very high signal-to-noise ratio to resolve the complexity of the coastal ocean. HICO demonstrates coastal products including water clarity, bottom types, bathymetry and on-shore vegetation maps. Each year HICO collects approximately 2000 scenes from around the world. The current focus is on providing HICO data for scientific research on coastal zones and other regions around the world. To that end we have developed this website and we will make data available to registered HICO Data Users who wish to work with us as a team to exploit these data.”
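The spectral sampling quoted above (380 to 960 nm at 5.7 nm intervals) implies a particular set of band centers. A minimal sketch of enumerating them from those numbers alone; actual HICO channel definitions in delivered products may differ:

```python
# Band centers implied by the HICO specs quoted above:
# 380-960 nm sampled at 5.7 nm intervals (the real product's
# channel table may differ from this idealized enumeration).
start_nm, end_nm, step_nm = 380.0, 960.0, 5.7

centers = []
wl = start_nm
while wl <= end_nm:
    centers.append(round(wl, 1))
    wl += step_nm

print(len(centers))              # band count implied by the sampling
print(centers[0], centers[-1])   # first and last band centers
```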

About Hyperion (http://eo1.gsfc.nasa.gov/ and http://eo1.usgs.gov/): “The Hyperion instrument provides a new class of Earth observation data for improved Earth surface characterization. The Hyperion provides a science grade instrument with quality calibration based on heritage from the LEWIS Hyperspectral Imaging Instrument (HSI). The Hyperion capabilities provide resolution of surface properties into hundreds of spectral bands versus the ten multispectral bands flown on traditional Landsat imaging missions. Through these spectral bands, complex land eco-systems can be imaged and accurately classified. The Hyperion provides a high resolution hyperspectral imager capable of resolving 220 spectral bands [from 400 to 2500 nm] with a 30-meter resolution. The instrument can image a 7.5 km by 100 km land area per image, and provide detailed spectral mapping across all 220 channels with high radiometric accuracy.”

The GFZ German Research Center for Geosciences and HySpeed Computing announce the first ever simulation of a coral reef scene using the EnMAP End-to-End Simulation tool. This synthetic, yet realistic, scene of French Frigate Shoals will be used to help test marine and coral reef related analysis capabilities of the forthcoming EnMAP hyperspectral satellite mission.

EeteS simulation of EnMAP scene for French Frigate Shoals, Hawaii

EnMAP (Environmental Mapping and Analysis Program) is a German hyperspectral satellite mission scheduled for launch in 2017. As part of the satellite’s development, the EnMAP End-to-End Simulation tool (EeteS) was created at GFZ to provide accurate simulation of the entire image generation, calibration and processing chain. EeteS is also being used to assist with overall system design, the optimization of fundamental instrument parameters, and the development and evaluation of data pre-processing and scientific-exploitation algorithms.

EeteS has previously been used to simulate various terrestrial scenes, such as agricultural and forest areas, but until now had not been used to generate a coral reef scene. Considering the economic and ecological importance of coral reef ecosystems, the ability to refine existing analysis tools and develop new algorithms prior to launch is a critical step toward efficiently implementing new reef remote sensing capabilities once EnMAP is operational.

The input imagery for the French Frigate Shoals simulation was derived from a mosaic of four AVIRIS flightlines, acquired in April 2000 as part of an airborne hyperspectral survey of the Northwestern Hawaiian Islands by NASA’s Jet Propulsion Laboratory. Selection of this study area was based in part on the availability of this data, and in part due to the size of the atoll, which more than adequately fills the full 30 km width of an EnMAP swath. In addition to flightline mosaicking, image pre-processing included atmospheric and geographic corrections, generating a land/cloud mask, and minimizing the impact of sunglint. The final AVIRIS mosaic was provided as a single integrated scene of at-surface reflectance.

For the EeteS simulation, the first step was to transform this AVIRIS mosaic into raw EnMAP data using a series of forward processing steps that model atmospheric conditions and account for spatial, spectral, and radiometric differences between the two sensors. The software then simulates the full EnMAP image processing chain, including onboard calibration, atmospheric correction and orthorectification modules to ultimately produce geocoded at-surface reflectance.
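The spectral portion of such a forward simulation amounts to resampling the fine source bands onto the broader target bands via each target band's spectral response function. A minimal sketch using Gaussian response functions; the band centers, widths, and spectrum below are illustrative placeholders, not the actual AVIRIS or EnMAP band definitions:

```python
import numpy as np

def gaussian_srf(wl, center, fwhm):
    # Gaussian spectral response function for one target band.
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def spectral_resample(src_wl, src_refl, dst_centers, dst_fwhm):
    # Weighted average of the fine source bands under each
    # target band's response function.
    out = np.empty(len(dst_centers))
    for i, c in enumerate(dst_centers):
        w = gaussian_srf(src_wl, c, dst_fwhm)
        out[i] = np.sum(w * src_refl) / np.sum(w)
    return out

# Illustrative numbers only (not the real AVIRIS/EnMAP band sets):
src_wl = np.arange(400.0, 1000.0, 10.0)       # "fine" source bands
src_refl = 0.2 + 0.1 * np.sin(src_wl / 50.0)  # a synthetic spectrum
dst_centers = np.arange(420.0, 980.0, 30.0)   # "coarse" target bands
resampled = spectral_resample(src_wl, src_refl, dst_centers, 30.0)
print(resampled.shape)
```

A full simulator like EeteS also models the spatial point-spread function, radiometric noise, and the inverse (calibration and atmospheric-correction) chain; this sketch covers only the band-resampling step.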

The resulting scene visually appears to be an exact replica of the original AVIRIS mosaic, but more importantly now emulates the spatial and spectral characteristics of the new EnMAP sensor. The next step is for researchers to explore how different hyperspectral algorithms can be used to derive valuable environmental information from this data.

Are you planning a remote sensing mission? Do you want to see the coverage of different satellite instruments before acquiring imagery or conducting your fieldwork? If so, then it’s definitely worth exploring the CEOS Visualization Environment (COVE) tool to visualize where and when different instruments are observing your study area.

COVE is “a browser-based system that leverages Google-Earth to display satellite sensor coverage areas and identify coincidence scene locations.” It is a collaborative project developed by the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation and the NASA CEOS System Engineering Office (SEO). The COVE tool is available online at: http://www.ceos-cove.org/

The COVE web portal includes a suite of three main tools for planning remote sensing missions: the core COVE tool, which provides visualizations of instrument coverage; the Rapid Acquisition Tool, which allows users to identify and predict when sensors will cover specified study areas; and the Mission and Instrument Browser, which provides descriptions of the hundreds of different missions and instruments included in the COVE database.

So what can COVE do for you? As an example, let’s assume you want to acquire coincident Landsat-8 and WorldView-2 imagery over the reefs of southwestern Puerto Rico later this year. You can use COVE to calculate when and where instrument coverage will overlap, and schedule your associated fieldwork and other mission planning accordingly. In this example, as shown here, Landsat-8 and WorldView-2 coverage will overlap southwestern Puerto Rico on Nov-25-2013.

Ground swaths for Landsat-8 (left) and WorldView-2 (right) on Nov-25-2013

Ground swath overlap for Landsat-8 and WorldView-2 on Nov-25-2013 in southwestern Puerto Rico

As you would expect, COVE includes many other options. Among these are the ability to incorporate different overlays, such as average annual and monthly cloud cover and precipitation, and to display up to four globes simultaneously. Additionally, results can be saved and exported to STK, KML, and as a 2D global image. Given its usefulness and versatility, COVE has definitely found a permanent home in our mission planning toolbox.

Global ground swath for Landsat-8 on Nov-25-2013

About CEOS: “Established in 1984, the Committee on Earth Observation Satellites (CEOS) coordinates civil space-borne observations of the Earth. Participating agencies strive to enhance international coordination and data exchange and to optimize societal benefit. Currently, 53 members and associate members made up of space agencies, national, and international organizations participate in CEOS planning and activities.”

Do you crave information on remote sensing, satellite technology, exploration and other innovative space-related topics? Are you interested in learning more about specific NASA missions, or just want to browse through NASA’s many images and visualizations? We’ve put together a list of mobile Apps that should help satisfy your craving:

NASA Science: A Journey of Discovery. “This NASA Science application brings you the latest information from NASA’s Science Missions, including the spacecraft, their instruments, the data, and what we are learning about the questions we seek to answer.” – iPad

NASA Visualization Explorer. “This is the NASA Visualization Explorer, the coolest way to get stories about advanced space-based research delivered right to your iPad. A direct connection to NASA’s extraordinary fleet of research spacecraft, this app presents cutting edge research stories in an engaging and exciting format.” – iPad

Earth-Now. “NASA’s Earth Now is an application that visualizes recent global climate data from Earth Science satellites, including surface air temperature, carbon dioxide, carbon monoxide, ozone, and water vapor as well as gravity and sea level variations. The resulting 3D model of the Earth may be rotated by a single finger stroke, and may also be zoomed in or out by pinching 2 fingers.” – iPhone/iPad/Android

Space Images. “NASA/JPL’s Space Images app offers a unique view of the sky via hundreds of images taken by spacecraft studying planets, stars, galaxies, weather on Earth and more. Save to your device as backgrounds or wallpaper and share them with friends on Facebook, Twitter and email as you scan through our extensive photo albums and rate your favorites.” – iPhone/iPad/Android

Spinoff 2012. “NASA Spinoff profiles the best examples of technology that have been transferred from NASA research and missions into commercial products. From life-saving satellite systems to hospital robots that care for patients and more, NASA technologies benefit society. There’s more space in your life than you think!” – iPad/online, published annually

Just the other day we were discussing earth observing missions and a question arose regarding availability of a graphic or video illustrating the current satellite orbits. So we set out to answer that question and quickly discovered NASA’s ‘Eyes on the Earth.’

Have you seen this application? If you’re interested in satellites and remote sensing it’s definitely worth your time to download and explore. Just visit http://eyes.jpl.nasa.gov/earth/ and hit the start button to install the desktop application.

NASA is known for producing stunning visualizations that transform complex science into meaningful informational graphics. Anyone who has attended a NASA presentation or visited their website can attest to the thought and innovation that go into creating these products. The ‘Eyes on the Earth’ application is no exception to this creativity, bringing together an array of exceptional visualizations into a single integrated package.

The opening visualization of ‘Eyes on the Earth’ shows the current real-time position and orbital paths of NASA’s Earth observing satellites. The display is interactive, so you can easily rotate the planet into any desired position. There are also options to increase the speed of the visualization (bottom of screen), switch to full screen (lower right), toggle on/off city names and topography (upper right drop down menu), and adjust between realistic day/night lighting versus full illumination (lower right). The entire visualization can also be viewed in anaglyph 3D, assuming you have a pair of appropriate 3D glasses.

Want to learn more about a particular mission or specific type of observation? The application allows you to select from a number of pre-defined visualizations that portray different ‘vital signs of the planet’, including global temperature, carbon dioxide, carbon monoxide, sea level, and ozone. The different options can be accessed through buttons at the top of the interface or by selecting an individual satellite from within the central display. Users can then adjust the date range as well as type of data used for generating the display (daily versus three day averaging). With all these options, there’s a wealth of information available at your fingertips.

As a bonus, the NASA Eyes Visualization also includes ‘Eyes on the Solar System’ and ‘Eyes on Exoplanets’, which are equally intriguing to explore.

So sit back, set the application to full screen, and enjoy the experience.

Lidar (light detection and ranging), also commonly referred to as LiDAR or LIDAR, is an “active” remote sensing technology, whereby laser pulses are used to illuminate a surface and the reflected return signals from these pulses are used to indicate the range (distance) to that surface. When combined with positional information and other data recorded by the airborne system, lidar produces a three-dimensional representation of the surface and the objects on that surface. Lidar technology can be utilized for terrestrial applications, e.g. topography, vegetation canopy height and infrastructure surveys, as well as aquatic applications, e.g. bathymetry and coastal geomorphology.
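The range computation the paragraph describes is simple two-way travel timing: the pulse travels to the surface and back, so the range is R = c·t/2. A minimal sketch (for bathymetric lidar, the speed of light in water is reduced by the refractive index, roughly 1.33):

```python
# Range from a lidar pulse's two-way travel time: R = c * t / 2.
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s (close enough for air)

def lidar_range(round_trip_s, refractive_index=1.0):
    # Divide by 2 for the out-and-back path; divide by the medium's
    # refractive index (about 1.33 for water in bathymetric lidar).
    return (C_VACUUM / refractive_index) * round_trip_s / 2.0

# A return arriving 10 microseconds after the pulse left:
print(lidar_range(10e-6))          # ~1499 m in air
print(lidar_range(10e-6, 1.33))    # shorter true range through water
```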

Below is an overview of archives that contain lidar data products and resources:

CLICK (Center for LIDAR Information Coordination and Knowledge) provides links to different publicly available USGS lidar resources, including EAARL, the National Elevation Dataset and EarthExplorer. The CLICK website also hosts a searchable database of lidar publications and an extensive list of links to relevant websites for companies and academic institutions using lidar data in their work.

EAARL (Experimental Advanced Airborne Research Lidar) is an airborne sensor system that has the capacity to seamlessly measure both submerged bathymetric surfaces and adjacent terrestrial topography. By selecting the “Data” tab on the EAARL website, and then following links to specific surveys, users can view acquisition areas using Google Maps and access data as ASCII xyz files, GeoTIFFs and LAS files (a standardized lidar data exchange format).

NED (National Elevation Dataset) is the USGS seamless elevation data product for the United States and its territorial islands. NED is compiled using the best available data for any given area; the highest-resolution and most accurate source data are derived from lidar and digital photogrammetry. NED data are available through the National Map Viewer in a variety of formats, including ArcGRID, GeoTIFF, BIL and GridFloat. However, to access the actual lidar data, and not just the resulting integrated products, users need to visit EarthExplorer.

EarthExplorer is a consolidated data discovery portal for the USGS data archives, which includes airborne and satellite imagery, as well as various derived image products. EarthExplorer allows users to search by geographic area, date range, feature class and data type, and in most cases instantly download selected data. To access lidar data, which are provided as LAS files, simply select the lidar checkbox under the Data Sets tab as part of your search criteria.

JALBTCX (Joint Airborne Lidar Bathymetry Technical Center of Expertise) performs data collection and processing for the U.S. Army Corps of Engineers, the U.S. Naval Meteorology and Oceanography Command and NOAA. The JALBTCX website includes a list of relevant lidar publications, a description of the National Coastal Mapping Program, and a link to data access via NOAA’s Digital Coast.

Digital Coast is a service provided by NOAA’s Coastal Services Center that integrates coastal data accessibility with software tools, technology training and success stories. Of the 950 data layers currently listed in the Digital Coast archive, lidar data represents nearly half of the available products. You can search for lidar data using the Data Access Viewer with “elevation” selected as the data type, or by following the “Data” tab on the Digital Coast home page and entering “lidar” as the search criteria. The data are available in a variety of formats, including ASCII xyz, LAS, LAZ, GeoTIFF and ASCII Grid, among others.

NCALM (National Center for Airborne Laser Mapping) is a multi-university partnership funded by the U.S. National Science Foundation, whose mission is to provide community access to high quality lidar data and support scientific research in airborne laser mapping. Data is accessible through the OpenTopography portal, either in KML format for display in Google Earth, as pre-processed DEM products, or as lidar point clouds in ASCII and LAS formats.

Lidar can be useful on its own, e.g. topography and bathymetry, and can also be merged with other remote sensing data, such as multispectral and hyperspectral imagery, to provide valuable three-dimensional information as input for further analysis. For example, lidar derived bathymetry can be used as input to improve hyperspectral models of submerged environments in the coastal zone. There has also been more widespread use of full-waveform lidar, which provides increased capacity to discriminate surface characteristics and ground features, as well as increased use of lidar return intensity, which can be used to generate a grayscale image of the surface.

What is readily apparent is that as the technology continues to improve, data acquisition becomes more cost effective, and data availability increases, lidar will play an important role in more and more remote sensing investigations.

An important aspect of developing new algorithms or analysis techniques is testing and validation. In remote sensing this is typically performed using an image whose characteristics are known, e.g. from field measurements or other expert on-the-ground knowledge. However, obtaining or creating such a dataset can be challenging. As an alternative, many researchers have turned to synthetic data to address specific validation needs.

So what are the challenges behind using “real” data for validation? Let’s consider some of the common questions addressed through remote sensing, such as classifying images into categories describing the scene (e.g. forest, water, land, buildings, etc…) or identifying the presence of particular objects or materials (e.g. oil spill, active fire areas, coastal algae blooms, etc…). To validate these types of analyses, one needs to know how much of these materials is present in the given scene and where. While this can sometimes be discerned through experience and familiarity with the study area, in most cases it requires physically visiting the field and collecting measurements or observations at representative points and areas throughout the scene. The resulting data are extremely useful for testing and validation, and recommended whenever feasible; however, conducting thorough field studies is not always practical, particularly when time and budget are limited.

Here we explore a few options that researchers use for creating synthetic images, from the simple to the complex:

A simple approach is to create an image with a grid of known values, or more specifically known spectra, where each cell in the grid represents a different material. Subsequent validation analysis can be used to confirm that a given methodology accurately categorizes each of the known materials. To add greater variability to this approach, different levels of noise can be added to the input spectra used to create the grid cells, or multiple spectra can be used to represent each of the materials. While seemingly simplistic, such grids can be useful for assessing fundamental algorithm performance.
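The grid approach above can be sketched in a few lines: assign one known spectrum per cell, then perturb each pixel with Gaussian noise. The material names, spectra, and noise level here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three made-up "materials", each a 50-band spectrum (illustrative only).
n_bands = 50
spectra = {
    "water": np.linspace(0.05, 0.01, n_bands),
    "sand":  np.linspace(0.20, 0.45, n_bands),
    "coral": 0.15 + 0.05 * np.sin(np.linspace(0, 6, n_bands)),
}

def synthetic_grid(spectra, cell_px=8, noise_sd=0.01):
    # One square cell per material, side by side, with Gaussian noise
    # added so each pixel is a slightly different realization.
    cells = []
    for s in spectra.values():
        cell = np.tile(s, (cell_px, cell_px, 1))  # (cell_px, cell_px, n_bands)
        cell = cell + rng.normal(0.0, noise_sd, cell.shape)
        cells.append(cell)
    return np.concatenate(cells, axis=1)  # (cell_px, n_cells*cell_px, n_bands)

img = synthetic_grid(spectra)
print(img.shape)
```

A classifier run against `img` can then be scored exactly, because the true material of every pixel is known by construction.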

The grid concept can be further extended to encompass significantly greater complexity, such as creating an image using a range of feasible parameter combinations. As an example from the field of coral reef remote sensing, a model can be used to simulate an image with various combinations of water depth, water properties, and habitat composition. If water depth is segmented into 10 discrete values, water properties are represented by 3 parameters, each with 5 discrete values, and habitat composition is depicted using just 3 categories (e.g. coral, algae and sand), this results in 3750 unique parameter combinations. Such an image can be used to test the ability of an algorithm to accurately retrieve each of these parameters under a variety of conditions.
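The combination count in the example above can be enumerated directly: 10 depths × 5³ water-property combinations × 3 habitat categories = 3750.

```python
from itertools import product

depths = range(10)              # 10 discrete water depths
water_params = [range(5)] * 3   # 3 water properties, 5 values each
habitats = ["coral", "algae", "sand"]

combos = list(product(depths, *water_params, habitats))
print(len(combos))  # 10 * 5**3 * 3 = 3750
```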

To add more realism, it is also feasible to utilize a real image as the basis for creating a synthetic image. This becomes particularly important when there is a need to incorporate more realistic spatial and spectral variability in the analysis. From the field of spectral unmixing, for example, an endmember abundance map derived from a real image can be used to create a new image with a different set of endmembers. This maintains the spatial relationships present in the real image, while at the same time allowing flexibility in the spectral composition. The result is a synthetic image that can be used to test endmember extraction, spectral unmixing and other image classification techniques.
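Under the standard linear mixing model, the synthetic image described above is just the abundance map multiplied by the new endmember matrix. A minimal sketch with randomly generated stand-in data (a real workflow would use abundances unmixed from an actual image and measured endmember spectra):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an abundance map unmixed from a real image:
# shape (rows, cols, n_endmembers), fractions summing to 1 per pixel.
rows, cols, n_em, n_bands = 20, 20, 3, 60
raw = rng.random((rows, cols, n_em))
abundances = raw / raw.sum(axis=2, keepdims=True)

# A *different* set of endmember spectra (illustrative values):
new_endmembers = rng.random((n_em, n_bands))

# Linear mixing: each pixel is the abundance-weighted sum of spectra.
synthetic = abundances @ new_endmembers  # (rows, cols, n_bands)
print(synthetic.shape)
```

Because the abundances are preserved, the synthetic cube keeps the spatial structure of the source image while its spectral content is fully controlled.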

Another approach based on “real” imagery is the NIST Hyperspectral Image Projector (HIP), which is used to project realistic hyperspectral scenes for testing sensor performance. In other words, the HIP is used to generate and display synthetic images derived from real images. As with the above example, a real image is first decomposed into a set of representative endmember spectra and abundances. The HIP then uses a combination of spectral and spatial engines to project these same spectra and endmembers, thereby replicating the original scene. The intent here is not necessarily to use the synthetic data to validate image processing techniques, but rather to test sensor performance by differentiating environmental effects from sensor effects.

Even though it’s a powerful tool, keep in mind that synthetic data won’t solve all your validation needs. You still need to demonstrate that your algorithm works in the “real world”, so it’s best to also incorporate actual measured data in your analysis.

A gateway for the access and exchange of datasets, applications and knowledge.

A pathway for you to expand your impact and extend your community outreach.

A framework for the development and deployment of scientific applications.

A resource for obtaining and sharing geospatial datasets.

A mechanism for improved technology transfer.

A marketplace for scientific computing.

The initial HyPhoon release, coming soon in mid-2013, will focus on providing the community with free and open access to remote sensing datasets. This data will be available for the community to use in research projects, class assignments, algorithm development, application testing and validation, and in some cases also commercial applications. In other words, in the spirit of encouraging innovation, these datasets are offered as a community resource and open to your creativity. We look forward to seeing what you accomplish.

We’ll be announcing the official HyPhoon release here, so stay tuned to be the first to access the data as soon as it becomes available!

Our objective when developing these datasets has been to focus on quality rather than any predefined set of content requirements. Thus, dataset contents are variable. Many of the datasets include a combination of imagery, validation data, and example output. Some datasets include imagery of the same area acquired using different sensors, different resolutions, or different dates. And other datasets simply include unique image examples.

The datasets originate largely from the community itself, with some data coming from public-domain repositories and commercial image providers. We are also interested in hearing your thoughts on new datasets that would benefit the community. Contact us with your ideas, and if our review team approves the project we will work with you to add your data to the gateway.

Beyond datasets, HyPhoon will also soon include a marketplace for community members to access advanced algorithms and sell user-created applications. Are you a scientist with an innovative new algorithm? Are you a developer who can help transform research code into user applications? Are you working in the application domain and have ideas for algorithms that would benefit your work? Are you looking to reach a larger audience and expand your impact on the community? If so, we encourage you to get involved in our community.

Earlier this week, on a rainy night at the European Spaceport located along the tropical coast of French Guiana, the Vega launch vehicle successfully rocketed into space and completed its mission of deploying three new satellites into orbit.

This was just the second deployment of Vega, representing a momentous occasion for both the European Space Agency and Arianespace – the company operating the launch – and marking another significant step forward in the commercial transition of launch operations.

Also celebrating the Vega launch were the teams behind the three satellites deployed during the mission. These include:

Proba-V. From the European Space Agency, Proba-V was the primary payload of the Vega mission. The “V” stands for vegetation, and the satellite is designed as a follow-on mission to the vegetation imagers included on the French Spot-4 and -5 satellites. Proba-V contains a moderate-resolution four-band multispectral instrument capable of mapping complete global vegetation cover once every two days.

VNREDSat-1A. Representing the first Earth observing satellite from Vietnam, this is a high-resolution five-band imager (four multispectral bands and one panchromatic) designed for monitoring and managing natural resources, assessing the impacts of climate change, and improving response to natural disasters.

ESTCube-1. This represents the very first satellite from Estonia. ESTCube-1 is a CubeSat built primarily by students at the University of Tartu. Its main scientific objective is to deploy and test an electric solar wind sail, a novel method of space propulsion.

The European Spaceport, aka the Guiana Space Center, may at first seem unlikely sited in the equatorial rainforest of South America, but the location offers significant advantages. First and foremost, proximity to the equator makes the Spaceport ideal for launching satellites into geosynchronous orbit, and the higher rotational speed of the planet near the equator lends efficiency to the launch process (i.e., saving fuel and money). Second, the region is relatively unpopulated and not at risk from earthquakes or hurricanes, significantly reducing risk from unforeseen disasters. The European Spaceport also has a rich launch history extending back nearly 50 years: originally established by France in 1964, it has been used by the European Space Agency since the agency's founding in 1975.
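The equatorial advantage can be quantified: the eastward surface speed a launch site contributes to an eastbound rocket is ωR·cos(latitude). A quick sketch comparing the equator, Kourou (~5.2°N), and a higher-latitude site at roughly Cape Canaveral's latitude (~28.5°N):

```python
import math

EARTH_RADIUS_M = 6_378_137.0            # equatorial radius
SIDEREAL_DAY_S = 86_164.1               # one full rotation of the Earth
OMEGA = 2.0 * math.pi / SIDEREAL_DAY_S  # angular velocity, rad/s

def surface_speed(lat_deg):
    # Eastward speed the launch site "donates" to an eastbound rocket.
    return OMEGA * EARTH_RADIUS_M * math.cos(math.radians(lat_deg))

for name, lat in [("equator", 0.0),
                  ("Kourou (~5.2 N)", 5.2),
                  ("Cape Canaveral (~28.5 N)", 28.5)]:
    print(f"{name}: {surface_speed(lat):.0f} m/s")
```

The difference of roughly 55 m/s between Kourou and a 28.5° site translates directly into propellant saved on every eastbound launch.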

With all this talk lately about new satellites, it may also seem like space is starting to get crowded. It is! The issue isn’t necessarily all the new satellites being launched, but rather all the derelict space debris that remains in orbit. To address this issue, there has been significant international discussion lately to develop debris removal plans. While such an endeavor is certainly going to be costly and logistically difficult, space cleanup is a necessary step towards ensuring the integrity of current and future satellites.

But for now let’s celebrate the success of these latest satellite missions and make sure the data is put to good use.