Wednesday, August 21, 2013

PostGIS 2.x (latest release, 2.1) enables users to do fairly sophisticated raster processing directly in a database. For many applications, these data can stay in the database; it's the insight into spatial phenomena that comes out. Sometimes, however, you need to get file data (e.g. a GeoTIFF) out of PostGIS. It isn't immediately obvious how to do this efficiently, despite the number of helpful functions that serialize a raster field to Well-Known Binary (WKB) or other "flat" formats.

Background

In particular, I recently needed to create a web service that delivers PostGIS raster outputs as file data. The queries we needed to support were well suited to PostGIS, and sometimes one query would consume one or more others as subqueries. These and other considerations led me to implement the service layer in Python using either GeoDjango or GeoAlchemy. More on that later. Suffice it to say, I needed a robust and stable way to export file data from PostGIS and attach it to an HTTP response. I found at least six different ways of doing this; there may be more:

Export an ASCII Grid

This works great! Because an ASCII grid file (or "ESRI ASCII Grid" file, typically with the *.asc or *.grd extension) is just plain text, you can export it directly from the database. The GDAL driver name is "AAIGrid", which should be the second argument to ST_AsGDALRaster(). Be sure to remove the column header from your export (see image below). However, what you get is a file with no projection information, which you may then need to convert to another format. This can present problems for your workflow, especially if you're trying to automate the production of raster files, say, through a web API.
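As a sketch, the export query might be built like this; the table name ("rasters"), raster column ("rast"), and primary-key filter are illustrative assumptions, not part of the post:

```python
# Sketch of an AAIGrid export query. The table name ("rasters"), the
# raster column ("rast"), and the id filter are assumed for illustration.

def aaigrid_query(table='rasters', column='rast', pk=1):
    """Build a query that serializes one raster row as an ASCII grid."""
    return (
        "SELECT ST_AsGDALRaster(%s, 'AAIGrid') "
        "FROM %s WHERE id = %d;" % (column, table, pk)
    )

query = aaigrid_query()
```

Because the AAIGrid result is plain text, whatever the query returns can be written straight to an *.asc file.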

Connect Using the QGIS Desktop Client

There is a plug-in for QGIS that promises to let you load raster data from PostGIS directly into a QGIS workspace. I used the Plugins Manager ("Plugins" > "Fetch Python Plugins...") in QGIS to get this plug-in package. The first time I selected the "Load PostGIS Raster to QGIS" plug-in and tried to install it, I found that I couldn't write to the plug-ins directory (this with a relatively fresh installation of QGIS). After creating the python/plugins directory and setting myself as its owner, I was able to install the plug-in without any further trouble. Connecting to the database and viewing the available relations was also no trouble at all. One minor irritation is that you need to enter your password every time the plug-in interfaces with the database, which can be quite often: every time the list of available relations is refreshed, for instance.

You'll be doing this a lot.

There are a few options available for displaying raster data from the database: "Read table's vector representation," "Read one table as a raster," "Read one row as a raster," or "Read the dataset as a raster." It's not clear what the second and last choices do, but "Read one table as a raster" did not work for me on a table with one raster field and a couple of non-raster, non-geometry/geography fields; QGIS hung for a few seconds and then said it "Could not load PG..." Reading one row worked; however, you have to select the row by its primary key (or row number in a random selection; I'm not sure which it is returning). This may not be the easiest way to find a particular raster image in a table.

Using the COPY Statement in SQL

My colleague suggested this method, demonstrated in Python, which requires the psycopg2 and pygresql modules to be installed; easy enough with pip:

pip install psycopg2 pygresql

The basic idea is to use the COPY statement in SQL to export the raster to a hexadecimal file, then to convert that file to a binary file using xxd.

This needs to be done on the file system of the database server, which is where PostgreSQL will write.
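A minimal sketch of the round trip, assuming a table named "rasters" with a raster column "rast"; the COPY target path must be writable by the PostgreSQL server process, and `bytes.fromhex()` stands in for the `xxd -p -r` step:

```python
# Sketch of the COPY-to-hex workflow. The table/column names and the
# server-side output path are illustrative assumptions.

COPY_SQL = (
    "COPY (SELECT encode(ST_AsTIFF(rast), 'hex') FROM rasters WHERE id = 1) "
    "TO '/tmp/raster.hex';"  # written on the database server's file system
)

def hex_to_binary(hex_path, bin_path):
    """Convert the hex dump PostgreSQL wrote into a binary file,
    doing in Python what `xxd -p -r raster.hex > raster.tif` does."""
    with open(hex_path) as stream:
        hex_text = stream.read().strip()
    with open(bin_path, 'wb') as stream:
        stream.write(bytes.fromhex(hex_text))
```

The COPY statement runs on the server; the conversion step can then run wherever the hex file ends up.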

Using an Output Function and Serializing from a Byte Array

Despite the seeming complexity of this option (then again, compare it to the above), I think it is the most flexible approach. I'll provide two examples here, with code: using GeoDjango to execute a raw query and using GeoAlchemy2's object-relational model to execute the query. Finally, I'll show an example of writing the output to a file or to a Django HttpResponse() instance.

Using GeoDjango

First, some setup. We'll define a RasterQuery class to help with handling the details. While a new class isn't exactly an idiomatic example, I'm hoping it will succinctly illustrate the considerations involved in performing raw SQL queries with Django.
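A minimal sketch of such a class follows; it accepts any DB-API connection (with GeoDjango, pass django.db.connection), and the query text, method bodies, and file-naming scheme are illustrative assumptions:

```python
# Minimal sketch of the RasterQuery helper. The constructor accepts any
# DB-API connection (with GeoDjango, pass django.db.connection); the
# method bodies and file-naming scheme are illustrative assumptions.

class RasterQuery:
    """Wraps a raw SQL query whose rows contain GDAL-serialized rasters."""

    def __init__(self, sql, params=None, connection=None):
        self.sql = sql
        self.params = params or []
        self.connection = connection

    def fetch_all(self):
        """Return each result row's raster payload as a bytes object."""
        cursor = self.connection.cursor()
        cursor.execute(self.sql, self.params)
        return [bytes(row[0]) for row in cursor.fetchall()]

    def write_all(self, path_template='raster_%d.tif'):
        """Write each returned raster buffer to its own file."""
        paths = []
        for i, buf in enumerate(self.fetch_all()):
            path = path_template % i
            with open(path, 'wb') as stream:
                stream.write(buf)
            paths.append(path)
        return paths

# Contrived usage: grab the first raster buffer returned, e.g.
#   query = RasterQuery(
#       "SELECT ST_AsGDALRaster(rast, 'GTiff') FROM rasters",
#       connection=django.db.connection)
#   data = query.fetch_all()[0]
```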

Seem simple enough? To write to a file instead, see the write_all() method of the RasterQuery class. The query.fetch_all()[0] at the end is contrived. I'll show a better way of getting to a nested buffer in the next example.
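With GeoAlchemy2's ORM, the nested selection boils down to composing func.ST_AsGDALRaster(func.ST_Union(...)); as a dependency-free sketch, here is the SQL such a query would emit, with assumed table and column names:

```python
# Sketch of the nested raster query. With GeoAlchemy2 you would build
# this as func.ST_AsGDALRaster(func.ST_Union(...)) rather than raw SQL.
# Table/column names ("rasters", "rast") are illustrative assumptions.

def union_query(table='rasters', column='rast', driver='GTiff'):
    """Union every selected raster, then serialize the result via GDAL."""
    return (
        "SELECT ST_AsGDALRaster(ST_Union(%s), '%s') FROM %s;"
        % (column, driver, table)
    )
```

Because ST_Union collapses all selected rasters into one, the query returns a single row with a single buffer, which avoids digging through nested results.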

Here we see a better way of getting at a nested buffer. If we wanted all of the rasters that were returned (all of the buffers), we could call ST_Union on our final raster selection before passing it to ST_AsGDALRaster.
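Attaching the buffer to an HTTP response mostly comes down to two headers. This dependency-free sketch builds them, with the equivalent Django calls noted in the docstring; the filename and content type are assumptions:

```python
# Sketch of serving a raster buffer as a file download; the filename
# and content type are illustrative assumptions.

def raster_attachment_headers(buf, filename='raster.tif'):
    """Headers for serving `buf` as a downloadable GeoTIFF.

    With Django you would build the response directly instead:
        response = HttpResponse(buf, content_type='image/tiff')
        response['Content-Disposition'] = (
            'attachment; filename="%s"' % filename)
    """
    return {
        'Content-Type': 'image/tiff',
        'Content-Disposition': 'attachment; filename="%s"' % filename,
        'Content-Length': str(len(buf)),
    }
```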

After considering all my (apparent) options, I found this last technique, using the PostGIS raster output function(s) and writing the byte array to a file-attachment in an HTTP response, to be best suited for my application. I'd be interested in hearing about other techniques not described here.

Wednesday, July 10, 2013

Some attention was given to Google's new Landsat Time Viewer, part of Google Earth Engine. The tool offers users the ability to look at the Earth's surface as a time series from 1984 to 2012 using their own web browser and Google Earth Engine's massive data catalog and rendering capabilities. Some of the example areas-of-interest are very compelling: Amazon deforestation, the drying of the Aral Sea, and urban growth in Las Vegas, Nevada. Now, you can embed a viewer for any part of the Earth's surface on your own website!

Thursday, June 27, 2013

MichiganView recently collaborated with 5th grade teachers from Wixom Elementary School in Wixom, MI to implement a program designed to teach young students about remote sensing and wetland science. The program, which is part of the MichiganView project, was headed by Dr. Nancy French and Laura Bourgeau-Chavez. In keeping with project goals, the study focused on wetland mapping and highlighted the importance of detecting invasive species such as Phragmites australis.

Students learned about various remotely sensed data in the classroom before participating in hands-on data collection in the nearby Wixom Habitat Park. The field trip was led by three Wixom Elementary teachers (Cathy Russel, David Blatt, and David Walczyk) and two MTRI employees (Anthony Russel and Michael Battaglia). An additional team of classroom aides and parent volunteers also came along to provide assistance.

The students were able to compare what they saw from the air (remote sensing imagery) to what they saw on the ground, or, as their teachers called it, the bird's-eye and frog's-eye views, to better understand what they were observing in the field. The students were instructed on how to take GPS points, find the average height of the vegetation, determine the vegetation density, and take photos of the area. The kids then learned how to properly record data on specially designed field sheets, which included numerous multiple-choice questions to help them make decisions.

Overall, the project was a success, with the students collecting six points in total across coverage types ranging from emergent wetland to forest. This was an important first step toward our end goal of creating "citizen scientists" to enhance the Great Lakes wetland mapping project.

This study was completed on June 3rd, 2013 at the Wixom Habitat Park in Wixom, MI. The teachers at Wixom Elementary would like to thank the Walled Lake Schools Foundation for Excellence for providing a grant that allowed the Wixom teachers to purchase GPS units, cameras, rubber boots, and tape measures for the students to use in their data collection. Read more about MTRI's involvement in the MichiganView project here.

Wednesday, May 29, 2013

On May 30, 2013 the US Geological Survey will be assuming operation of the LDCM mission from the National Aeronautics and Space Administration. Operation includes collecting, archiving, processing, and distributing data products from Landsat 8, continuing the 40-year legacy of the Landsat Project. The ceremony will be held at the USGS EROS Center in Sioux Falls, South Dakota. Dignitaries from NASA and the USGS, as well as national and local elected officials, will be attending.

Representing AmericaView at the celebration will be Rebecca Dodge (TexasView), Mary O'Neill (South DakotaView), and Brent Yantis (LouisianaView). Sam Batzli (WisconsinView) has been selected as a member of the 30-member Social Media group that will be helping the USGS and NASA spread the word about the event via social media. Check out Sam's website to view his Tweets from the event. Ramesh Sivanpillai (WyomingView) will be updating his WyomingView Facebook and Google+ pages as more information about the ceremonies becomes available.

Landsat 8 Products and Availability

After the ceremonies, data from the Landsat 8 satellite will be available to all users. Each day, 400 or more scenes acquired by the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) will be archived at the USGS EROS Center, and will be processed to be consistent with current standard Landsat data products. Data will be ready to download within 24 hours of reception.

Thursday, April 4, 2013

As VermontView Director Jarlath O'Neil-Dunne mentioned in the March 29th AmericaView post, LDCM / Landsat 8 is in orbit and being prepped to acquire data, carrying on the record-setting Landsat legacy. Knowing that this blog is read by numerous early adopters, it seems apropos to help distribute far and wide a link to a sample of the data recently made available by the good folks at USGS Land Remote Sensing. The image (shown below) was acquired on March 18th, 2013 and is provided for those of us who'd like a first look at the sensor's 16-bit data. Check it out and spread the word. Feel free to comment and let us know what you think. We'll pass your thoughts along to the USGS. They'd appreciate the feedback. Alternatively, send them your thoughts directly from here.

NOTE: This sample image is considered engineering data, meaning it does not yet meet the exact specifications that will apply once the Landsat Data Continuity Mission (LDCM) is declared operational in May. More information regarding caveats is available at the USGS sample data post here.

Friday, March 29, 2013

Landsat 8 launched successfully and is orbiting the Earth, so it's time to get ready to work with all this cool data! I need to acknowledge that, at the time this blog post was written, Landsat 8 is not officially "Landsat 8"; it is called the "Landsat Data Continuity Mission (LDCM)." Once testing is complete (scheduled for May 30th), NASA will pass control to the USGS and we will have an "operational" satellite whose official name is "Landsat 8!"

From an end user's perspective, one of the most important things to keep in mind is that the band configuration for Landsat 8 (LDCM) is different from the Landsat TM and ETM+ missions, as Landsat 8 added two new bands. Want to make a natural color composite from Landsat 8? It will now be R-G-B = 4-3-2 (for Landsat 4, 5, and 7 it was 3-2-1). The USGS has a very helpful FAQ page listing the band numbers and wavelengths for all the Landsat missions. There is another handy USGS page that compares the band combinations of Landsat 8 to Landsat 7 (see graphic below).

Source: NASA
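Those shifted band numbers are easy to mix up, so a tiny lookup helps. The natural color values follow the discussion above, the color infrared values follow the same one-band shift, and the helper name is mine:

```python
# Common R-G-B band combinations for Landsat 7 vs. Landsat 8. The
# natural color values follow the post; the color infrared entry
# follows the same shift (NIR moves from band 4 to band 5).

RGB_BANDS = {
    'natural_color': {'landsat_7': (3, 2, 1), 'landsat_8': (4, 3, 2)},
    'color_infrared': {'landsat_7': (4, 3, 2), 'landsat_8': (5, 4, 3)},
}

def rgb_bands(composite, sensor):
    """Return the (R, G, B) band numbers for a composite and sensor."""
    return RGB_BANDS[composite][sensor]
```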

Landsat 8 will be 12-bit data, compared to the 8-bit data used for previous Landsat missions. Pixel values will now range from 0-4095 (4096 possible values) for Landsat 8, as compared to 0-255 (256 possible values) for the previous missions. This should make it easier to distinguish features that are spectrally similar and shadow penetration will be much improved.
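For on-screen display, those wider pixel values often get stretched back down to 8 bits. A minimal linear rescale, with no radiometric correction, might look like:

```python
# Linear rescale of a 12-bit pixel value (0-4095) to 8 bits (0-255),
# e.g. for display. No radiometric correction is applied; a real
# stretch would usually also clip to a percentile range first.

def to_8bit(value, max_in=4095, max_out=255):
    """Linearly map a pixel value from [0, max_in] to [0, max_out]."""
    return round(value * max_out / max_in)
```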

Because Landsat 8 is not yet officially operational you cannot download scenes from sites like GloVis just yet, but you can download a sample dataset to play around with courtesy of the USGS.

Tuesday, February 26, 2013

According to the American Society of Civil Engineers (ASCE), more than 26% of the nation's bridges in 2009 were classified as either structurally deficient or functionally obsolete. Two years later, structurally deficient or functionally obsolete bridges still made up close to 24% of the nation's total bridge infrastructure. A report by the Federal Highway Administration (FHWA) indicates that, given more time and funding to complete bridge inspections, the use of non-destructive evaluation (NDE) methods would increase among state and county transportation agencies (Highway Bridge Inspection: State-of-the-Practice Survey, 2001). NDE promises to improve the allocation of funding by strengthening the information those decisions are based on: better assessment of existing bridge conditions, increased safety for inspection crews, reduced traffic disruption, and more frequent, objective, and accurate bridge condition assessment.

As part of research funded by USDOT-RITA, the 3D Optical Bridge Evaluation System (3DOBS) was developed to quickly assess the condition of bridges while minimizing traffic disruptions and limiting inspection crews' exposure to traffic. The system is composed of a Digital Single Lens Reflex (DSLR) camera mounted on a truck, close range photogrammetry software (Agisoft PhotoScan Pro), and an automated spall detection algorithm. For close range photogrammetry to work, the photos need to be collected with at least 60% overlap. Early testing of the photogrammetry software showed that collecting imagery with greater overlap produced better results.

Prior to the collection of photos, the bridges had to be marked with reference points for Agisoft to set up a coordinate system and to create a DEM. These reference points were pieces of duct tape placed on the bridge deck in a grid pattern, at four-foot intervals in the transverse direction and ten-foot intervals in the longitudinal direction. Carrier-phase GPS points were collected with a Trimble GPS (with an accuracy of <1 m) at each of the four corners of the bridge deck, and at various other points on the deck, so that our data could be correctly spatially referenced.

For the collection of the photos, a standard consumer-grade Nikon D5000 DSLR with a resolution of 12.3 megapixels (MP) and a 27 mm focal length lens was used. In order to capture a full lane in one pass, the camera needed to be mounted 9 ft above the bridge deck. To achieve this height, a wooden vehicle mount was constructed to fit into the bed of a standard pickup truck. During field collections, a control board was programmed to trigger the camera shutter at a rate of one image per second. With the camera mounted, the truck was driven across the bridge deck at a speed of about 2 mph. This speed ensured that images were captured with the required 60% overlap between photos.
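As a sanity check on that overlap claim, here's the arithmetic; the 2 mph speed and one-frame-per-second rate are from the post, while the 12 ft along-track footprint is an assumed value for illustration:

```python
# Estimate along-track image overlap from vehicle speed, shutter
# interval, and image footprint. The 12 ft footprint is an assumed
# figure; the 2 mph speed and 1 s interval come from the post.

def overlap_fraction(speed_mph, interval_s, footprint_ft):
    """Fraction of each frame shared with the next one."""
    feet_per_second = speed_mph * 5280.0 / 3600.0
    advance_ft = feet_per_second * interval_s  # ground covered per shot
    return 1.0 - advance_ft / footprint_ft

# At 2 mph, one frame per second, and a 12 ft footprint, the overlap
# works out to roughly 76%, comfortably above the 60% requirement.
```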

After the photos were collected, they were processed in Agisoft PhotoScan Pro. This process was mostly automated: the software aligned the photos and generated a 3D model without any user input. After the model was generated, it was necessary to manually add "KeyPoints" to mark the location of each duct tape marker with its latitude and longitude coordinates. This allowed PhotoScan to set up a coordinate system, accurately reference the model, and create a Digital Elevation Model (DEM). The DEMs that were generated have a resolution of 5 mm in the x and y directions and a z resolution of 2 mm.

The spall detection algorithm was written in the Python programming language and uses ArcPy to interface with ArcGIS and take advantage of some of ESRI's available geospatial tools. The tool used to detect spalls, called Focal Statistics, analyzes each cell in the raster and calculates statistics based on a specified neighborhood of cells around it. Additional functionality was added so that the user can remove bridge joints by creating a shapefile that defines them. Spalls can also be filtered by their area; this lets the detected spalls conform to minimum size definitions and removes small artifacts in the DEM. The data processing is automated and only requires the user to set the working directory, the file names for the DEM and bridge joint shapefile, the focal statistics sensitivity, and the minimum spall size.
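The core of that approach can be sketched without ArcPy. This stand-in compares each DEM cell with the mean of its neighborhood, the way a Focal Statistics pass would; the window size and depth threshold are illustrative, and a real version would also group contiguous flagged cells to enforce the minimum spall area:

```python
# Stand-in for the ArcPy Focal Statistics step: flag DEM cells that sit
# below the mean elevation of their neighborhood by more than a
# threshold. Window size and threshold values are illustrative.

def detect_spalls(dem, window=3, drop=0.01):
    """Return the set of (row, col) cells flagged as potential spalls.

    `dem` is a list of rows of elevations; `drop` is how far (in the
    DEM's units) a cell must sit below its neighborhood mean. A real
    implementation would then group contiguous flagged cells and
    discard groups smaller than the minimum spall area.
    """
    rows, cols = len(dem), len(dem[0])
    reach = window // 2
    flagged = set()
    for i in range(rows):
        for j in range(cols):
            neighborhood = [
                dem[y][x]
                for y in range(max(0, i - reach), min(rows, i + reach + 1))
                for x in range(max(0, j - reach), min(cols, j + reach + 1))
            ]
            if dem[i][j] < sum(neighborhood) / len(neighborhood) - drop:
                flagged.add((i, j))
    return flagged
```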

Enhancements are currently being made to this system through a project sponsored by the Michigan Department of Transportation (MDOT). These enhancements include upgrading the camera so that the system will be able to operate at near-highway speeds (40 mph) and constructing a sturdier vehicle mount. The spall detection algorithm will also be improved. The current version simply looks for a change in elevation, so it also detects the edges of patches on the bridge and reports them as spalls. It will be changed to detect only negative changes, i.e., those that represent spalls on the bridge deck.

This work is supported as part of a larger program (Bridge Condition Assessment Using Remote Sensors) sponsored by the Commercial Remote Sensing and Spatial Information program of the Research and Innovative Technology Administration (RITA), U.S. Department of Transportation (USDOT), Cooperative Agreement # DTOS59-10-H-00001, with additional support provided by the Michigan Department of Transportation, the Michigan Tech Transportation Institute, the Michigan Tech Research Institute, and the Center for Automotive Research. The views, opinions, findings, and conclusions reflected in this paper are the responsibility of the authors only and do not represent the official policy or position of the USDOT/RITA, or any state or other entity. Further information regarding remote sensing technologies and the decision support system for bridge condition assessment, and about this project, can be found at <http://www.mtri.org/bridgecondition>.

Monday, February 11, 2013

I'm guessing that most people who work in the applied natural sciences would agree on the need for synoptic, multitemporal, objective, easily and freely accessible data of the Earth's surface.

Without argument, no Earth observing system has met that need as well as the Landsat Program. Providing the longest, most comprehensive record of Earth surface changes, Landsat is unprecedented.

The next generation of Landsat systems, the Landsat Data Continuity Mission (aka Landsat 8), is scheduled for launch today from Vandenberg AFB on the coast of south-central California. Members of the AmericaView consortium work daily with Landsat data through our 300+ academic, agency, non-profit, and industry partners. Below, in no particular order, is a sample of what Landsat means to AV and our partners, both to demonstrate the breadth of applications and to celebrate today's launch.

Landsat imagery extends human vision to see our Earth’s surface not just over previous years, but over previous decades - Dr. Jim Campbell, Virginia Tech / VirginiaView

Landsat: still the premiere moderate resolution terrestrial imaging program after 41 years - Dr. Tim Warner, West Virginia University / West VirginiaView

Landsat, the first and best satellite sensing system for mapping, monitoring and analysis of land and water resources over time and space (and my favorite system for the past 40 years) - Dr. Marvin Bauer, University of Minnesota / MinnesotaView

We could not have fulfilled our Legislative mandates to assess the quality of all lakes in the state without your Landsat remote sensing technologies. These data are being used for a wide variety of water quality trend detection, impairment evaluations and watershed management actions - Bruce Wilson, senior scientist, Minnesota Pollution Control Agency / MinnesotaView

From the early 1970s to the present, Landsat satellite imagery has been used to create four comprehensive land cover maps of Kansas – Landsat’s spectral capabilities, spatial resolution, and repeat coverage have made it an ideal resource for studying the Kansas landscape - Dr. Steve Egbert, University of Kansas / KansasView

The Landsat image archive stretches back over 40 years and covers the entire globe: nothing else even comes close - Kevin Dobbs, University of Kansas / KansasView

Creeping landcover changes, invisible from the ground, suddenly revealed in their full extent and proximity – shocking! - Dr. Rebecca Dodge, Midland State University / TexasView

Landsat: My magic carpet ride to see the wonders of Planet Earth - Teresa Howard, The University of Texas at Austin / TexasView

Graduate and undergraduate students on our campus use Landsat data in their research and training each year – to date nearly 400 UAF students directly benefited from free Landsat data - Dr. Anupma Prakash, University of Alaska Fairbanks / AlaskaView

Landsat has been instrumental in helping the state of Alabama monitor land use and associated impacts on its many natural resources - Dr. Luke Marzen, Auburn University / AlabamaView

With the frequent synoptic views of California agriculture provided by Landsat, we have deepened our understanding of the relationship of phenology and crop production across the state - Pia van Benthem, UC Davis / CaliforniaView

Landsat is the only source for historic time-series data for my study site - Dr. Teki Sankey, Idaho State University / IdahoView

All current coastal land loss work in Louisiana is Landsat TM based - it's the heart of the Coastwide Reference Monitoring System (CRMS) landscape level monitoring effort - Brent Yantis, University of Louisiana Lafayette / LouisianaView

Landsat imagery is simply the best available data for studying the impacts of pervasive flooding on agriculture in the Devils Lake Basin of North Dakota, and an ideal tool for teaching about remote sensing – it is the “go-to” data source for most students working on projects for my remote sensing courses - Dr. Brad Runquist, University of North Dakota / North DakotaView

Freely available Landsat data has enabled 27 students at the University of Toledo to complete their masters and PhD degrees, and many of these students have gone on to work for local and state governments, the National Guard, and the National Geospatial Intelligence Agency - Dr. Kevin Czajkowski, University of Toledo / OhioView

We have 30 years of Landsat imagery for our area, available for a wide range of applications - that is unparalleled accessibility - Dr. Pete Clapham, Cleveland State University / OhioView

Freely available Landsat data has allowed college students to not only better understand remote sensing but the world around them - Dr. Tom Mueller, California University of Pennsylvania / PennsylvaniaView

South Dakota farmers have found Landsat imagery to be of great value for precision agriculture, especially for purposes such as delineating management zones within a field - Mary O’Neill, University of South Dakota / South DakotaView

Thursday, January 31, 2013

Thanks to an AmericaView mini-grant and the labor of some intrepid students we were able to do a detailed damage assessment of some of the areas in Vermont hardest hit by Hurricane Irene back in 2011. The pre-event imagery was sourced from the National Agricultural Imagery Program and we obtained post-event WorldView-2 imagery from the USGS Hazard Data Distribution System (HDDS). Below are some key take-away points for this type of work.

It's all about the resolution. As much as I love Landsat, you really need high-resolution imagery, such as WorldView-2, to see the type of damage caused by an Irene-type event.

Manual image interpretation. While we would like to think that all we need is imagery from two different dates and a press of the Staples Easy Button to get results, the reality is much more complex. This is particularly true in post-event disaster response. Automated detection may help highlight areas of change, but chances are you will have to do it the old way to precisely quantify the damage. No one beats a human when it comes to high-resolution image analysis.

Georegistration is a challenge. Much of the imagery acquired post-Irene was collected at rather extreme look angles. Orthorectification did not yield promising results, and it was time consuming. Thus, all the damage mapping was done on the more accurate pre-event imagery through good old-fashioned terrain association. The offset is shown quite clearly in the above graphic.

Come up with a damage class domain. Lots of interpreters may be working on the same project and thus it really helps if you can come up with a list of damage types so that all of the datasets are consistent.

Work locally. The cloud and networks tend not to fare too well during disasters. Digitizing goes a heck of a lot faster when the imagery is stored locally.

Geospatial data, particularly imagery, are crucial during disaster response. The USGS has a wonderful web-based application called the Hazard Data Distribution System (HDDS) that provides the capability to download incident-related imagery. This video provides a short overview of HDDS, demonstrating how you can use it to locate and download imagery for your area of interest. Please note that if you are involved in disaster response, you will need to request permission from the USGS to access the licensed imagery. For more info on HDDS, please read the help pages.

Tuesday, January 29, 2013

Landspotting (www.landspotting.org), a recently released game for the iPad, allows players to characterize the landscape into simple and general land cover types (e.g. Urban, Trees, Grass, Water, Snow and Ice, Unknown, etc.) using their fingers to paint on high-resolution satellite imagery. It is a tower-defense game in which players earn coins to buy new buildings, more warriors, and resources. The better players paint, the more houses are added to their village and the more warriors they have to protect against invaders.

However, the underlying goal of this game is crowd-sourcing the validation of global land cover types in order to improve global satellite-based land cover maps and products. The game was developed in cooperation with the Geo-Wiki.org project (http://www.geo-wiki.org). The game can be downloaded from iTunes.

Thursday, January 24, 2013

Earth’s land cover is changing at an increasingly rapid rate, and these changes have dramatic impacts on our lives whether we know about them or not. Scientists use a variety of instruments to observe and quantify the changes in an ongoing attempt to better understand their ramifications on society, and on the ecosystems upon which we all depend.

Dating back to 1972 and free to the public, the Landsat series of Earth observing satellites offers the longest and most comprehensive data set of the Earth’s surface from space. The future of the program, the Landsat Data Continuity Mission (LDCM), is a collaboration between NASA and the U.S. Geological Survey (USGS) and is ready to take the next big step. On February 11th, 2013, Landsat 8 is scheduled to launch from Vandenberg Air Force base in California aboard an Atlas V rocket. To learn more about the LDCM, NASA has developed a video available here.

NASA and the USGS have been working on the LDCM for years. To celebrate the LDCM and raise awareness of the many benefits of this increasingly important civilian land imaging satellite, the Landsat Education and Public Outreach Team has put together a website of materials that people at universities, museums, community centers, or anywhere could use to hold their own launch party. They invite us all to participate in this exciting and historic milestone in humanity's efforts to make our Earth more livable and sustainable. You can join others around the world in celebration of this much-anticipated event by hosting a launch party! Planning and hosting your own launch party with NASA resources is fun and easy, and it's a wonderful way to engage your community in your interests and the work you do.

Everything you need to host a great party and join in the launch fun is at your fingertips, right here.

You will find activities and decorations to make your party fun for all ages. You'll be able to watch the launch and associated events live, including talks from NASA and USGS scientists and engineers.

Enjoy the celebration of the LDCM, and please pass this information on to others.

About the Authors

These articles are authored by members of AmericaView, a nationwide program that focuses on satellite remote sensing data and technologies in support of applied research, K-16 education, workforce development, and technology transfer. If you have comments or questions, please contact us.

If you are affiliated with AmericaView and would like to contribute to the blog, read this post and follow the instructions.