Category Archives: MapServices


The Multi-Resolution Land Characteristics Consortium (MRLC) is a consortium of federal agencies that produces the National Land Cover Database, NLCD 2001. The dataset was developed from a national coverage set of 8-band Landsat-7 imagery along with a 30m DEM. The imagery is processed from three separate dates to give a seasonal average land cover classification. The resolution is a bit coarse at 30m square, but it is a valuable resource because of its consistent national coverage.

NLCD 2001 impervious surface: The impervious surface data classifies each pixel into 101 possible values (0% – 100%). The data show the detailed urban fabric in which many of us reside. Low percentage impervious is in light gray with increasing values depicted in darker gray and the highest value in pink and red. White areas have no impervious surface.

The data is available for public download. There is also one of those vintage ESRI viewers that qualifies for James Fee’s “Cash for Geo Clunkers” proposal. These Ancien Régime viewers are littered all over the federal landscape. It will take years for newer technology to replace this legacy of ArcIMS. Fortunately there is an exposed WMS service ( See GetCapabilities ), which permits access to MRLC layers without going through the “Viewer” nonsense. This WMS service proved very useful on a recent web project for Storm Runoff Mitigation.

I am no hydrologist, but once I was provided with the appropriate calculation approach the Silverlight UI was fairly straightforward. Basically there are a number of potential mitigation practices that play into a runoff calculation. Two fairly significant factors are Impervious Surface Area and Canopy Area, which are both available through MRLC’s WMS service. One simplified calculation model in use is called the TR-55 method.

By making use of the MRLC-derived layers for Impervious Surface and Canopy, at least approximate values for these two factors can be derived for any given area of the continental US. The method I used was to issue a GetMap request to the WMS service, which returns a pixel-shaded image of impervious surface density. Most of the hard work has already been done by MRLC; all I need to do is extract the value density from the returned PNG image.
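The GetMap request itself is just a templated URL. Here is a sketch in Python (the project client was C#/Silverlight; the base URL and layer name below are illustrative placeholders, not the actual MRLC identifiers):

```python
# Build a WMS 1.1.1 GetMap URL for one layer over a lat/lon bounding box.
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, width=256, height=256):
    """Return a GetMap request URL; bbox is (minx, miny, maxx, maxy)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
        "TRANSPARENT": "TRUE",
    }
    return base + "?" + urlencode(params)

# Hypothetical service URL and layer name:
url = getmap_url("http://www.mrlc.gov/wms", "NLCD_2001_Impervious",
                 (-105.1, 39.6, -104.9, 39.8))
```

The returned PNG is then scanned pixel by pixel for the density extraction described above.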

Fig 3 – Impervious Surface shaded density red

Fig 4 – Canopy shaded density green

The density values are encoded relative to gray. At first I tried a simple density calculation from the color-encoded pixels by subtracting the base gray channel from the variable channel: Green − Red = factor. The sum of these factors, divided by the total pixel area of the image times the max byte value of 255, gives a rough percentage of canopy cover over the viewport. However, after pursuing the USGS for a few days I managed to get the actual percentile RGB tables and improve the accuracy of the density calculation. This average density percentile is then used in TR-55 as An × CNn, with a Canopy CN value of 70.

C#, unlike Java, allows pointer arithmetic in code blocks marked unsafe. The advantage of using that approach here is a tremendous speed increase. The table mapping impervious RGB values to percentiles was supplied by the USGS. This process is applied in a WCF service to both the Canopy and the Impervious Surface layers and the result passed back to the TR-55 calculations.
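In outline the scan is simple. Here is a sketch in Python of the same loop the unsafe C# code performs; the three-entry RGB-to-percentile table is made up for illustration, standing in for the one supplied by the USGS:

```python
# Average impervious-surface percentile over a GetMap image.
def average_density(pixels, rgb_to_pct):
    """pixels: list of (r, g, b) tuples; rgb_to_pct maps RGB triples
    to 0..100 percentiles. Returns the mean percentile."""
    total = 0.0
    for r, g, b in pixels:
        total += rgb_to_pct.get((r, g, b), 0)  # unclassified -> 0%
    return total / max(len(pixels), 1)

# Illustrative table, not the real USGS values:
table = {(222, 222, 222): 10, (128, 128, 128): 50, (255, 0, 0): 95}
img = [(222, 222, 222), (128, 128, 128), (255, 0, 0), (0, 0, 0)]
# (10 + 50 + 95 + 0) / 4 = 38.75 percent average density
```

The same routine runs once against the Impervious Surface image and once against the Canopy image before the results go into TR-55.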

Possible Extensions:

There are several extensions beyond the scope of this project that could prove interesting.

First, the NLCD uses a color classification scheme. A similar color processing algorithm could be used to provide rough percentages of each of these classifications for a viewport area. These could be helpful for various research and reporting requirements.

However, beyond simple rectangular viewports, a nice extension would be the ability to supply arbitrary polygonal areas of interest. This is fairly easy to do in Silverlight. The draw command is just a series of point clicks added to a Path geometry as line segments. The resulting polygon is then used as a clip mask when passing through the GetMap image. A simple point-in-polygon check, either coded manually or using one of the C# ports of JTS, would probably provide reasonable performance.
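A manually coded ray-casting test is only a few lines. Here is a sketch in Python (in the project this would run in C# over the clicked Path vertices):

```python
def point_in_polygon(x, y, poly):
    """Ray casting: count crossings of a horizontal ray cast from
    (x, y) to the right. poly is a list of (x, y) vertices.
    Returns True if the point is inside the polygon."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge straddle the ray's y level?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
# point_in_polygon(5, 5, square) -> True
# point_in_polygon(15, 5, square) -> False
```

Each pixel of the GetMap image that fails the test is simply skipped in the density sum, which implements the clip mask.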

Fig 3 - MRLC NLCD 2001 Colour Classification

What about resolution?

It is tempting to think a little bit about resolution. Looking at the MRLC image results, especially over a map base, it is obvious that at 100 ft resolution even the best of calculations are far from the fine grained detail necessary for accurate neighborhood calculations.

It is also obvious that Impervious Surface can be enhanced directly by applying some additional lookup from a road database. Using pavement estimates from a road network could improve resolution quality quite a bit in urbanized areas. But even that can be improved if there is some way to collect other common urban impervious surfaces such as rooftops, walkways, driveways, and parking areas.

NAIP 1m GSD 4-band imagery has fewer bands but higher resolution. NAIP is a resource that has been applied to unsupervised impervious surface extraction. However, the 4-band acquisition is still not consistently available for the entire US.

Now that more LiDAR data is coming on line at higher resolutions, why not use LiDAR classifications to enhance impervious surface?

Lidar All Elevation

Lidar All Classification

Lidar All Intensity

Lidar All Return

Just looking at the different style choices on the LidarServer WMS for instance, it appears that there are ways to get roof top and canopy out of LiDAR data. LiDAR at 1m resolution for metro areas could increase resolution for Canopy as well as rooftop contribution to Impervious Surface estimates.

In fact the folks at QCoherent have developed tools for classification and extraction of features like roof polygons. This extraction tool applied over a metro area could result in a useful rooftop polygon set. Once available in a spatial database, these polygons can be furnished as an additional color-filled tile pyramid layer. Availability of this layer would also let the runoff calculation apply rooftop area estimates to roof drain disconnect factors.

Additionally, improved accuracy of impervious surface calculations can be achieved with a merging version of the simple color scan process. In a merging process, the scan loop over the MRLC image does a lookup in the corresponding rooftop image. Any pixel positive for rooftop is promoted to the highest impervious surface classification. This estimate only holds as long as rooftop green gardens remain insignificant.
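The merging loop reduces to a per-pixel lookup and promotion. A sketch in Python, with illustrative color codes standing in for the real NLCD and rooftop-layer values:

```python
# Promote rooftop pixels to the highest impervious classification.
# Both images are parallel lists of (r, g, b) tuples.
ROOF = (255, 255, 0)            # "rooftop present" color (illustrative)
MAX_IMPERVIOUS = (255, 0, 0)    # 100% impervious color (illustrative)

def merge(mrlc_pixels, roof_pixels):
    """Return MRLC pixels with rooftop hits forced to 100% impervious."""
    return [MAX_IMPERVIOUS if roof == ROOF else mrlc
            for mrlc, roof in zip(mrlc_pixels, roof_pixels)]

mrlc = [(10, 10, 10), (40, 40, 40)]
roof = [(255, 255, 0), (0, 0, 0)]
# merge(mrlc, roof) -> [(255, 0, 0), (40, 40, 40)]
```

The merged image then feeds the same density scan as before, so the rooftop contribution is folded in with no change to the TR-55 side.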

Ultimately the MRLC will be looking at 1m GSD collection for NLCD with some combination of imagery like NAIP and DEM from LiDAR. However, it could be a few years before these high resolution resources are available consistently across the US.

Summary

The utility of WMS resources continues to grow as services become better known and tools for web applications improve. Other OWS services like WFS and WCS are following along behind, but show significant promise as well. The exposure of public data resources in some kind of OGC service should be mandatory at all government levels. The cost is not that significant compared to the cost-effectiveness of cross-agency, even cross-domain, access to valuable data resources.

By using appropriate WMS tools like GeoServer and GeoWebCache, vastly more efficient tile pyramids can become a part of any published WMS service layer. It takes a lot more storage, so the improved performance may not be feasible for larger national and worldwide extents. However, in this Runoff Mitigation project, where view performance is less important, the OGC-standard WMS GetMap requests proved to be quite useful for the TR-55 calculations, and performance was adequate.

Magnifiers are an interesting tool for some UIs. They function as a small-area blowup of a larger Canvas. Here is a Magnifier example that Matt Serbinski put up on CodePlex.

Fig 2 – Sample Magnify Tool

I decided to look at approaches to making magnifier tools for Map controls. There are a couple of ways available for magnifying a Map Control in Silverlight. The easiest is to simply increase the LocalOffsetZ value of the canvas or UIControl. The effect is to bring the Canvas closer in the viewport by moving it in the positive z direction of the PlaneProjection axis. This does magnify the current zoom level, but adds no new detail. As the LocalOffsetZ increases the features eventually become blurry.

In cases where the layers are MapTileLayer.TileSources, a better approach is to actually move the magnifier through the tile source pyramid. This provides additional detail at each zoom level. To make use of this approach a Lens needs to replicate a set of the main map tile layer sources. Here is an example of a KeyMap populated with three tile sources, OSM, TopOSM, and TopOSMContours, matching the layers of the Main Map Canvas.

As you can see from the above Canvas, these additional panels sit inside a 200 x 200 Grid element and make use of an EllipseGeometry clip mask to limit the view to a circular lens area. In addition this Grid is wrapped in a DragDropPanel so that it can be moved with mouse events against the backdrop of the MainMap. At initialization a LayoutUpdated event handler is added to the KeyDragPanel:

KeyDragPanel.LayoutUpdated += KeyChange;

It takes the MainMap’s current mouse position as a lat/lon Location and synchronizes the LensMap.View to that same Location. However, instead of simply synchronizing ZoomLevels, the Lens ZoomLevel is increased by a lensZoom factor, initially set to 2. The lens magnification can be adjusted by changing this lensZoom factor with +/- buttons or MouseWheel events. Now the Lens is viewing the same location as the MainMap, but at a deeper, more detailed level in the tile pyramid. With this relatively simple addition to a normal map client we have a functional magnifier lens, which a user can drag around the map, showing tile source layers at a higher ZoomLevel out of the source pyramid.
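Reduced to its essentials, the synchronization is a couple of assignments per LayoutUpdated event. A sketch in Python; the View class and property names are illustrative stand-ins for the Silverlight MapControl types:

```python
# Keep a lens view centered on the main-map mouse position, zoomed
# deeper into the tile pyramid by lens_zoom levels.
class View:
    def __init__(self, center=(0.0, 0.0), zoom=0):
        self.center = center    # (lat, lon)
        self.zoom = zoom        # tile pyramid level

def sync_lens(main, lens, mouse_latlon, lens_zoom=2):
    lens.center = mouse_latlon           # follow the cursor
    lens.zoom = main.zoom + lens_zoom    # deeper level = magnified
    # a negative lens_zoom would give a key map instead of a magnifier

main, lens = View(zoom=10), View()
sync_lens(main, lens, (39.7, -105.0))
# lens.zoom -> 12, lens.center -> (39.7, -105.0)
```

Hooking the lens_zoom sign to the flip state gives the magnifier/key-map duality described below.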

If you notice what happens when the lensZoom factor is subtracted rather than added, you have in essence a basic key map: the focal length of the magnifier is reversed. Just to make this a little more interesting I have added both a lens and a key UserControl, but instead of making these two separate controls I decided to spice up the sample by making the lens into a flip animation. On one side is the magnifying lens and on the other side is the key map.

There are actually two Storyboards: one for the first 90 degrees on the front side, and a second for the last 90 degrees on the back side. As the front Storyboard completes it triggers a switch from the front user control to the back, in this case from the LensPanel to the KeyPanel. Here is the Completed event handler:

Rather than using a Clear and Add for switching User Controls between Lens and Key, this event handler just toggles Visibility. Keeping both User Controls in the MainPage and switching with Visibility doesn’t add much to downloads while making the switch faster. After toggling Visibility the Completed event handler then finishes the flip through the last 90 degrees by triggering the stbfinish Storyboard Begin() method. The switch happens at 90 degrees because that is the angle of RotationX when the view is edge on, and the panel disappears momentarily.

Here is the Flip Animation sequence:

1. Start the initial storyboard: stb.Begin();

2. At the end of 90 degrees the Completed event handler is triggered.

3. After toggling controls, the Completed event handler triggers the final storyboard: stbfinish.Begin();

4. Finally the RotationX is completed and the flip animation is done.

Summary

Using PlaneProjection for a flip animation of the Lens and KeyMap tools is an interesting UI enhancement. Although this works very well for tile pyramid sources, it does have one drawback: It is limited to tile source layers. WMS rectangle sources require a new GetMap request with each change, which means that magnification of WMS layers will not occur until the lens stops moving. WMS GetMap requests are just not fast enough to keep up with a Lens drag. In the case of a set of WMS layers it is probably better to work out a LocalOffsetZ magnifier.

The performance and user experience improvements are so great with tile pyramid sources that I assume going forward these will be the services of choice. Tile pyramids are not hard to create and in the case of GeoServer geowebcache builds the pyramids for you. With the cost of disk space and servers falling all the time there really doesn’t seem to be much of a reason to continue using the slower WMS GetMap request for relatively static layers.

At the same time, Silverlight vector performance has improved significantly over JavaScript. This means that dynamic geospatial sources might be better sourced as vector features, either directly from the database or decoupled with WFS services. Unlike WMS services, vector magnification can happen indefinitely in the client.

Web service architectures will likely evolve toward static tile pyramids and dynamic vector layers. However, there will be older important WMS services for some time to come. It would be nice to see some of the federal web services add tile pyramid endpoints in the coming year. Maybe this is a good opportunity for using stimulus funding at USGS, MRLC, NASA, JPL, Census, NOAA ….. Cloud resources such as Amazon AWS and Microsoft Azure make high capacity public tile pyramids feasible even for smaller agencies.

Stacking maps is a pretty common experience. It goes back a long way with the now antique, but intriguing, use of clear mylars to portray various feature overlays. I don’t think I was the only kid fascinated with Encyclopedia Britannica’s plastic overlay illustrations. Even as a young engineer we used mylar overlays occasionally. It’s the same thing today but computers make virtual overlays, or layers, relatively easy. Stacking layers is a useful paradigm when using a single coordinated set of features that have a common origin, resolution, coordinate system, and order. However, now that we’ve gotten used to pulling maps from disparate parts of the internet, layer stacking is not always so simple. We are now stacking layers and then moving up the ontology stream to stack additional sets of layers, from other related maps.

Pulling maps from various WMS or WFS sources comes with a set of problems. Even after dealing with matching coordinate systems, resolution differences, and date currency issues, there is still order to take into account. This means choosing what should be at the bottom and what sits on top. The issues that inevitably come up are problems with masking. This is especially a problem when using multiple opaque WMS map sources. The user should have ultimate say as to what layer fits where in his view of his stack, but this flexibility is not always easy to provide.

When I started playing with this tile source, which was put together by Lars Ahlzen, I was unable to consistently get the tiles for the transportation feature layer. I didn’t solve that problem, but began thinking, “Why use another transportation feature layer when OpenStreetMap is readily available, also very attractive, and doubtless more up to date?” This presented the masking problem, because tiles from OpenStreetMap are opaque and the base relief layer of TopOSM is also opaque. In WMS this is typically handled by setting a TRANSPARENT parameter true; the WMS server then produces a png image with the background invisible or transparent.

However, in a tile source this has to be done when creating the tiles. The decision is usually made based on the percentage of cover in a given layer. For example, aerial or satellite imagery has 100% coverage, so TRANSPARENT=true is meaningless. Road layers, though, are relatively sparse and generally benefit from transparency. In the case of TopOSM the base relief layer includes terrain shading with 100% coverage, while contours on the TopOSMContour layer are generally sparse and published as a transparent=true tile source. The OSM maps contain large polygonal color areas and some shading, so the decision was evidently made to produce OSM tiles without a transparent background. Unfortunately, that leaves me with a conflict: how to use two opaque map layers?

First I went searching for an OSM tile source that had a transparent background. There seems to have been one, at least at one time, as indicated here by this WorldWind Wiki link. I wasn’t able to get this tile source to respond so perhaps it is now defunct. Of course you could get a copy of OSM and publish your own set of transparent=true tiles, but that is a lot of work for a weekend. I also ran across some ‘on the fly’ java transform approaches that turn a color range into a transparent png as the tiles are read. However, this approach would be a performance bottleneck, defeating the whole purpose of a tile service.

One workaround is to adjust opacity in the client view. Even though this can result in a dim or blurry layer, it lets multiple opaque layers coexist.

This actually works pretty well. I can stack two opaque layers from different map sources along with one additional transparent layer, the contours. I now have the beautiful relief and USGS topo contours, along with a very nice and up-to-date set of road features from OSM. I can change the opacity of the two opaque tile sources to suit my needs. If I want just the bare earth relief, I can set TopOSM opacity to one, but if I want to know where on earth I am, it helps to see the OSM transportation features. A nice compromise is opacity 0.5 for the top opaque map layer. I settled on using OSM as the base and TopOSM as the opacity-adjusted top map layer.

With just two opaque sources this works OK, but what if you have several? Things get a bit misty with opacity on opacity. As we look forward to hyperspectral stacks with hundreds or thousands of layers, opacity won’t do the job.
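The misting effect is easy to quantify with simple alpha compositing. Here is a sketch in Python of blending a top-to-bottom stack of (value, opacity) layers:

```python
def composite(layers):
    """layers: list of (value, opacity) pairs from top to bottom,
    each value a grayscale intensity 0..255. Returns the blended
    value the viewer sees."""
    remaining = 1.0   # fraction of the view not yet covered from above
    out = 0.0
    for value, opacity in layers:
        out += value * opacity * remaining
        remaining *= (1.0 - opacity)
    return out

# A 0.5-opacity top layer over an opaque base gives an even mix:
# composite([(200, 0.5), (100, 1.0)]) -> 150.0
# With many 0.5-opacity layers, each deeper layer contributes only
# half as much as the one above it, which is why deep stacks go misty.
```

Each layer's contribution is attenuated by every layer above it, so by the third or fourth opaque layer very little signal survives.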

Since Silverlight 3 introduced PlaneProjection properties, why not try something different? Here is a cool link that explains as well as illustrates what is going on with PlaneProjection. You can see from this post that it is relatively simple to manipulate plane projection parameters by binding slider values to the axes of Rotation, GlobalOffset, and LocalOffset. So the question is: can I do this with maps?

Here is an example of a MapTileLayer, and yes, it does include a Projection Property so now we can hook up all kinds of tools for manipulating maps out of plane.

It is interesting to play around with plane projection and move layers here and there, but as it turns out this is not all that useful for map portrayal. I guess the academics call this a “cartography” issue. The point is how usefully information is presented to the map user or map consumer. The opacity approach, in my opinion, is by far the better way to use multiple opaque map sources in a cartographically meaningful manner, as long as there are just a few. However, it is still very interesting technically to play with PlaneProjection and see how well it is implemented, even with deeply nested controls.

Another use for PlaneProjection often seen now is the “Flip” animation. Basically it uses an animation tied to a layout control that flips the layout over using PlaneProjection Rotation. If the first layout is replaced by a new one, it reproduces the effect of turning pages. This is similar to the experience of paper pages in a book and could be applied to maps using an Atlas model. But it is not especially helpful since the point of using maps is usually not moving from paper to paper, but from space to space, and in our world all space happens to be contiguous. The physicists among us may beg to differ but cartography in Geospatial systems is all about solid earth surfaces.

However, aside from GIS, here is a small example of using an animated illustration along with a map. In this sample there is an animation with a “before” and “after” view that can be flipped with a simple “next” button. The novelty is a bit interesting …. if you have adequate hardware. However, this is a deeply nested, multi animated layout, and the flip sequence is not all that smooth especially on older hardware. You can see that the interior runoff animations do continue running even while the wrapping ProjectionPlane is rotating, which says a lot for the power of Microsoft’s implementation. Linear Algebra seems less abstract when you see it doing something right before your eyes.

Performance is a consideration and Silverlight 3 adds GPU acceleration. Here is a link describing the use of GPU acceleration in Silverlight. It seemed to help with the map plane projection example above, but GPU BitCaching is not suited for animation so it does nothing to help performance in my flip example.

Summary

Silverlight 3 PlaneProjection has some very nice features and warrants some more experimentation. However, it doesn’t add very much to the current state of slippy map tiled cartography. Working with many layers is another matter, and PlaneProjection could play a big part in UI development for hyperspectral sources.

Silverlight 3 PlaneProjection does add interest to ancillary map information. I’m not saying Metadata will be a great joy if we can virtually flip pages and pages of the stuff, but there are uses for out of plane media placed in a geographic context. If nothing else, it allows access to more information with less visual clutter. Maybe the Flickr photo pattern has been overdone, but some business workflow processes could actually benefit from media embedded usefully in a map surface.

However, you might not notice that in addition it includes an explicit minor axis parameter: PARAMETER["semi_minor",6378137.0]. This minor axis is identical to the major axis. The axis definition overrides the flattening in the Datum and is probably technically incorrect, but the idea was just to get a spherical Mercator into a definition that people could use to match Google Maps. I’ve seen this definition in PostGIS, GeoServer, and OpenLayers.

I had already noticed this and played with a MercatorEllipsoid function to see if that would fix my problem. However, sadly, I made an error and miscalculated eccentricity. The real equation for e goes something like this:

double f = 1 / 298.257223563;
double e = Math.Sqrt(2 * f - Math.Pow(f, 2));

resulting in e = 0.081819190842621486.

Once I made the correction for the proper eccentricity in MercatorEllipsoid, ESRI:54004 lines up with the EPSG:3857. DIA is back in its rightful place.
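For reference, here is the corrected eccentricity feeding the ellipsoidal Mercator northing, sketched in Python (the project code was C#; the formula is the standard ellipsoidal Mercator projection on WGS84):

```python
import math

A = 6378137.0                    # WGS84 semi-major axis (m)
F = 1 / 298.257223563            # WGS84 flattening
E = math.sqrt(2 * F - F ** 2)    # first eccentricity, ~0.0818191908

def mercator_y(lat_deg):
    """Ellipsoidal Mercator northing. The ((1-e*sin)/(1+e*sin))^(e/2)
    term is the eccentricity correction that the spherical 900913-style
    definition omits, which is what shifted DIA out of place."""
    lat = math.radians(lat_deg)
    esin = E * math.sin(lat)
    return A * math.log(math.tan(math.pi / 4 + lat / 2)
                        * ((1 - esin) / (1 + esin)) ** (E / 2))
```

With E miscalculated, every GetMap BBOX northing is off, which is exactly the misregistration described above.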

My MercatorEllipsoid function now calculates correct BBOX parameters for GetMap requests, but only for com.esri.wms.Esrimap services. Looks like ESRI is the expert and correctly produces ESRI:54004 with Datum as defined. However, not so with GeoServer.

GeoServer seems to ignore the flattening 298.257223563, or else assumes it is like 900913 where flattening is overridden by a minor axis parameter equal to the major axis: PARAMETER["semi_minor",6378137.0].

This leads to some problems. My WMS client now has to decide which service correctly interprets the DATUM on 54004. For now I just check for “com.esri.wms.Esrimap” in the WMS url and change datums accordingly. This will undoubtedly lead to problems with other services since I don’t yet know how MapServer or others treat 54004.

I gather that there is some twittering about neo versus paleo geography. See Peter Batty’s blog entry or James Fee’s blog. I myself don’t Twitter, but in general I’m happy for Peter’s paleo accommodation of the non-twitterers, repeating the conversation in a blog entry. Peter has also updated comments with a new post questioning, “Are we now in a post-neogeography era?” The dreaded paradigm shifts are coming fast and furiously.

I am not really able to comment on neo vs paleo as I myself fall even further back into “Jurassic Geography.” Looking at connotations we have this accounting:

Obviously the “paleo” label is not carried with quite the honor of “neo.” It’s reminiscent of the Galen / Myers-Briggs personality typology characterized as Lion, Otter, Beaver, and Golden Retriever. What do you want to be? Obviously not the beaver, but there has to be a significant part of the world in that category, like it or not. After all, what would lions eat for dinner without a few of us beavers? Likewise there is a significant branch of paleo in the GIS kingdom.

However, in the pre-paleolithic era there are still a few of us left, falling into the “long tail” of the Jurassic. So carrying on down the connotation stream here is the Jurassic geography equivalent:

Wait “Open Source” – Jurassic Geography? How did that get in there? The notoriously frugal days of Hoover never made it into the paleolithic era’s “Supply Side” economy. It’s Keynesian economics all over the neo world, so Jurassic geography is the frugal end of the spectrum and how can you get more frugal than free! Obviously Open Source is as Jurassic as they come in Geography circles.

As I’ve recently been in a gig hunting mode, I’ve been having quite a few in depth conversations about GIS stacks. As a small businessman outside the corporate halls of paleo geography, I’ve had few occasions to get an in depth education on the corporate pricing world. So I spent the past couple of days looking into it.

Let’s start at the bottom of the stack. Here is some retail pricing on a few popular GIS databases:

Oracle Standard Spatial $17,500 + $3850 annual

Oracle Enterprise Locator $47,500 + $10,450 annual

SQL Server 2008 Web edition ~ $3500

PostgreSQL/PostGIS $0.00

If you’re a Jurassic geographer, which do you choose? Probably not Oracle Enterprise Locator. If you’re Paleo you look at that and think, “Man, I am the negotiator! We don’t pay any of that retail stuff for the masses.” Neo? – well, how would I know how a neo thinks?

OWS GeoServer or MapServer minimum $0 + annual $0
But this is GIS, isn’t it? We want some real analytic tools, not just a few hundred spatial functions in the JTS Topology Suite. OK, better throw in a few QGIS or GRASS installs and add a few $0s to the desktop production. Oh, and cores – we need some. “Make that a 16-core 64-bit, please” – plus $0.

I think you catch the Jurassic drift here. How about client side.

ESRI Silverlight: free, well, sort of – if you’re a developer, NGO, educational, or non-profit; otherwise take a look at that ArcGIS license back a few lines.

Database: PostgreSQL/PostGIS – a standards-compliant SQL server with GiST spatial indexing on OGC “Simple Features for SQL” compliant geometry, with extended support for 3DZ, 3DM and 4D coordinates. A full set of roughly 175 geometry, management, and spatial functions. It supports almost all projections. All this and performance? Maybe a little vague, but not shabby:

“PostGIS users have compared performance with proprietary databases on massive spatial data sets and PostGIS comes out on top.”

“GeoServer is the reference implementation of the Open Geospatial Consortium (OGC) Web Feature Service (WFS) and Web Coverage Service (WCS) standards, as well as a high performance certified compliant Web Map Service (WMS). “

Summary:
Well in these economic times Jurassic may in fact meet Neo. The GIS world isn’t flat and Jurassic going right eventually meets Neo going left, sorry Paleos. Will Obama economics turn out to be Hooverian in the end? Who knows, but here’s a proposition for the Paleos:

Let me do a GIS distribution audit. If I can make a Jurassic GIS Stack do what the Paleo stack is currently providing, you get to keep those annual Paleo fees from here to recovery. How about it?

I put my name down for a possible unconference talk at Wherecamp5280 coming up in Aug. In case it interests folks at the unconference, I decided to put together some notes here on my latest little project.

Open source GIS stacks do a great job of exposing ordinary GIS functionality. For web applications they do as well as anything out there from Oracle, Microsoft, or even ESRI. I use PostGIS and GeoServer all the time. Now that Microsoft’s Silverlight MapControl is available the client side is also easy to create. Generally porting a project involves just a few steps:
a. load all the shape files to PostGIS tables – ogr2ogr, shp2pgsql
b. create the WMS service layers in GeoServer
c. create the client using, my preference at the moment, the Silverlight MapControl

Step a. is fairly straightforward. There are a lot of good tools including shp2pgsql and ogr2ogr. Here is an example loading command:

shp2pgsql -s 4326 -d -I CityLimits.shp citylimits | psql -h localhost -d City -U username

Loading a set of shape files is normally a matter of writing a batch script. If there are some complicating factors then it is also easy enough to create a simple custom Java or C# loader specific to a particular job. In a more involved ArcSDE environment, you could theoretically skip the intermediary .shp and use table dumps from the backing DB. The complication there is doing the due diligence to get the DB idiosyncrasies mapped from a commercial-license DB to PostgreSQL, which is generally more standards compliant.
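Here is a hypothetical sketch in Python of generating that batch script, one shp2pgsql pipeline per shapefile; the directory, database, and user names are placeholders:

```python
# Build shp2pgsql | psql command lines for a directory of shapefiles.
from pathlib import Path

def load_commands(shp_dir, db, user, srid=4326, host="localhost"):
    """One command per .shp file: -d drops/recreates the table,
    -I adds a GiST spatial index, -s sets the SRID."""
    cmds = []
    for shp in sorted(Path(shp_dir).glob("*.shp")):
        table = shp.stem.lower()    # table named after the file
        cmds.append(
            f"shp2pgsql -s {srid} -d -I {shp.name} {table} "
            f"| psql -h {host} -d {db} -U {user}")
    return cmds
```

The resulting lines can be written to a shell script, or the same loop can invoke them directly with subprocess.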

Step b. above is the hardest part at present. The GeoServer RESTful configuration service extension promises to make life a bit more developer friendly; at least you can build POST and PUT tools to add layers and styles in your own program. But the really sticky part is translating all of the numerous styling, legend, range, etc. parameters from an ESRI mxd project to SLD (or, in the case of TatukGIS, ttkgp files). SLD is a powerful style description language which can do just about any kind of styling you can think of, but it seems to be lacking interactive tools. It would be nice to see layer SLD editors with popup colors, selectable view ranges, filter query generators, captions, auto legend generators, etc. Perhaps it already exists and I’ve just missed it?

At any rate, to do this manually with a text editor is not impossible, but very tedious. It requires creating sets of FeatureTypeStyle Rules with filters, symbolizers, min and max scale denominators, labels, etc. This can get quite involved. For example, layers containing roads with numerous classifications to arrange in the SLD require some fairly involved xml, plus a lot of cut & paste. Putting together icon symbols can get tedious too. ESRI mxd, like most things ESRI, is not exactly an open format, but some diligent folks have done a bit of Arc scripting. Next on my input stack is trying out ArcMap2SLD to see how effectively it can be used to create the SLD side of things.
· mxd to SLD – ArcMap2SLD
Well, I tried out ArcMap2SLD on an Arc 9.3 project but received an error in German, so it’s manual mode for the present.
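For orientation while in manual mode, here is roughly what a single road-class rule looks like in SLD 1.0, with a filter, a scale range, a line symbolizer, and a label. The attribute names and values are illustrative:

```xml
<FeatureTypeStyle>
  <Rule>
    <Name>major_roads</Name>
    <ogc:Filter>
      <ogc:PropertyIsEqualTo>
        <ogc:PropertyName>class</ogc:PropertyName>
        <ogc:Literal>major</ogc:Literal>
      </ogc:PropertyIsEqualTo>
    </ogc:Filter>
    <MinScaleDenominator>1000</MinScaleDenominator>
    <MaxScaleDenominator>500000</MaxScaleDenominator>
    <LineSymbolizer>
      <Stroke>
        <CssParameter name="stroke">#cc3333</CssParameter>
        <CssParameter name="stroke-width">2</CssParameter>
      </Stroke>
    </LineSymbolizer>
    <TextSymbolizer>
      <Label><ogc:PropertyName>name</ogc:PropertyName></Label>
    </TextSymbolizer>
  </Rule>
</FeatureTypeStyle>
```

Multiply this by every classification in every layer and the cut & paste burden becomes obvious.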

Once GeoServer FeatureTypes are configured for a WMS, step c. is just wrapping the WMS calls inside a Silverlight MapControl. There are two ways to approach this. First, the one I like, is using MapTileLayer.TileSources for the various layers. The tiles are sourced from the geowebcache, gwc, included with newer versions of GeoServer. A second approach is to create a GetMap WMS request with explicit BBox. They both work fine, I just kind of like the dynamic zoom in affect of MultiScale tile sources as they spiral down the quadkey.

Silverlight 3 is out now so I installed the latest and then ran the new Silverlight Client through the update process in VisualStudio. It worked without a problem. I also downloaded the latest version of Silverlight Toolkit. This tool kit has a lot of nice controls to put to use. Using a control is pretty simple, just reference in the library dlls and start adding controls:

Want to change the entire page style? Just add one of the dozen theme styles under root to wrap the whole page: <expressionDark:ExpressionDarkTheme>

It’s convenient to include a base map tile source using OSM, but if this is an offline situation you can use a road layer of your own. Those needing a high-level background map including aerial imagery can use the Bing Maps tile service, license required. The Silverlight MapControl cannot quite be offline yet: looking at the request/response traffic with Fiddler shows a couple of requests to the Microsoft server. Since I turned off Bing Maps in this project, these do nothing more than confirm that their server is online and has a crossdomain.xml. If you are offline, it throws up a white box message indicating “service is unavailable. try again later.” The message unfortunately is sticky and stays in the viewport regardless of zooming and panning around your local tile sources.

The custom code for Silverlight is easy to create using C#, which I’m learning to really like. One great feature compared to JavaScript client code is performance. I was amazed the first time I brought in a list of a thousand ellipse shapes: no change in zoom and pan performance at all.

I then added a few specific ‘Find’ and ‘Query’ functions using some of the spatial functions in PostGIS. Double-click FeatureInfo requests drill through the visible layer stack, creating an expandable tree view of table attribute fields.
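Under the hood each double-click becomes a WMS GetFeatureInfo request against the visible layers. A minimal Python sketch of assembling such a request URL; the base URL and layer name are placeholders, not this project's actual configuration:

```python
from urllib.parse import urlencode

def feature_info_url(base, layers, bbox, width, height, x, y):
    """Build a WMS 1.1.1 GetFeatureInfo URL for the pixel (x, y)
    within a map image of the given size covering the given BBOX."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetFeatureInfo",
        "LAYERS": layers,
        "QUERY_LAYERS": layers,   # layers to actually query
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "X": x,                   # pixel column of the click
        "Y": y,                   # pixel row of the click
        "INFO_FORMAT": "text/plain",
    }
    return base + "?" + urlencode(params)
```

Issuing one of these per visible layer (or listing several in QUERY_LAYERS) is enough to populate the attribute tree view.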

The end result, for those for whom this matters, is a complete online map viewer for your ArcMap project minus any license cost or hassles.

One caveat I should mention: the Silverlight MapControl is still CTP. The final release ETA is unknown at present, so current work is subject to change and revision once the final release is out.

8. Add a line draw and profile view window to the LiDAR Silverlight client
a. try a Silverlight 3 Perspective 3D Transform to follow a profile around an arbitrary LiDAR profile polyline. The profile request is a nonstandard extension to the WMS request spec, which allows a client to grab a profile view out of the point cloud. Pretty interesting stuff, since you can see ghostly building point-cloud shapes rising out of the terrain.

10. Find a gig – resume
Unfortunately, my last line item needs to be moved up to the top of the input stack. My primary client was forced to drop their software division due to the economic climate, and it looks like I will need to focus on a new gig … but there are so many other interesting things to try out. If anyone out there needs a GIS web developer, please shoot me an email. Thanks!

Would it be possible to provide an EBS volume with data already preloaded to PostGIS? A user could then attach the EBS ready to use. Adding a middle tier WMS/WFS like GeoServer or MapServer can tie together multiple PG sources, assuming you want to add other pg databases.

Jeffrey mentions one caveat about the 5GB S3 limit. Does this mark the high end of a snapshot, requiring modularized splitting of OSM data? It doesn’t sound like S3 will be much help in the long run if OSM continues to expand.

What about OpenAerial? Got to have more room for OpenAerial and someday OpenTerrain (LiDAR)!
EBS – volumes from 1 GB to 1 TB. Do you need the snapshot (only 5GB) to start a new EBS? Can it accommodate OpenAerial tiles and OpenLiDAR X3D GeoElevationGrid LOD? Of course we want mix-and-match deployment in the Cloud.

Would it be possible for Amazon to just host the whole shebang? What do you think, Werner?

Put it out there as an example of Auto Scaling, Elastic Load Balancing OSM, OpenAerial tile pyramids as a CloudFront cache, and OpenTerrain X3D GeoElevationGrid LOD stacks. OSM servers are small potatoes in comparison. I don’t think Amazon wants to be the open source Google, but with Google and Microsoft pushing into the Cloud game, maybe Amazon could push back a little on the map end.

I can see GeoServer sitting in the middle of all this data delight handing out OSM to a tile client where it is stacked on OpenAerial, and draped onto OpenTerrain. Go Cloud, Go!

Geocode services are useful for spatializing many types of business data. There are numerous services to choose from, but business data is often riddled with mistypes and other issues that cause geocode failures. I was recently faced with a data set that had nearly a 30% failure rate with a common geocode batch service, and I decided to see what I could do with Virtual Earth’s geocode service.

Using web.config for setup parameters

In the process I was faced with passing parameters from my Web.config into the project and down to my Silverlight xaml to be used in the Page.cs code behind. Silverlight has only indirect access to information in the appSettings of web.config. I eventually stumbled across this approach:

Web.config: add key value pair to the appSettings

Default.aspx.cs: Use Page_Load to get the parameters and pass to Xaml1.InitParameters

App.cs: use initParams to get the parameters and send them to the Page.cs Page() method

Page.cs: in the Page.cs Page method set class variables with the parameters

1. Web.config
The web.config file can be used to store parameters in the appSettings section. These parameters are then available as application wide settings. An additional advantage is the ability to change settings on the deploy server without a rebuild and redeploy each time.
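As a sketch, the appSettings block might look like this; geocodeLimit matches the parameter name used in this project, while the credential keys are hypothetical placeholders:

```xml
<configuration>
  <appSettings>
    <!-- geocodeLimit is pulled via ConfigurationManager.AppSettings;
         the credential keys below are illustrative placeholders -->
    <add key="geocodeLimit" value="10" />
    <add key="veAccountId" value="your-ve-account" />
    <add key="veAccountPassword" value="your-ve-password" />
  </appSettings>
</configuration>
```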

2. Default.aspx.cs:
Use the Default.aspx.cs Page_Load(object sender, EventArgs e) method to set up the commonService and get the client token, commonService.GetClientToken(tokenSpec), for accessing the VEGeocodeWebService. Also pull the "geocodeLimit" parameter from the appSettings: ConfigurationManager.AppSettings["geocodeLimit"]. These two parameters, "clientToken" and "limit", are then set up as InitParameters for the asp:Silverlight Xaml page, ID="Xaml1". InitParameters is a string listing comma-delimited key=value pairs. The end result is Xaml1.InitParameters = "clientToken=Your Token,limit=10".
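On the Silverlight side that string gets split back into key/value pairs. A Python sketch of the parsing the app effectively performs; this is a hypothetical helper for illustration, since the actual Silverlight runtime hands App.xaml.cs a ready-made dictionary in e.InitParams:

```python
def parse_init_params(init_params: str) -> dict:
    """Split a comma-delimited "key=value" InitParameters string into a dict.

    Mirrors the key=value,key=value contract used when setting
    Xaml1.InitParameters on the host page.
    """
    pairs = (p.split("=", 1) for p in init_params.split(",") if p)
    return {key: value for key, value in pairs}
```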

At this point my VE token and geocodeLimit are available for use locally in my code-behind for Page.xaml.

AddressLookup
Once that is settled I can move on to an address lookup using VEGeocodingService. The basic approach as seen in the initial Page view is a TextBox for input, a TextBox for output, a small menu for selecting a confidence factor, and a button to start the geocode when everything is ready. Underneath these controls is a VirtualEarth.MapControl. Addresses can be copy/pasted from text, csv, or xls files into the input TextBox. After geocoding the output TextBox can be copy/pasted back to a text file.

Once the geocode Click event fires, the confidence selection is added as a new ConfidenceFilter(). Next a geocodeRequest is set up, new GeocodeRequest(), using the input TextBox addresses, which are either tab- or comma-delimited, one per line. In addition to the address, a table ID code is included to keep track of each address. This id is passed into geocodeService.GeocodeAsync(geocodeRequest, id) so that I can use it in the callback to identify the result.
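The input parsing itself is simple. A Python sketch of splitting the pasted lines into (id, address) pairs, assuming the table ID leads each line (the delimiter order is an assumption, not taken from the project's code):

```python
def parse_address_lines(text: str) -> list:
    """Split pasted input into (id, address) pairs, one record per line,
    tab- or comma-delimited, with the table ID code leading each line."""
    records = []
    for line in text.strip().splitlines():
        if not line.strip():
            continue  # skip blank lines from sloppy copy/paste
        sep = "\t" if "\t" in line else ","
        rec_id, address = line.split(sep, 1)
        records.append((rec_id.strip(), address.strip()))
    return records
```

Keeping the ID with each record is what lets the async callback match a geocode result back to its source row.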

The callback, geocodeService_GeocodeCompleted(object sender, GeocodeCompletedEventArgs e), should have a GeocodeResponse with information about the matched addresses, any address modifications made by the parser, and the all-important latitude and longitude. GeocodeCompletedEventArgs also holds the address id as a UserState object, which I can cast to string.

The result may have several “finds” in its Results array, but I choose to look at just the first “find” returned for each address. By adding an ellipse to my VE MapControl and changing the VEMap.View, I can examine the location of the “find” to see if it makes sense. Since I’m somewhat familiar with Denver, the idea is to look at a result and decide whether it is junk, at least in the more egregious cases. The results are also added to the output TextBox.

The ToolTipService is a simple way to add rollover labeling to the address points, but perhaps a more complex rollover would be useful. Adding a couple of events, MouseEnter and MouseLeave, to the point shape allows any rollover effect desired, for example a simple pair of handlers that swap the fill brush.

The points are available for viewing as a shape layer on top of the VE MapControl, but what if I can see how the lat,lon location should change to make it more accurate? What I need is a drag capability to pick up an address point and move it to a new location.

This was a little more complicated than the simple rollover event. To start with, I add a MouseLeftButtonDown event to the address ellipse. Normally you would also have MouseMove and MouseLeftButtonUp events to help with the drag. However, the VE MapControl complicates this approach by masking move events with its pan event.

The solution I found is to temporarily override VEMap.MousePan and VEMap.MouseLeftButtonUp with methods specific to the address point. The new point_Pan method sets its MapMouseEventArgs “handled” property to “true”, preventing the event from bubbling up to the VEMap pan. VEMap.ViewportPointToLocation is used to change the screen x,y point to a lat,lon location as the mouse moves across the map. Once MouseLeftButtonUp fires, we can drop our shape at the new location and handle some bookkeeping in the output TextBox.
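ViewportPointToLocation is a black box, but the math it performs is roughly an interpolation across the viewport in Web Mercator space. A rough Python sketch, under the simplifying assumption that the viewport is axis-aligned with a known lat/lon bounding box:

```python
import math

def viewport_point_to_location(px, py, width, height, west, south, east, north):
    """Map a screen pixel (px, py) in a width x height viewport covering the
    given lat/lon box to (lon, lat), interpolating latitude in Mercator space.

    Simplified sketch: real map controls track rotation, zoom, and projection
    details that are ignored here.
    """
    def lat_to_merc(lat_deg):
        # Forward spherical Mercator y for a latitude in degrees
        return math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))

    # Longitude is linear in screen x
    lon = west + (px / width) * (east - west)
    # Latitude is linear in Mercator y, with screen y increasing downward
    m_top, m_bottom = lat_to_merc(north), lat_to_merc(south)
    m = m_top + (py / height) * (m_bottom - m_top)
    lat = math.degrees(2 * math.atan(math.exp(m)) - math.pi / 2)
    return lon, lat
```

Calling this on every MouseMove while the temporary handlers are installed is what keeps the dragged shape glued to the cursor in geographic coordinates.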

Leaving the fill color “Green” indicates that this point has been relocated. It is also important to remove the temporary mouse event handlers so that map pan will work again.

A method of passing Web.config parameters into the xaml code-behind for a Silverlight MapControl project.
This is useful for deployment on multiple servers with differing needs, as well as for locating important credentials in a single place.

Setting up a VEGeocodingService and making use of the resulting locations in a Silverlight MapControl. Geocoding is a common need, and the VE geocoding appears to be adequate. I noticed some strange behaviors when compared to Google. I was especially concerned that the parser seems to modify the address until it finds something that works, in fact anything that works, even locations in other states! The only response indicator that something was modified is MatchCodes[].

Looking at some event driven interaction with new shape layer objects. Event driven interaction is a big plus, with granular events down to individual geometry shapes a map can become a live form for selection and edits. This is why I find Silverlight so attractive. It duplicates the event driven capability pioneered a decade ago in the SVG spec.

This is aimed at creating a national building inventory rather than a terrain model, but it is still very interesting. The interior/exterior ‘as built’ aspect has some novelty to it. Again, virtual and real worlds appear to be on a collision trajectory and may soon overlap in many interesting ways. How to make use of a vast BIM modeling infrastructure is an interesting question. The evolution of GE and VE moves inexorably toward a browser virtual mirror of the real world. Obviously it is useful to first responders such as fire and SWAT teams, but there may be some additional ramifications. Once this data is available, will it be public? If so, how could it be adapted to internet use?

Until recently, 3D modeling in browsers was fairly limited. Google and Microsoft have both recently provided useful API/SDK libraries embeddable inside a browser. The terrain model in GE and VE is vast but still relatively coarse, and buildings were merely box facades in most cases. However, Johannes Kebek’s blog points to a recent VE update which re-enabled building model interiors, allowing cameras to float through interior space. Following a link from Kebek’s blog to Virtual Earth API Release Information, April 2009 reveals these little nuggets:

Digital Elevation Model data – The ability to supply custom DEM data, and have it automatically stitched into existing data.

Terrain Scaling – This works now.

Building Culling Value – Allows control of how many buildings are displayed, based on distance and size of the buildings.

Turn on/off street level navigation – Can turn off some of the special effects that occur when right next to the ground.

Both Google and Microsoft are furnishing modeling tools tied to their versions of virtual online worlds. The Virtual Earth 3dvia technology preview appears to be Microsoft’s answer to Google SketchUp.
The race is on, and shortly both will be views into a virtual world evolving along the lines of the gaming markets. But outside of the big two, GE and VE, is there much hope of an Open Virtual World, OVM? Supposing this BIM data is available to the public, is there likely to be an OpenStreetMap equivalent?

Fortunately, GE- and VE-equivalent tools are available and evolving in the same direction. X3D is an interesting open-standard scene graph schema awaiting better implementation. WPF is a Microsoft standard with some scene-building capability which appears to be on track into Silverlight … eventually. The NASA World Wind Java API lets developers build applet views and more complex Java Web Start applications which allow 3D visualizing; lots of demos here: World Wind Demos. Blender.org may be overkill for BIM, but it is an amazing modeling tool and all open source.

LiDAR Terrain

Certainly BIM models will find their way into browsers, but there needs to be a corresponding evolution of terrain modeling. BIM modeling is at sub-foot resolution while terrain is still at multi-meter resolution. I am hopeful that there will also be a national resource of LiDAR terrain data at sub-meter resolution, but this announcement gives no indication of that possibility.

I’m grappling with how to make use of the higher resolution in a web mapping app. GE doesn’t work well, since you are only draping over their coarse terrain. Silverlight has no 3D yet. WPF could work for small areas like architectural site renderings, but it requires an xbap download similar to Java Web Start. X3D is interesting but has no real browser presence. My ideal would be something like an X3D GeoElevationGrid LOD pyramid, which will not be available to browsers for some years yet. The new VE SDK release, with its ability to add custom DEM data, looks very helpful. Of course, 2D contours from LiDAR are a practical use in a web map environment until 3D makes more progress in the browser.

A national-scale resource coming out of the “Stimulus” package for the US will obviously be absorbed into GE and VE terrain pyramids. But neither offers download access to the backing data, which is what the engineering world currently needs. What would be nice is national coverage at some sub-meter resolution with online tools to build selected areas with a choice of projection, contour interval, TIN, or raw point data. Besides, architectural engineering documents carry a legal burden that probably excludes them from online connectivity.

Not small figures for an open source community resource. Perhaps OVM wouldn’t have to actually host the data at community expense; if the government is kind enough to provide WCS exposure, OVM would only need to be another node in the process chain.

Further into the future, online virtual models of the real world appear helpful for robotic navigation. Having a mirrored model available short circuits the need to build a real time model in each autonomous robot. At some point a simple wireless connection provides all the information required for robot navigation in the real world with consequent simplification. This of course adds an unnerving twist to some of the old philosophical questions regarding perception. Does robotic perception based on a virtual model but disconnected from “the real” have something to say to the human experience?

One of the areas of interest using SimpleDB is the ability to add multiple attribute values. Here is the overview from Amazon’s Service Highlights.

“Flexible – With Amazon SimpleDB, it is not necessary to pre-define all of the data formats you will need to store; simply add new attributes to your Amazon SimpleDB data set when needed, and the system will automatically index your data accordingly. The ability to store structured data without first defining a schema provides developers with greater flexibility when building applications, and eliminates the need to re-factor an entire database as those applications evolve.”

As an extension to my previous blog post I decided to try adding alternative language names:

After adding several languages for ‘Albuquerque, New Mexico,’ I am able to display them as GeoRSS and then as tooltip text in the Virtual Earth API viewer.

I had to add an explicit character encoding to my HTTP response, like this: response.setCharacterEncoding("UTF-8");
Once that was done, I could reliably get UTF-8 character strings in the GeoRSS xml returned to the viewer.

I am not multilingual, not even really bilingual, so where to go for alternate language translations? I had read about an interesting project over at Google: Language Tools.

Here I could simply run a translation of my geoname into whatever languages Google offers. I cannot vouch for their accuracy, but I understand that Google has developed a statistically based language translation algorithm that can beat many if not all rule-based algorithms. It was developed by applying statistical pattern processing to very large sets of “Rosetta stone” type documents that had previously been translated. Because it is not rule-based, it avoids some of the early auto-translation pitfalls, such as translating “hydraulic ram” as a “male water sheep.”

SimpleDB, with its free, unstructured approach to adding attributes, lets me add any number of additional alternateNames values in whatever UTF-8 language character set I wish.
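The shape of such an item is easy to picture. A Python sketch of a point record with a multi-valued alternateNames attribute; the attribute names are illustrative, not the actual SimpleDB domain schema:

```python
# -*- coding: utf-8 -*-
# Illustrative SimpleDB-style item: every attribute maps to a list of
# values, so a new language variant is just one more value under the
# same attribute name.
item = {
    "name": ["Albuquerque"],
    "alternateNames": [
        "Albuquerque",   # English
        "アルバカーキ",    # Japanese
        "Альбукерке",    # Russian
    ],
}

def add_alternate_name(item: dict, name: str) -> dict:
    """Append another language variant, skipping duplicates;
    SimpleDB simply stores an extra value under the same attribute."""
    item.setdefault("alternateNames", [])
    if name not in item["alternateNames"]:
        item["alternateNames"].append(name)
    return item
```

Since SimpleDB indexes every value, a query on any one of the alternate names would still find the same item.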

Although this works nicely for point features, more complex spatial features are unsuited to SimpleDB. The limit of 256 attributes per item and 1,024 bytes per attribute precludes arbitrary-length polyline or polygon geometry. Perhaps an Amazon SimpleDB 2.0 will allow attributes of arbitrary length, which would mean polyline and polygon geometries could be added along with a bbox for intersect queries.

Still it is an interesting approach for storing and viewing point data.