Most of the migration is fairly straightforward and the performance boost is quite significant. Large numbers of points, polylines, and complex polygons can now be rendered without impact on map navigation performance. There are many improvements to the v8 API including spatial geometry functions discussed in a previous post, Spatial to the Browser.

However, I ran into a problem with the Microsoft.Maps.TileLayer. It seems that the way Bing Maps v8 handles tiles causes CORS errors for sporadic tiles in Chrome, Edge, and Firefox, but not IE. In the sample below, PNG image tiles were stored in Azure Blob storage.

Some tile requests are showing a (canceled) status with a CORS policy violation indicated:

Access to Image at ‘http://onterratest.blob.core.windows.net/bingtiles2/county/02301.png’ from origin ‘http://onterrawms.blob.core.windows.net’ has been blocked by CORS policy: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘http://onterrawms.blob.core.windows.net’ is therefore not allowed access.

“Cross-origin resource sharing (CORS) is a mechanism that allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource was served.”

CORS is a useful mechanism for allowing cross-origin access. A CORS AllowedOrigins rule shouldn’t be necessary for images such as PNG or JPEG, but the current Bing Maps v8 TileLayer API has problems in popular browsers (Chrome, Edge, and Firefox) that can be resolved by setting a CORS policy on the blob container.
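One way to add such a rule (it can also be done from the Azure portal) is a short script against the Blob service properties. Here is a minimal sketch, assuming the Node.js azure-storage package; the account name, key, and allowed origin are placeholders:

// Sketch: add a CORS rule to the Blob service so tile images can be
// requested cross-origin by the Bing Maps v8 TileLayer.
var azure = require('azure-storage');
var blobService = azure.createBlobService('myaccount', 'myaccountkey');

var serviceProperties = {
    Cors: {
        CorsRule: [{
            AllowedOrigins: ['http://onterrawms.blob.core.windows.net'], // site hosting the map app
            AllowedMethods: ['GET'],                                     // tiles only need GET
            AllowedHeaders: ['*'],
            ExposedHeaders: ['*'],
            MaxAgeInSeconds: 3600
        }]
    }
};

blobService.setServiceProperties(serviceProperties, function (error) {
    if (error) { console.log('Failed to set CORS rule: ' + error); }
    else { console.log('CORS rule applied to blob service'); }
});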

I first noticed the migration of spatial functions out to the browser back in 2014 with Morgan Herlocker’s GitHub project turf.js; see the earlier Territorial Turf post.

Bing Maps version 8 SDK for web applications, released last summer, follows this trend, adding a number of useful modules that previously required custom programming or at least modification of open source projects.

Among the many useful modules published with this version is Microsoft.Maps.SpatialMath. This release includes 25 geometry functions covering intersection, buffer, convex hull, distance, and many others. Leveraging these geometry functions lets us move analytic work from a SQL backend or C# .NET service layer back out front to the user’s browser.
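As a rough sketch of how the module is typically used (signatures approximated from the SDK documentation, so check the v8 reference for exact parameters; map and parcelPolygon are assumed to exist):

Microsoft.Maps.loadModule('Microsoft.Maps.SpatialMath', function () {
    var SpatialMath = Microsoft.Maps.SpatialMath;

    // distance in meters between two locations
    var meters = SpatialMath.getDistanceTo(
        new Microsoft.Maps.Location(38.89, -77.03),
        new Microsoft.Maps.Location(38.90, -77.01),
        SpatialMath.DistanceUnits.Meters);

    // buffer a parcel polygon by 100 feet and add the result to the map
    var buffered = SpatialMath.Geometry.buffer(
        parcelPolygon, 100, SpatialMath.DistanceUnits.Feet);
    map.entities.push(buffered);
});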

Some useful side effects of this migration for projects using jsonp services include:

As an example of this approach let’s look at the extensive GIS services exposed by the District of Columbia, DCGIS_Apps and DCGIS_Data. In addition to visualizing some layers it would be useful to detect parcels with certain spatial attributes, such as distance to street access and area filters. With this ability, alley-isolated parcels can be highlighted as potential alley development properties.

The algorithm for discovering interior parcels is greatly simplified using lodash and the new Bing Maps v8 spatial geometry functions. Notice that Geometry.distance handles a shortest distance calculation between a property polygon and an array of polylines, “allstreets.”

The findInteriors function computes the shortest distance in feet from each parcel to every street centerline. The filter then keeps parcels whose shortest distance falls between 100 ft and 1000 ft and whose area is greater than 450 sf. Parcels meeting these filter criteria are filled red.
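A rough sketch of what that filter might look like, assuming arrays of Bing Maps parcel polygons and street centerline polylines, with the lodash and SpatialMath modules already loaded (illustrative names and approximated signatures, not the original source):

// For each parcel find the shortest distance to any street centerline,
// then keep parcels that are both far from street access and larger than 450 sf.
function findInteriors(parcels, allstreets) {
    var SpatialMath = Microsoft.Maps.SpatialMath;
    return _.filter(parcels, function (parcel) {
        var shortest = SpatialMath.Geometry.distance(
            parcel, allstreets, SpatialMath.DistanceUnits.Feet);
        var areaSf = SpatialMath.Geometry.area(
            parcel, SpatialMath.AreaUnits.SquareFeet);
        return shortest > 100 && shortest < 1000 && areaSf > 450;
    });
}

// highlight candidate alley parcels in red
_.each(findInteriors(parcels, allstreets), function (parcel) {
    parcel.setOptions({ fillColor: 'rgba(255,0,0,0.6)' });
});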

The Bing spatial distance function uses the higher-accuracy Vincenty algorithm but still performs reasonably well. Experiments using the less accurate Haversine option showed no significant performance difference in this case.

DC GIS Services limit the number of features in a request result to a maximum of 1000. At zoom level 18 the viewport always returns fewer than this maximum, but lower zooms can hit the limit and fail to return all parcels in the view. A warning is triggered when the feature count hits 1000, since the interior parcel algorithm will then have an incomplete result.

Summary

The new Bing Maps v8 adds a great number of features simplifying web mapping app development. In addition, Bing Maps v8 improves performance by making better use of the HTML5 canvas and immediate-mode graphics. This means a larger number of features can be added to a map before map navigation begins to slow. I was able to test with up to 25,000 features with no significant problems using map zoom and pan navigation.

Bing Maps SpatialMath module provides many useful spatial algorithms that previously required a server and/or SQL backend. The result is simpler web map applications that can be hosted directly in Azure blob storage.

Is Heidegger serious or just funn’n us, when his discursive rambling winds past the abolition of all distances, wanders around thinginess, and leads us to “some-thing” from “no-thing?”

“The failure of nearness to materialize in consequence of the ‘abolition of all distances’ has brought the distanceless to dominance. In the default of nearness the thing remains annihilated as a thing in our sense. But when and in what way do things exist as things? This is the question we raise in the midst of the dominance of the distanceless.”

“The emptiness, the void, is what does the vessel’s holding. The empty space, this nothing of the jug, is what the jug is as the holding vessel.”

“The jug’s essential nature, its presencing, so experienced and thought of in these terms, is what we call thing.”

So class, we may conclude that our spatial attribute is not the essence of the thing. However, IoT does not concern itself with das Ding an sich, but with the mechanism of appearance, or how “noumenon” communicates “phenomenon” within the internet. Therefore, we must suppose IoT remains Kantian in spite of Heidegger’s prolix lecturing. And, spatial attributes do still exist.
No? … Really? Phew I was worried about my job for a minute!
(Actually I always wanted to drag Heidegger into a post on maps.)

IoT Things
Of course, IoT just wants “things”, “stuff”, “devices” to have a part in the cloud just like the rest of us. Dualism, Monism who cares? It’s all about messages. Which is where Microsoft Azure IoT comes in.

Devices and sensors are just small computers, for which Microsoft introduced Windows IoT Core, a scaled-down Windows OS for devices like the Raspberry Pi, offered freely to feed the IoT Hub. The Maker community can now use Windows and Visual Studio Express to latch up GPIO and send telemetry messages via Bluetooth or WiFi. At $49, Microsoft’s Raspberry Pi 3 Starter Kit offers the latest single-board computer with Windows IoT Core embedded on a MicroSD card for experimenters. It should make hardware playtime easier for anyone in the Microsoft community.

Azure IoT Hub is the key piece of technology. IoT Hub is infrastructure for handling messages across a wide array of devices and software, and it scales to enterprise dimensions. Security, monitoring, and device management are built in. The value proposition is easy to see if you’ve ever dealt with fleet management or SCADA networks. Instead of writing services on multiple VMs to catch TCP packets and sort them to various storage and events, it’s easy to sign up for an Azure IoT Hub and let Azure worry about reliability, scaling, and security.
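To give a sense of how small the device-side code can be, here is a minimal telemetry sketch using Microsoft’s Node.js azure-iot-device SDK; the connection string and payload values are placeholders:

var clientFromConnectionString = require('azure-iot-device-amqp').clientFromConnectionString;
var Message = require('azure-iot-device').Message;

// device connection string copied from the IoT Hub device registry (placeholder)
var connectionString = 'HostName=myhub.azure-devices.net;DeviceId=mydevice;SharedAccessKey=...';
var client = clientFromConnectionString(connectionString);

client.open(function (err) {
    if (err) { return console.error('Could not connect: ' + err); }

    // simple telemetry payload - a reading plus a location
    var data = JSON.stringify({ deviceId: 'mydevice', temperature: 72.4, lat: 39.74, lon: -104.99 });
    client.sendEvent(new Message(data), function (err) {
        if (err) { console.error('send error: ' + err); }
        else { console.log('telemetry sent to IoT Hub'); }
    });
});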

Note that Machine Learning is part of the platform diagram. Satya Nadella’s Build 2016 keynote emphasized “the intelligent cloud” and of course R Project plays a role in predictive intelligence, so we can begin to see Microsoft marshalling services and tools for the next generation of cloud AI.

Thinking of ubiquitous sensors naturally (or unnaturally, depending on your predisposition regarding the depravity of man and machine) brings to mind primitive organism possibilities as well as shades of HAL. Also noteworthy, “IoT Message Queues can be bidirectional,” so the order of Things and Humans can easily be reversed. Perhaps Microsoft’s embrace of artificial intelligence will cycle it back to the preeminent “seat of evil corporate empire” currently occupied by Google.

Azure IoT Hub deployment

Fig 4 – Azure Portal IoT Hub deployment

Once the Azure IoT Hub is deployed, the next step is to add a Stream Analytics Job to the pipeline. These are jobs for processing telemetry streams into sinks such as SQL storage or visualizations. A Stream Analytics Job connects an input to an output with a processing query filter in between.

A much more involved example of an IoT and Mobile App is furnished by Microsoft: My Driving
Microsoft’s complete solution is available on GitHub with details.

Summary

Microsoft is forging ahead with Azure, offering numerous infrastructure options that make IoT a real possibility for small and medium businesses. Collecting data from diverse devices is getting easier with the addition of Windows IoT Core, VS2015 Xamarin, and Azure IoT Hubs with Stream Analytics Jobs. Fleet management services will never be the same.

Spatial data still plays a big part in telemetry, since every stationary sensor involves a location and every mobile device a GPS stream. Ubiquitous sensor networks imply the need for spatial sorting and visualization, at least while humans are still in the loop. Remove the human and Heidegger’s “abolition of all distances” reappears, but then sadly you and I disappear.

The mainstay of web mapping applications for the last couple of decades has been three tier: Model – SQL, View – web UI, and Controller – server code. There are many variations on this theme: models residing in image tile pyramids, SQL Server, PostGIS, or Oracle; controller server code as Java, C#, or PHP. The visible action is on the viewer side. HTML5 with ever-expanding JavaScript libraries like jQuery, Bootstrap, and Angular.js makes life interesting, while Node.js is pushing JavaScript upstream to the controller.

For building end user applications it helps to know all three tiers and have at least one tool in each. With the right tools you can eventually accomplish just about anything spatially interesting. Emphasis is on the word “eventually.” SQL <=> C# <=> html5/JavaScript is very powerful, but extravagant for “one off” analytical work.

For ad hoc spatial work it was usually best to stick to a desktop application, such as one of the big dollar Arc___ variations or, better yet, something open source like QGIS. In the early days these generally consisted of modular C/C++ functions threaded together with an all-purpose scripting language. If you wanted to get a little closer to the geo engine, knowledge of a scripting language (PHP, TCL, Python, or Ruby) helped to script modular toolkits like GDAL/OGR, OSSIM, GEOS, or GMT. This all works fine except for learning and relearning often arcane syntax, while repeatedly discovering and reading data documentation on various public resources from Census, USGS, NOAA, NASA, JPL … you get the idea.

R changes things in the geospatial world. The R project originated as a modular statistics and graphics toolkit. Unless you happen to be a true math prodigy, statistics are best visualized graphically. With powerful graphics libraries, R has evolved into a useful platform for ad hoc spatial analysis.

Coupled with an IDE such as RStudio, or the new Microsoft R Tools for Visual Studio, R wraps a large stable of component libraries into a script interpreter environment, ideal for “one off” analysis. Although learning arcane syntax is still a prerequisite, there is at least a universal environment with a really large contributor community. You can think of it as an open source replacement for Tableau or Power BI, but without proprietary limitations.

Community contributions are found in CRAN, the Comprehensive R Archive Network for the R programming language. A search of CRAN or MRAN (Microsoft R Archive Network) for the term “spatial” yields a list of 145 R libraries.

For example, tigris is a useful library for reading US Census TIGER files. With just a couple of lines of R scripting you can zoom around a polygonal plot of US Census urban areas. The tigris library handles all the details of obtaining the TIGER polygons and loading them into local memory, while the leaflet library handles creating the polygons and displaying them over a default Leaflet tile map.

RTVS – R Tools for Visual Studio
Microsoft’s R Tools for Visual Studio IDE using the Data Science R settings. Visual Studio users will find all the familiar debug stepping, variable explorer, and IntelliSense editing they use with other development languages.

R provides lots of interesting modules that help with spatial analytics. The script engine makes it easy to perform ad hoc visualization and publish the results online. However, there are limitations in performance and extents that make it more of a competitor to desktop GIS products or the newer commercial data visualizers like Tableau or PowerBI. For public facing web applications with generalized extents three tier performance using SQL + server code + web UI still makes the most sense.

The advent of Microsoft R Server and SQL Server R Services adds scaling performance to make R solutions more competitive with the venerable three tier approach. It will be interesting to see how developers make use of SQL Server R Services. As a method of adding raster functionality to SQL Server, R sp_execute_external_script overlaps somewhat with PostGIS Raster. Exploring SQL Server 2016 R Services must await a future post.

Fig 1 – Population skyline of New York - Census SF1 P0010001 and Bing Road Map

Demographic Terrain

My last blog post, Demographic Landscapes, described leveraging new SFCGAL PostGIS 3D functions to display 3D extruded polygons with X3Dom. However, there are other ways to display census demographics with WebGL. WebGL meshes are a good way to handle terrain DEM. In essence the US Census is providing demographic value terrains, so 3D terrain meshes are an intriguing approach to visualization.

Babylon.js is a powerful 3D engine written in JavaScript for rendering WebGL scenes. In order to use Babylon.js, US Census SF1 data needs to be added as WebGL 3D mesh objects. Babylon.js offers a low-effort tool for generating these meshes from grayscale height map images: BABYLON.Mesh.CreateGroundFromHeightMap.

Modifying the Census WMS service to produce grayscale images at the highest SF1 polygon resolution, i.e. tabblock, is the easiest approach to generating these meshes. I added an additional custom request to my WMS service, “request=GetHeightMap,” which returns a PixelFormat.Format32bppPArgb bitmap with demographic values coded as grayscale. This is equivalent to a 255-range classifier for population values.
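A hedged sketch of how such a grayscale response can be fed to Babylon.js; the GetHeightMap URL parameters reflect my custom WMS extension and are only illustrative, and bbox and scene are assumed to already exist:

// request a grayscale demographic height map and turn it into a terrain mesh
var heightMapUrl = '/WMS/census.aspx?request=GetHeightMap'
    + '&layers=tabblock&value=P0010001'
    + '&bbox=' + bbox.join(',') + '&width=512&height=512';

var ground = BABYLON.Mesh.CreateGroundFromHeightMap(
    'demographicTerrain',   // mesh name
    heightMapUrl,           // grayscale image, 0-255 mapped to height
    200, 200,               // mesh width and height in scene units
    250,                    // subdivisions - more means finer terrain
    0, 20,                  // min and max height for the 0-255 grayscale range
    scene, false);

// drape the matching thematic GetMap image over the terrain
var mat = new BABYLON.StandardMaterial('census', scene);
mat.diffuseTexture = new BABYLON.Texture(
    '/WMS/census.aspx?request=GetMap&layers=tabblock&bbox=' + bbox.join(','), scene);
ground.material = mat;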

Microsoft HoloLens will up the coolness but not the hipster factor, while improving usability immensely (when released). I’m inclined to the minimalist movement myself, but I’d be willing to write a Windows 10 app with the HoloLens SDK to see how well it performs.

PostGIS 2.2 is due for release sometime in August of 2015. Among other things, PostGIS 2.2 adds some interesting 3D functions via SFCGAL. ST_Extrude in tandem with ST_AsX3D offers a simple way to view a census polygon map in 3D. With these functions built into PostGIS, queries returning x3d text are possible.

The x3d format is not directly visible in a browser, but it can be packaged into x3dom for use in any WebGL enabled browser. Packaging x3d into an x3dom container allows returning an .x3d MIME type, model/x3d+xml, for use as x3dom inline content in HTML.

In this case I added a non-standard WMS service adapted to add a new kind of request, GetX3D.

x3d is an open XML standard for leveraging immediate mode WebGL graphics in the browser. x3dom is an open source JavaScript library for translating x3d XML into WebGL in the browser.

“X3DOM (pronounced X-Freedom) is an open-source framework and runtime for 3D graphics on the Web. It can be freely used for non-commercial and commercial purposes, and is dual-licensed under MIT and GPL license.”


Why X3D?

I’ll admit it’s fun, but novelty may not always be helpful. Adding elevation does show demographic values in finer detail than the coarse classification used by a thematic color range. This experiment did not delve into the bivariate world, but multiple value modelling is possible using color and elevation with perhaps less interpretive misgivings than a bivariate thematic color scheme.

However, pondering the visionary, why should analysts be left out of the upcoming immersive world? If Oculus Rift, HoloLens, or Google Cardboard are part of our future, analysts will want to wander through population landscapes exploring avenues of millennials and valleys of the aged. My primitive experiments show only a bit of demographic landscape, but eventually demographic terrain streams will be layer choices available to web users for exploration.

Demographic landscapes like the census are still grounded, tethered to real objects. The towering polygon on the left recapitulates a geophysical apartment high-rise; a looming block of 18-22 year olds reflects a military base. But models can potentially float away from geophysical grounding. Facebook networks are less about physical location than network relation. Abstracted models of relationship are also subject to helpful visualization in 3D. Unfortunately we have only a paltry few dimensions to work with, ruling out value landscapes of higher dimensions.

Fig 3 – P0010001 Jenks Population

Some problems

For some reason IE11 always reverts to software rendering instead of using the system’s GPU. Chrome provides hardware rendering, with a consequently smoother user experience. Obviously the level of GPU support available on the client directly correlates with the maximum x3dom complexity and user experience.

In some cases the ST_Extrude result is rendered to odd surfaces with multiple artifacts. Here is an example with low population in eastern Colorado. Perhaps the extrusion surface breaks down due to tessellation issues on complex polygons with zero or near zero height. This warrants further experimentation.

Fig 2 – rendering artifacts at near zero elevations

The performance/complexity threshold on a typical client is fairly low, so it’s tricky to keep model sizes small enough for acceptable performance. IE11 is especially vulnerable to collapse due to software rendering. In this experiment the x3d view is limited to intersections with the extent of the selected territory using turf.js.

var extent = turf.extent(app.territory);
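Expanding that a bit, the limiting step might look something like this with the turf 2.x style API (app.territory and the candidate features array are assumed):

var extent = turf.extent(app.territory);       // [minX, minY, maxX, maxY]
var extentPoly = turf.bboxPolygon(extent);     // bounding box as a polygon

// keep only features that intersect the territory's extent
var visible = features.filter(function (f) {
    return turf.intersect(f, extentPoly) != null;
});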

In addition, making use of PostGIS ST_SimplifyPreserveTopology helps reduce polygon complexity.

XML formats like x3d tend to be verbose, and newer lightweight frameworks prefer a JSON interchange. JSON for WebGL is not officially standardized, but there are a few resources available.

An interesting GIS development over the years has been the evolution from monolithic applications to multiple open source plug and play tools. Considering GIS as a three tier system with back end storage, a controller in the middle, and a UI display out front, more and more of the middle tier is migrating to either end.

SQL DBs such as SQL Server, Oracle Spatial, and especially PostGIS now implement a multitude of GIS functions originally considered middle tier domain. On the other end, the good folks at turf.js and proj4js continue to push atomic GIS functions out to JavaScript, where they can fit into the client side UI. The middle tier is getting thinner and thinner as time goes on. Generally the middle is what costs money, so rolling your own middle with less work is a good thing. As a desirable side effect, instead of kitchen sink GIS, very specific user tools can be cobbled together as needed.

Looking for an excuse to experiment with turf.js, I decided to create a Territory Builder utilizing turf.js on the client and some of the US Census Bureau Services on the backend.

The US Census Bureau does expose some “useful” services at TigerWeb. I tend to agree with Brian Timoney that .gov should stick to generating data exposed in useful ways. Apps and presentation are fast-evolving targets, and historically .gov can’t really hope to keep up. Although you can use the custom TigerWeb applications for some needs, there are many other occasions when you would like to build something less generic, for example a Territory Builder over Bing Maps.

Fig 2 – Territory polygon with a demographic overlay on top of Bing Maps

Note: Because there doesn’t appear to be an efficient way to join demographic data to geography with current TigerWeb service APIs, the demographic tab of this app uses a custom WMS PostGIS backend, hooking SF1 to TIGER polygons.

WMS GetMap requests simply return an image. In order to overlay the image on a Bing Map, this Territory app uses an HTML5 canvas and context: app.context.drawImage(imageObj, 0, 0). The trick is to stack the canvas in between the Bing Map and the Bing Map navigation, scale, and selector tools. TigerWeb WMS conveniently exposes EPSG:3857, which correctly aligns with Bing Maps tiles.
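Roughly, the overlay step looks like this; the wmsEndpoint, layerName, and the bounds-to-meters helper are placeholders, and app.canvas is the canvas stacked over the map:

var bounds = map.getBounds();                  // current Bing Maps view
var bbox = toEpsg3857Bbox(bounds);             // assumed helper: [minX, minY, maxX, maxY] in meters

var url = wmsEndpoint
    + '?service=WMS&request=GetMap&version=1.3.0'
    + '&layers=' + layerName + '&styles=&format=image/png&transparent=true'
    + '&crs=EPSG:3857'
    + '&width=' + app.canvas.width + '&height=' + app.canvas.height
    + '&bbox=' + bbox.join(',');

var imageObj = new Image();
imageObj.onload = function () {
    app.context.clearRect(0, 0, app.canvas.width, app.canvas.height);
    app.context.drawImage(imageObj, 0, 0);     // image aligns with the EPSG:3857 Bing tiles
};
imageObj.src = url;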

Unfortunately the WMS GetMap request has no IDs available for polygons, even if the requested format is image/svg+xml. SVG is an XML format and could easily contain associated GeoID values for joining with other data resources, but this is contrary to the spirit of the OGC WMS service specs. Naturally we must obey OGC Law, which is too bad. Adding a GeoID attribute would allow options such as choropleth fills directly on existing svg paths. For example, adding id="80132" would allow fill colors by zip code with a bit of JavaScript.

The GEOID retrieved from our proxied GetFeatureInfo request allows us to grab vertices with another TigerWeb service, Census REST.

This spec is a little more proprietary and requires some detective work to unravel:

FeatureUrl.url = "http://tigerweb.geo.census.gov/arcgis/rest/services/TIGERweb/PUMA_TAD_TAZ_UGA_ZCTA/MapServer/1/query?where=GEOID%3D" + geoid + "&geometryPrecision=6&outSR=4326&f=pjson";

There are different endpoint URLs for the various polygon types; in this case zip codes are found in PUMA_TAD_TAZ_UGA_ZCTA. We don’t need more than 1 meter resolution, so precision 6 is good enough, and we would like the results in EPSG:4326 to avoid a proj4 transform on the client.
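Consuming the result is then a matter of parsing the returned rings into map polygons. A hedged sketch, with jQuery assumed for brevity and illustrative styling:

$.getJSON(FeatureUrl.url, function (data) {
    if (!data.features || data.features.length === 0) { return; }

    // ArcGIS REST polygons come back as arrays of rings of [x, y] pairs
    var rings = data.features[0].geometry.rings.map(function (ring) {
        return ring.map(function (pt) {
            return new Microsoft.Maps.Location(pt[1], pt[0]);   // y = lat, x = lon in EPSG:4326
        });
    });

    var polygon = new Microsoft.Maps.Polygon(rings, {
        fillColor: 'rgba(0,100,255,0.3)',
        strokeColor: 'blue'
    });
    map.entities.push(polygon);
});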

Census REST doesn’t appear to offer a simplify parameter, so the coordinates returned are at the highest resolution. Highly detailed polygons can easily return several thousand vertices, which is a problem for performance, but the trade-off is eliminating the need to host the data ourselves.

TigerWeb offers some useful data access. With TigerWeb WMS and REST api, developers can customize apps without hosting and maintaining a backend SQL store. However, there are some drawbacks.

Some potential improvements:
1. Adding an svg id=GeoID would really improve the usefulness of TigerWeb WMS image/svg+xml, possibly eliminating steps 2 and 3 of the workflow.

2. Technically it’s possible to use the TigerWeb REST api to query geojson by area, but practically speaking the results are too detailed for useful performance. A helpful option for TigerWeb REST would be a parameter to request simplified polygons and avoid lengthy vertex transfers.

turf.js is a great toolbox; however, the merge function occasionally had trouble with complex polygons from TigerWeb.

Fig 1 - SF1QP Quantile Population County P0010001 P1.TOTAL POPULATION Universe: Total population

Preparation for US 2020 Census is underway at this mid-decennial point and we’ll see activity ramping up over the next few years. Will 2020 be the last meaningful decennial demographic data dump? US Census has been a data resource since 1790. It took a couple centuries for Census data to migrate into the digital age, but by Census 2000, data started trickling into the internet community. At first this was simply a primitive ftp data dump, ftp2.census.gov, still very useful for developers, and finally after 2011 exposed as OGC WMS, TigerWeb UI, and ESRI REST.

However, static data in general, and decennial static data in particular, is fast becoming anachronistic in the modern era. Surely the NSA data tree looks something like phone number JOIN Facebook account JOIN Twitter account JOIN social security id JOIN bank records JOIN IRS records JOIN medical records JOIN DNA sequence….. Why should this data access be limited to a few black budget bureaus? Once the data tree is altered a bit to include mobile devices, static demographics are a thing of the past. Queries in 2030 may well ask “how many 34 year old male Hispanic heads of households with greater than 3 dependents with a genetic predisposition to diabetes are in downtown Denver Wed at 10:38AM, at 10:00PM?” For that matter let’s run the location animation at 10 minute intervals for Tuesday and then compare with Sat.

“Privacy? We don’t need no stinking privacy!”

I suppose Men in Black may find location-aware DNA queries useful for weeding out hostile alien grays, but shouldn’t local cancer support groups also be able to ping potential members as they wander by Starbucks? Why not allow soda vending machines to check for your diabetic potential and credit before offering appropriate selections? BTW how’s that veggie smoothie?

By late 2011 census OGC services began to appear along with some front end data web UIs, and ESRI REST interfaces. [The ESRI connection is a tightly coupled symbiotic relationship as the Census Bureau, like many government bureaucracies, relies on ESRI products for both publishing and consuming data. From the outside ESRI could pass as an agency of the federal government. For better or worse “Arc this and that” are deeply rooted in the .gov GIS community.]

For mapping purposes there are two pillars of Census data, spatial and demographic. The spatial data largely resides as TIGER data while the demographic data is scattered across a large range of products and data formats. In the basement, a primary demographic resource is the SF1, Summary File 1, population data.

“Summary File 1 (SF 1) contains the data compiled from the questions asked of all people and about every housing unit. Population items include sex, age, race, Hispanic or Latino origin, household relationship, household type, household size, family type, family size, and group quarters. Housing items include occupancy status, vacancy status, and tenure (whether a housing unit is owner-occupied or renter-occupied).”

The intersection of SF1 and TIGER is the base level concern of census demographic mapping. There are a variety of rendering options, but the venerable color themed choropleth map is still the most widely recognized. This consists of assigning a value class to a color range and rendering polygons with their associated value color. This then is the root visualization of Census demographics, TIGER polygons colored by SF1 population classification ranges.

Unfortunately, access to this basic visualization is not part of the 2010 TigerWeb UI.

There are likely a few reasons for this, even aside from the glacially slow adoption of technology at the Bureau of the Census. A couple of obvious reasons are the sheer size of this data resource and the range of the statistics gathered. A PostGIS database with a 5-level primary spatial hierarchy, all 48 SF1 population value files, appropriate indices, plus a few helpful functions consumes a reasonable 302.445 GB of generic Amazon EC2 SSD elastic block storage. But contained in those 48 SF1 tables are 8912 demographic values, which you are welcome to peruse here. A problem for any UI is how to make 8912 values plus 5 spatial levels usable.

Fig 3 – 47 SF1 tables plus sf1geo geography join file

Filling a gap

Since the Census Bureau budget did not include public visualization of TIGER/Demographics what does it take to fill in the gap? Census 2010 contains a large number of geographic polygons. The core hierarchy for useful demographic visualization is state, county, tract, block group, and block.

Fig 4 – Census polygon hierarchy

Loading the data into PostGIS affords low cost access to data for SF1 Polygon value queries such as this:

A. These polygon counts rule out visualizations of the entire USA, or even moderate regions, at tract+ levels of the hierarchy. Vector mapping is not optimal here.

B. The number of possible image tile pyramids for 8912 values over 5 polygon levels is 5 * 8912 = 44,560. This rules out tile pyramids of any substantial depth without some deep Google-like pockets for storage. Tile pyramids are not optimal either.

C. Even though vector grid pyramids would help with these 44,560 demographic variations, they suffer from the same restrictions as A. above.

One possible compromise between performance and visualization is to use an old-fashioned OGC WMS GetMap request scheme that treats polygon types as layer parameters and demographic types as style parameters. With appropriate use of the WMS <MinScaleDenominator> and <MaxScaleDenominator> elements, layers are only rendered at sufficient zoom to reasonably limit the number of polygons. Using this scheme puts rendering computation right next to the DB on the same EC2 instance, while network latency is reduced to a simple JPEG/PNG image download. Scaling access to public consumption is still problematic, but for in-house use it does work.

There are still issues with a scale rendering approach. Since population is not very homogeneous over the US coverage extent, scale dependent rendering asks to be variable as well. This is easily visible over population centers. Without some type of pre-calculated density grid, the query is already completed prior to knowledge of the ideal scale dependency. Consequently, static rendering scales have to be tuned to high population urban regions. Since “fly over” US is generally less interesting to analysts, we can likely live with this compromise.

Dividing a value curve to display adequately over a viewport range can be accomplished in a few different ways: equal intervals, equal quantiles, Jenks natural breaks optimization, K-means clustering, or “other.” Leaning toward the simpler, I chose a default quantile (which guarantees some color) with a ten-class single-hue progression, which of course is not recommended by ColorBrewer. However, 10 seems an appropriate number for decennial data. I also included a Jenks classifier option, which is considered a better representation. The classifier is based only on visible polygons rather than the entire polygon population. This means comparisons region to region are deceptive, but after all this is visualization of statistics.
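For reference, the quantile break calculation is only a few lines of JavaScript (a generic sketch, not the service code; visibleValues and polygonValue are assumed inputs):

// compute quantile class breaks for the visible polygon values,
// then map each value to one of ten classes for a single-hue color ramp
function quantileBreaks(values, classes) {
    var sorted = values.slice().sort(function (a, b) { return a - b; });
    var breaks = [];
    for (var i = 1; i <= classes; i++) {
        breaks.push(sorted[Math.min(sorted.length - 1,
            Math.floor(i * sorted.length / classes) - 1)]);
    }
    return breaks;                                    // upper bound of each class
}

function classify(value, breaks) {
    for (var i = 0; i < breaks.length; i++) {
        if (value <= breaks[i]) { return i; }
    }
    return breaks.length - 1;
}

var breaks = quantileBreaks(visibleValues, 10);       // e.g. visible P0010001 values
var classIndex = classify(polygonValue, breaks);      // index into a ten-color ramp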

“There are three kinds of lies: lies, damned lies, and statistics.” Mark Twain

Fig 9 – SF1JP SF1 Jenks Census Tract P0010001 (not density)

In order to manage Census data on a personal budget these compromises are involved:

This is a workable map service for a small number of users. Exposing it as an OGC WMS service offers some advantages. First, there are already a ton of WMS clients available to actually see the results. Second, the query, geometry parsing, and image computation (including any required re-projection) are all server side on the same instance, reducing network traffic. Unfortunately, the downside is that the computation cost is significant and discouraging for a public facing service.

Because this data happens to be read only for ten years, scaling is not too hard, as long as there is a budget. It would also be interesting to try some reconfiguration of data into NoSQL type key/value documents with perhaps each polygon document containing the 8912 values embedded along with the geometry. This would cost a bit in storage size but could decrease query times. NoSQL also offers some advantages for horizontal scaling.

Summary

The Census Bureau and its census are obviously not going away. The census is a bureaucracy with a curious inertial life stretching back to the founding of our country (United States Constitution Article 1, section 2). Although static aggregate data is not going to disappear, dynamic real time data has already arrived on stage in numerous and sundry ways from big data portals like Google, to marketing juggernauts like Coca Cola and the Democratic Party, to even more sinister black budget control regimes like the NSA.

Census data won’t disappear. It will simply be superseded.

The real issue for 2020 and beyond is, how to actually intelligently use the data. Already data overwhelms analytic capabilities. By 2030, will emerging AI manage floods of real time data replacing human analysts? If Wall Street still exists, will HFT algos lock in dynamic data pipelines at unheard of scale with no human intervention? Even with the help of tools like R Project perhaps the human end of data analysis will pass into anachronism along with the decennial Census.

In the background of the internet lies this ongoing discussion of epistemology. It’s an important discussion with links to crowd source algos, big data, and even AI. Perhaps it’s a stretch to include maps, which after all mean to represent “exactitude in science” or JTB, Justified True Belief. On the one hand we have the prescience of Jorge Luis Borges concisely represented by his single paragraph short story.

…In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

As Jorge Luis Borges so aptly implies, the issue of epistemology swings between scientific exactitude and cultural fondness, an artistic reference to the unsettling observations of Thomas Kuhn’s paradigm shiftiness, The Structure of Scientific Revolutions.

“The territory no longer precedes the map, nor does it survive it. It is nevertheless the map that precedes the territory—precession of simulacra—that engenders the territory”

In a less postmodern sense we can point to the recent spectacle of Nicaraguan sovereignty extending into Costa Rica, provoked by the preceding Google Map error, as a very literal “precession of simulacrum.” See details in Wired.

We now have map border wars and a crafty Google expedient of representing Arunachal Pradesh according to client language. China sees one thing, India another, and all are happy. So maps are not exempt from geopolitical machinations any more than Wikipedia. Of course the secular bias of Google invents an agnostic viewpoint of neither here nor there, in the process presuming a superior vantage and relegating “simplistic” nationalism to a subjected role of global ignorance. Not unexpectedly, global corporations wield power globally, and therefore their interests lie supranationally.

Perhaps in a Jean Baudrillard world the DPRK could disappear for ROK viewers and vice versa resolving a particularly long lived conflict.

“The best books, he perceived, are those that tell you what you know already.”
George Orwell, 1984 p185

The consumer is king and this holds true in search and advertising as well as in Aladdin’s tale. Search filters at the behest of advertising money work very well at fencing us into smaller and smaller bubbles of our own desire. The danger of self-referential input is well known as narcissism. We see this at work in contextual map bubbles displaying only relevant points of interest from past searches.

With Google Glass, self-referential virtual objects can literally mask any objectionable reality. Should a business desire to pop a filter bubble, only a bit more money is required. In the end, map POI algorithms dictate desire by limiting context. Are “personalized” maps a hint of the precession of simulacra, or simply one more example of rampant technical narcissism?

Realpolitik

In the political realm elitists such as Cass Sunstein want to nudge us, which is a yearning of all mildly totalitarian states. Although cognitive infiltration will do in a pinch, “a boot stamping on a human face” is reserved for a last resort. How might the precession of simulacra assist the fulfillment of Orwellian dreams?

Naturally, political realities are less interested in our desires than their own. This is apparently a property of organizational ascendancy. Whether corporations or state agencies, at some point of critical mass organizations gain a life of their own. The organization eventually becomes predatory, preying on those they serve for survival. Political information bubbles are less about individual desires than survival of the state. To be blunt “nudge” is a euphemism for good old propaganda.

Fig 2 - Propaganda Map - more of a shove than a nudge

The line from Sunstein to a Clinton, of either gender, is short. Hillary Clinton has long decried the chaotic democracy of page ranked search algorithms. After noting that any and all ideas, even uncomfortable truths, can surface virally in a Drudge effect, Hillary would insist “we are all going to have to rethink how we deal with the Internet.” At least she seems to have thought creatively about State Dept emails. Truth is more than a bit horrifying to oligarchs of all types, as revealed by the treatment of Edward Snowden, Julian Assange, and Barrett Brown.

Truth Vaults

Enter Google’s aspiration to Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources. In other words a “truth page ranking” to supplant the venerable but messily democratic “link page ranking.” Why, after all, leave discretion or critical thought to the unqualified masses? For the history minded, this is rather reminiscent of pre-reformation exercise of Rome’s magisterium. We may soon see a Google Magisterium defining internet truth, albeit subject to FCC review.

“The net may be “neutral” but the FCC is most certainly not.”

According to Google: “Nothing but the truth.” I mean who could object? Well there seem to be some doubters among the hoi polloi. How then does this Google epistemology actually work? What exactly is Justified True Belief in Google’s Magisterium and how much does it effectively overlap with the politically powerful?

“The fact extraction process we use is based on the Knowledge Vault (KV) project.”

“Knowledge Vault has pulled in 1.6 billion facts to date. Of these, 271 million are rated as “confident facts”, to which Google’s model ascribes a more than 90 per cent chance of being true. It does this by cross-referencing new facts with what it already knows.”

“Google’s Knowledge Graph is currently bigger than the Knowledge Vault, but it only includes manually integrated sources such as the CIA Factbook.”

“This is the most visionary thing,” says Suchanek. “The Knowledge Vault can model history and society.”

Per Jean Baudrillard read “model” as a verb rather than a thing. Google (is it possible to do this unwittingly?) arrogates a means to condition the present, in order to model the past, to control our future, to paraphrase the Orwellian syllogism.

“Who controls the past controls the future. Who controls the present controls the past.”
George Orwell, 1984

“LazyTruth developer Matt Stempeck, now the director of civic media at Microsoft New York, wants to develop software that exports the knowledge found in fact-checking services such as Snopes, PolitiFact, and FactCheck.org so that everyone has easy access to them.”

“Everybody should be questioning,” says McNutt. “That’s a hallmark of a scientist. But then they should use the scientific method, or trust people using the scientific method, to decide which way they fall on those questions.”

Ah yes the consensus of “Experts,” naturally leading to the JTB question, whose experts? IPCC may do well to reflect on Copernicus in regards to ancien régime and scientific consensus.

Google’s penchant for metrics and algorithmic “neutrality” neatly papers over the Mechanical Turk or two in the vault so to speak.

Future of simulacra

In a pre-digital Soviet era, map propaganda was an expensive proposition. Interestingly today Potemkin maps are an anachronistic cash cow with only marginal propaganda value. Tomorrow’s Potemkin maps according to Microsoft will be much more entertaining but also a bit creepy if coupled to brain interfaces. Brain controls are inevitably a two way street.

“Microsoft HoloLens understands your movements, vision, and voice, enabling you to interact with content and information in the most natural way possible.”

The only question is, who is interacting with content in the most un-natural way possible in the Truth Vault?

Is our cultural fondness leaning toward globally agnostic maps of infinite plasticity, one world per person? Jean Baudrillard would likely presume the Google relativistic map is the order of the day, where precession of simulacra induces a customized world generated in some kind of propagandistic nirvana, tailored for each individual.

But just perhaps, the subtle art of Jorge Luis Borges would speak to a future of less exactitude:

“still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.”

I suppose to be human is to straddle exactitude and art, never sure whether to land on truth or on beauty. Either way, we do well to Beware of Truth Vaults!

Most modern browsers now support the HTML5 WebGL standard: Internet Explorer 11+, Firefox 4+, Google Chrome 9+, and Opera 12+.
One of the latest to the party is IE 11.

Fig 2 – html5 test site showing WebGL support for IE11

WebGL support means that GPU power is available to JavaScript developers in supporting browsers. GPU technology fuels the $46.5 billion “vicarious life” industry. Video gaming surpasses even Hollywood movie tickets in annual revenues, but this projection shows a falling revenue curve by 2019. It’s hard to say why the decline, but is it possibly an economic side effect of too much vicarious living? The relative merits of passive versus active forms of “vicarious living” are debatable, but as long as technology chases these vast sums of money, GPU geometry pipeline performance will continue to improve year over year.

WebGL exposes immediate mode graphics pipelines for fast 3D transforms, lighting, shading, animations, and other amazing stuff. GPU induced endorphin bursts do have their social consequences. Apparently, Huxley’s futuristic vision has won out over Orwell’s, at least in internet culture.

“In short, Orwell feared that what we fear will ruin us. Huxley feared that our desire will ruin us.”

Aside from the Soma-like addictive qualities of game playing, game creation is actually a lot of work. Setting up WebGL scenes with objects, textures, shaders, transforms … is not a trivial task, which is where Dave Catuhe’s Babylon.js framework comes in. Dave has been building 3D engines for a long time. In fact I’ve played with some of Dave’s earlier efforts in ye olde Silverlight days of yore.

“I am a real fan of 3D development. Since I was 16, I spent all my spare time creating 3d engines with various technologies (DirectX, OpenGL, Silverlight 5, pure software, etc.). My happiness was complete when I discovered that Internet Explorer 11 has native support for WebGL. So I decided to write once again a new 3D engine but this time using WebGL and my beloved JavaScript.”

Dave’s efforts improve with each iteration, and Babylon.js is a wonderfully powerful yet simple-to-use JavaScript WebGL engine. The usefulness/complexity curve is a rising trend. To be sure, a full-fledged gaming environment is still a lot of work, and with babylon.js much of the heavy lifting falls to the art design guys. From a mapping perspective I’m happy to forgo the gaming, but still enjoy some impressive 3D map building with low effort.

In order to try out babylon.js I went back to an old standby, NASA Earth Observation data. NASA has kindly provided an OGC WMS server for their earth data. Brushing off some old code I made use of babylon.js to display NEO data on a rotating globe.

Setting hasAlpha to true lets us show a secondary earth texture through the NEO overlay where data was not collected. For example, bathymetry (GEBCO_BATHY) leaves transparent holes over the continental masses, making the earth texture underneath visible. Alpha sliders could also be added to stack several NEO layers, but that’s another project.
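One way to set that up in Babylon.js is a base textured sphere plus a slightly larger NEO shell whose texture exposes its alpha channel; a sketch with placeholder texture URLs and an assumed scene:

var earth = BABYLON.Mesh.CreateSphere('earth', 32, 10, scene);
var earthMat = new BABYLON.StandardMaterial('earthMat', scene);
earthMat.diffuseTexture = new BABYLON.Texture('earth.jpg', scene);   // base earth texture
earth.material = earthMat;

// slightly larger shell carrying the NEO overlay, e.g. a GEBCO_BATHY GetMap image
var neo = BABYLON.Mesh.CreateSphere('neo', 32, 10.05, scene);
var neoMat = new BABYLON.StandardMaterial('neoMat', scene);
neoMat.diffuseTexture = new BABYLON.Texture(neoWmsGetMapUrl, scene);
neoMat.diffuseTexture.hasAlpha = true;            // transparent where NEO has no data
neoMat.useAlphaFromDiffuseTexture = true;
neo.material = neoMat;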

Fig 7 – alpha bathymetry texture over earth texture

Since a rotating globe can be annoying it’s worthwhile adding a toggle switch for the rotation weary. One simple method is to make use of a Babylon pick event:
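Something along these lines (a hedged reconstruction, not the original code; the mesh name and rotation speed are placeholders):

var rotating = true;

// toggle rotation when a click ray hits the earth mesh
window.addEventListener('click', function () {
    var pickResult = scene.pick(scene.pointerX, scene.pointerY);
    if (pickResult.hit && pickResult.pickedMesh.id === 'earth') {
        rotating = !rotating;
    }
});

// the render loop only spins the globe while rotating is true
scene.registerBeforeRender(function () {
    if (rotating) { earth.rotation.y += 0.002; }
});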

In this case any click ray that intersects the globe will toggle globe rotation on and off. Click picking is a kind of collision checking for object intersection in the scene, which could be very handy for adding globe interaction. In addition to pickedMesh.id, pickResult gives a pickedPoint location, which could be reverse transformed to a latitude and longitude.

Starbox (no coffee involved) is a quick way to add a surrounding background in 3D. It’s really just a BABYLON.Mesh.CreateBox big enough to engulf the earth sphere, a very limited kind of cosmos. The stars are not astronomically accurate, just added for some mood setting.

Another handy Babylon feature is BABYLON.Mesh.CreateGroundFromHeightMap.

For example using a grayscale elevation image as a HeightMap will add exaggerated elevation values to a ground map:

Fig 8 – elevation grayscale jpeg for use in BABYLON HeightMap

Fig -9 – HeightMap applied

The HeightMap can be any value; for example, NEO monthly fires converted to grayscale will show fire density over the surface.

Fig 10 – NEO monthly fires as heightmap

In this case a first-person shooter (FPS) camera was substituted for the generic ArcRotateCamera so users can stalk around the earth looking at fire spikes.

“FreeCamera – This is a ‘first person shooter’ (FPS) type of camera where you control the camera with the mouse and the cursors keys.”
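Swapping cameras is a small change; a hedged sketch with arbitrary positions and an assumed canvas and scene:

// FPS-style camera for walking the terrain
var camera = new BABYLON.FreeCamera('fps', new BABYLON.Vector3(0, 5, -30), scene);
camera.setTarget(BABYLON.Vector3.Zero());
camera.attachControl(canvas, true);     // mouse look plus cursor keys

// the generic orbiting alternative:
// var camera = new BABYLON.ArcRotateCamera('orbit', 0, Math.PI / 3, 50, BABYLON.Vector3.Zero(), scene);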

Lots of camera choices are listed here including Oculus Rift which promises some truly immersive map opportunities. I assume this note indicates Babylon is waiting on the retail release of Oculus to finalize a camera controller.

“The OculusCamera works closely with our Babylon.js OculusController class. More will be written about that, soon, and nearby. Another Note: In newer versions of Babylon.js, the OculusOrientedCamera constructor is no longer available, nor is its .BuildOculusStereoCamera function. Stay tuned for more information.”

So it may be only a bit longer before “vicarious life” downhill skiing opportunities are added to FreshyMap.