Category: Technical

I was a guest earlier this week at HERE Technologies at the Consumer Electronics Show (CES) 2019 in Las Vegas, the world’s biggest consumer electronics trade show. Their booth was directly outside the main entrance to the Convention Centre, the hub of CES, right beside Google’s own huge one. The juxtaposition was interesting: the two companies compete intensely in some areas of location services (e.g. mapping APIs, journey routing and rich global POI databases) while being distinctly different in their approach. Google is very consumer focused, with its ubiquitous brand and location tools that are largely smartphone based and advertising/user-profile driven. HERE’s European origins, by contrast, are reflected in its strict user anonymisation defaults, its main datasource being car sensor information (indeed, several of the major car companies are key investors in HERE), and its mainly B2B focus, which means that the UI you typically see in front of HERE’s location intelligence is branded by the car company itself.

HERE’s location marketplace

The car sensor information drives much of the 5 million updates made every day (generally automatically) to its global master map and also means that HERE has a pretty good live traffic data stream of its own. The global master map also contains 160 million+ POIs (points of interest) – it’s a seriously large database – which HERE has collected, collated and bought from a wide variety of sources. The map is a core part of HERE’s overall location platform offering.

HERE’s booth was a hive of activity, with product demos downstairs (themed around “the new reality”) and a small stage, while upstairs, numerous meeting rooms were full all day, presumably with meetings between HERE executives and, at a guess, car companies looking for platforms to power their car/user information systems, city transportation agencies looking for new datasets to understand their roads more effectively, and other key potential stakeholders in HERE’s location platforms. The breakout areas were also well used, as was a little outdoor cafe/terrace overlooking the main entrance to the convention centre.

The HERE XYZ developer API.

Our group was introduced to a number of people at HERE, including the CEO and various product managers. Of particular interest to me were the Fleet and Developer API talks – the former because of the “enterprise level” travelling-salesman-problem type (actually the vehicle-routing-with-prizes problem) functionality that is a core part of the platform, and the latter because I’ve already used a little bit of the HERE mapping APIs.

Fleet Management (the “travelling salesman problem” solver)

SoMo

I also chatted to the HERE Mobility team, who had a presence in the HERE booth as well as their own display in the main exhibition halls. HERE Mobility, who operate almost as a “start-up” within HERE, have the most obvious “consumer” presence of HERE, and launched their new “SoMo” app, which aims to be an “honest broker” multi-mobility navigation tool. SoMo, which is short for Social Mobility, aims to offer various rideshare options from third parties, as well as transit and driving information. Its key distinction, apart from being a platform for smaller rideshares, is to allow easy pooling of ride opportunities with friends/contacts who also need to journey to the same place.

They have identified a number of scenarios where this is useful, for example, people from a particular neighbourhood who are all planning to go to a music concert at a specific venue in another part of the city. The theory is that fans of the same artist might want to travel together, pool the costs, and find a good-value or available service in places where the “big two” rideshares, Uber and Lyft (who are not on the platform, and indeed are building their own multimodal platforms), may not be present, or don’t have the necessary availability or a good price point on the ground.

SoMo will likely work best when you have a number of friends/contacts using it, and sufficient coverage of timely services in the cities where the users are. As such, it will live or die by the volumes of people using it, hence their big push to have the new app downloaded as widely as possible.

One HERE announcement at CES is of immediate interest to me: my Alexa Echo Dot is finally location aware, worldwide. It was frustrating that it was unable to give me directions or time estimates while my Google Home Mini could. Amazon and HERE announced a partnership whereby the HERE location platform (with its routing capability, traffic awareness, and huge underlying map and POI database) provides location information in response to relevant queries to Alexa. This is not an add-on “skill” (Alexa’s terminology for apps) but is built in to the core of the device’s response framework.

More map layers and location data available through HERE APIs.

Thank you to HERE Technologies for inviting me to CES and organising the trip and insight day.

Panama is a Central American country with a population of around 4 million. The country is split into 10 provinces (including one that was split from another in 2014). The population is obliged to register for and obtain an ID card, or “cedula”, which contains an interesting attribute: the prefix of the ID number indicates the holder’s province of birth. This not only allows the mapping and analysis of surname (and other) demographic information across the country, but also, if combined with information on current location, allows for a rudimentary analysis of internal migration in the country.

This official document contains lots of useful information. Subsequent to this, the “Panama” province within the country was split into two, with the westernmost section becoming Panama West and gaining the new province number 13. In practice, the great majority of people living there retain the prefix 8, as the population with “13-” prefixes will be too young to have appeared on school attendance lists, jury service lists, exam candidate lists or government worker salary transparency lists. The very first No. 13, Ashly Ríos, received the number 13-1-001. (People are required to obtain their number by the age of 18, but you can be registered at birth.)

For most people born in Panama, the cedula number prefix indicates the following provinces of birth:

The format of the cedula number is generally X-YYY-ZZZZ where X is the province number, YYY is the registry book number and ZZZZ is the number within the book. However, for certain groups, the prefix is different. If SB appears after the province prefix, this is an indication that the person was born in Guna Yala (formerly called San Blas), but before it became a standalone indigenous province. Other indigenous areas, some of which have not formally become provinces, were indicated by PI appearing after the prefix of the former or enclosing province, or AV if very old (born pre-1914). However, the numerical codes are now used.

Panamanians born outside the country get “PE” as their prefix instead. Foreigners are assigned “EE” while they retain their immigrant status. If they gain permanent residence rights, they are assigned “E”, and if they become full Panamanian citizens, they are assigned “N”. PE, N, E and EE do not officially have an associated province prefix, although one (or “00”) is occasionally added in third-party lists. These people can also be assigned a separate ID, starting with “NT” and with an associated province prefix; this is a temporary ID issued for tax purposes, rather than a full cedula number.
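The prefix rules above can be sketched as a small classifier. This is purely illustrative, not official validation code: the delimiter positions assumed for the SB/PI/AV markers and the NT tax ID are guesses at the format, and province names are deliberately omitted.

```javascript
// Hypothetical helper sketching the cedula prefix rules described above.
// Only the prefix categories are classified; province names are omitted.
function classifyCedula(cedula) {
  const parts = cedula.split('-');
  const first = parts[0];

  // Letter-only prefixes: no province of birth encoded.
  const letterPrefixes = {
    PE: 'Panamanian born outside the country',
    EE: 'foreigner (immigrant status)',
    E:  'foreigner (permanent residence)',
    N:  'naturalised Panamanian citizen'
  };
  if (letterPrefixes[first]) {
    return { category: letterPrefixes[first], provinceOfBirth: null, marker: null };
  }

  // Numeric prefix: province of birth (1-13), possibly followed by a marker:
  // SB (born in San Blas / Guna Yala before provincehood), PI (indigenous
  // area), AV (born pre-1914) or NT (temporary tax ID, not a full cedula).
  const marker = ['SB', 'PI', 'AV', 'NT'].includes(parts[1]) ? parts[1] : null;
  return {
    category: marker === 'NT' ? 'temporary tax ID' : 'citizen',
    provinceOfBirth: parseInt(first, 10),
    marker
  };
}

console.log(classifyCedula('8-123-4567'));
// → { category: 'citizen', provinceOfBirth: 8, marker: null }
```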

I will aim to update this based on feedback and new discoveries. This initial version is based on my own usage/experience in the field, so it is quite possible there are some very obvious candidates I have missed.

Additionally (and with the same proviso as above), here’s a 2×2 table of file formats used in slippy and static web mapping, for vectors and rasters – the latter including attribute fields like UTF Grids. I am only including formats widely used in web mapping, rather than GIS in general.

So Big Data Here, a little pop-up exhibition of hyperlocal data, has just closed, having run continuously from Tuesday evening to this morning, as part of Big Data Week. We had many people peering through the windows of the characterful North Lodge building beside UCL’s main entrance on Gower Street, particularly during the evening rush hour, when the main projection was obvious through the windows in the dark, and some interested visitors were also able to come inside the room itself and take a closer look during our open sessions on Wednesday, Thursday and Friday afternoons.

Thanks to the Centre for Advanced Spatial Analysis (CASA) for loaning the special floor-mounted projector and the iPad Wall, the Consumer Data Research Centre (CDRC) for arranging for the exhibition with UCL Events, Steven Gray for helping with the configuration and setup of the iPad Wall, Bala Soundararaj for creating visuals of footfall data for 4 of the 12 iPad Wall panels, Jeff for logistics help, Navta for publicity and Wen, Tian, Roberto, Bala and Sarah for helping with the open sessions and logistics.

I created three custom local data visualisations for the big screen that was the main exhibit in the pop-up. Each of these was shown for around 24 hours, but you can relive the experience from the comfort of your own computer:

1. Arrival Board

This was shown from Tuesday until Wednesday evening, and consisted of a live souped-up “countdown” board for the bus stop outside, alongside one for Euston Square tube station just up the road. Both bus stops and tube stations in London have predicted arrival information supplied by TfL through a “push” API. My code was based on a nice bit of sample code from GitHub, created by one of TfL’s developers. You can see the Arrival Board here or download the code on GitHub. This is a slightly enhanced version that includes additional information (e.g. bus registration numbers) that I had to hide during the exhibition due to space constraints.

Customisation: Note that you need to specify a Naptan ID on the URL to show your bus stop or tube station of choice. To find it, go here, click “Buses” or “Tube…”, then select your route/line, then the stop/station. Once you are viewing the individual stop page, note that the Naptan ID forms part of the URL – copy it and paste it into the Arrival Board URL. For example, the Naptan ID for this page is 940GZZLUBSC, so your Arrival Board URL needs to be this.
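As an aside, besides the push stream, the same arrival predictions can be fetched on demand from TfL’s unified REST API using the Naptan ID, via its documented /StopPoint/{id}/Arrivals route. A minimal sketch of building that request URL:

```javascript
// Build a TfL unified API arrivals request for a given Naptan ID.
// (Registered app_id/app_key query parameters raise the rate limits.)
function tflArrivalsUrl(naptanId) {
  return 'https://api.tfl.gov.uk/StopPoint/' + encodeURIComponent(naptanId) + '/Arrivals';
}

console.log(tflArrivalsUrl('940GZZLUBSC'));
// → https://api.tfl.gov.uk/StopPoint/940GZZLUBSC/Arrivals
```

The response is a JSON array of predictions, each with fields such as the line and the expected arrival time.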

2. Traffic Cameras

This was shown from Wednesday evening until Friday morning, and consisted of a looping video feed from the TfL traffic camera positioned right outside the North Lodge. The feed is a 10-second loop and is updated every five minutes. The exhibition version then had 12 other feeds surrounding the main one, representing the nearest camera in each direction. The code is a slightly modified version of the London Panopticon, the code for which is also available on GitHub.

Customisation: You can specify a custom location by adding ?lat=X&lon=Y to the URL, using decimal coordinates – find these out from OpenStreetMap. (N.B. TfL has recently changed the way it makes available the list of traffic cameras, so the list used by London Panopticon may not be completely up-to-date.)
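Behind the scenes, a page like this has to turn the supplied coordinates into the nearest camera. A sketch of that lookup is below; the camera list here is made up for illustration, whereas the real one comes from TfL’s camera list.

```javascript
// Find the closest camera to a ?lat=X&lon=Y query point.
// Uses an equirectangular approximation, which is fine at city scale.
function nearestCamera(lat, lon, cameras) {
  const toRad = d => d * Math.PI / 180;
  let best = null, bestD = Infinity;
  for (const c of cameras) {
    const x = toRad(c.lon - lon) * Math.cos(toRad((c.lat + lat) / 2));
    const y = toRad(c.lat - lat);
    const d = x * x + y * y; // squared distance is enough for comparison
    if (d < bestD) { bestD = d; best = c; }
  }
  return best;
}

// Made-up example camera list (ids and coordinates are illustrative).
const cameras = [
  { id: 'cam-a', lat: 51.5246, lon: -0.1340 },
  { id: 'cam-b', lat: 51.5100, lon: -0.1000 }
];
console.log(nearestCamera(51.5246, -0.1339, cameras).id); // → cam-a
```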

Customisation: This one needs a file for each area it is used in and unfortunately I have, for now, only produced one for Bloomsbury. The data originally came, via the NOMIS download service, from the Office for National Statistics and is Crown Copyright.

As a follow-up to my intro post about Tube Heartbeat, here are some notes on the API usage that allowed me to get the digital cartography right and build out the interactive visualisation I wanted.

The key technology behind the visualisation is the HERE JavaScript API. This not only displays the background HERE map tiles and provides the “slippy map” panning/zoom and scale controls, but also allows the transportation data to be easily overlaid on top. It’s the first project I’ve created on the HERE platform and the API was easy to get to grips with. The documentation includes plenty of examples, as well as the API reference.

The top feature of the API for me is that it is very fast, both on desktop browsers and on smartphones. I have struggled in the past with needing to optimise code or reduce functionality to show interactive mapped content on smartphones – not just needing to design a small-screen UI, but dealing with the browser struggling to show sometimes complex and large-volume spatial data. The API has some nice specific features too; here are some that I used:

Arrows

One of the smallest features, but a very nice one I haven’t come across elsewhere, is the addition of arrows along vector lines, showing their direction. This is useful for routing, but also for showing which flow is currently being shown on a bi-directional dataset – all the lines on Tube Heartbeat use it:

The frequency at which the arrows occur can be specified, as well as their width and length. I’m using quite elongated ones, which are three times as long as they are wide, and occupy the middle half of the line (above/below certain flow thresholds, I used different numbers). A frequency of 2 means there is an arrow-sized gap between each one, while using 1 results in a continuous stream of arrows. (N.B. Rendering quirks in some browsers mean that other gaps may appear too.) Here, the blue and red segments have a frequency of 1 and a width of 0.2, while the smaller flows in the brown segments are shown with a frequency of 2 and a width of 0.5 in the example code above:
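For reference, the arrow settings sit in the polyline’s style options alongside the line colour and width. The objects below sketch the shape of those options (following the HERE JavaScript API’s SpatialStyle/ArrowStyle properties); the lineWidth, line colours and exact length values are illustrative rather than the precise Tube Heartbeat numbers.

```javascript
// Style for a busy (frequency 1) segment: a continuous stream of elongated
// white, half-opaque arrows on top of a 70%-opaque line colour.
const busySegmentStyle = {
  lineWidth: 10,                            // illustrative; scaled by flow in practice
  strokeColor: 'rgba(220, 36, 31, 0.7)',    // 70% opaque line colour (red here)
  arrows: {
    fillColor: 'rgba(255, 255, 255, 0.5)',  // 50% opaque white arrows
    frequency: 1,                           // continuous stream, no gaps
    width: 0.2,
    length: 0.6                             // three times as long as wide
  }
};

// Quieter (brown) segments: wider arrows, with an arrow-sized gap between each.
const quietSegmentStyle = {
  ...busySegmentStyle,
  strokeColor: 'rgba(178, 99, 0, 0.7)',
  arrows: { ...busySegmentStyle.arrows, frequency: 2, width: 0.5, length: 1.5 }
};
```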

Z-Order

Z-order is important so that the map has a natural hierarchy of data. I decided to use an order where the busiest tube lines were generally at the bottom, with the quieter lines being layered on top of them (i.e. having a higher Z-order). Because the busier tube lines are shown with correspondingly fatter vector lines on the map, the ordering means that generally all the data can be seen at once, rather than some lines being hidden. You can see the order in the penultimate column of my lines data file (CSV). I’m specifying z-order simply as a custom property “zorder” on the H.map.Polyline, as shown in the code sample above. This then gets used later when assembling the lines to draw, in a group (see below).

Translucency

I’m using translucency both as a cartographical tool and to ensure that data does not otherwise become invisible. The latter is simply achieved by using RGBA colours rather than the more usual hexadecimals; that is, colours with an opacity specified as well as the colour components. In the code block above, “rgba(255, 255, 255, 0.5)” gives white arrows which are only 50% opaque. The tube lines themselves are shown as 70% opaque – specified in the lines data file along with the z-order – which allows their colour to appear strongly while allowing other lines or background map features/captions, such as road or neighbourhood names, to still be observable.
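A small helper (a hypothetical one, not part of the HERE API) shows the conversion: take the usual hex colour from a data file plus an opacity, and emit the RGBA string the style needs.

```javascript
// Convert a "#rrggbb" hex colour plus an opacity (0-1) into an
// "rgba(r, g, b, a)" string suitable for a map style definition.
function hexToRgba(hex, opacity) {
  const n = parseInt(hex.replace('#', ''), 16);
  const r = (n >> 16) & 255, g = (n >> 8) & 255, b = n & 255;
  return `rgba(${r}, ${g}, ${b}, ${opacity})`;
}

console.log(hexToRgba('#ffffff', 0.5)); // → rgba(255, 255, 255, 0.5)
```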

While objects such as the tube lines can be made translucent by manipulating their colour values, layers themselves always display at 100% opacity. This is probably a good thing, because translucent map image layers could look a mess if you layered multiple ones on top of each other, but it means you need to use a different technique if you want to tint or fade a layer. Because even the simplified “base” background map tiles from HERE for London have a lot of detail on them, and the “xbase” extra-simplified ones don’t have enough for my purposes, I needed a half-way house approach. I achieved this by creating a geographical object in code and placing it on top of the layers:

The object here is a very light grey box, at 35% opacity, with an extent that covers all of the London area and well beyond. In the HERE JavaScript API, such objects automatically go on top of the layers. My tint doesn’t affect the lines or stations, because I add the two groups containing them after my rectangle:

Object Groups

I can add and remove objects from the above groups rather than directly to the map object, and the groups themselves remain in place, ordered above my tint and the background map layers. Objects are drawn in the order they appear in the group – the so-called “Painter’s Algorithm” – which is why I sort them by my previously specified “zorder” value:
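The sort itself is a one-liner: order ascending by the custom “zorder” value before adding objects to the group, so higher values are added (and therefore painted) last and end up on top. Plain objects stand in for the H.map.Polyline instances here, and the zorder numbers are illustrative:

```javascript
// Painter's Algorithm: sort ascending by zorder, then add in that order.
// Quieter lines get higher zorder values, so they are painted later and
// sit on top of the busier, fatter lines below them.
const lines = [
  { name: 'Circle',   zorder: 9 },  // quieter line: on top
  { name: 'Northern', zorder: 6 },
  { name: 'Victoria', zorder: 2 }   // busy line: at the bottom
];
const drawOrder = [...lines].sort((a, b) => a.zorder - b.zorder);
console.log(drawOrder.map(l => l.name).join(', '));
// → Victoria, Northern, Circle
```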

These are my station circles. They are thickly bordered white circles, as is the tradition for stations on maps of the London Underground as well as many other metros worldwide, but with a little translucency to allow background map details to still be glimpsed. Here you can see the circle translucencies, as well as those on the lines and the arrows themselves; the lines are also ordered as per the z-order specification, so that the popular Victoria line (light blue) doesn't obscure the Northern line (black):

Other Technologies

As well as the HERE JavaScript API, I used jQuery to short-cut some of the non-map JavaScript coding, jQuery UI for some of the user controls, and the Google Visualization API (aka Google Charts) for the graphs. Google's Visualization API is full-featured, although a note of caution: I am using their new "Material" look, which works better on mobile and looks nicer than their regular "Classic" look, but it is still very much in development. It is missing quite a few features of the older version, and sometimes requires the use of configuration converters, so check Google's documentation carefully. However, it produces nicer looking charts of the data, a trade-off that I decided was worth making:

These are just some of the techniques I used for Tube Heartbeat, and I only scratched the surface of the HERE APIs; there are all sorts of interesting ones I could additionally incorporate, including some you might not expect, such as a Weather API.

Ordnance Survey have this week released four new additions to their Open Data product suite. The four, which were announced earlier this month, are collectively branded as OS Open and include OS Open Map Local, which, like Vector Map District (VMD), is a vector dataset containing files for various feature types, such as building polygons and railway stations. The resolution of the buildings in particular is much greater than VMD – surprisingly good, in fact. I had expected the data to be similar in resolution to the (rasterised) OS StreetView but it turns out it’s even more detailed than that. The specimen resolution for OS Open Map Local is 1:10000, with suggested uses down to a scale of 1:3000, which is really quite zoomed in. Two new files in OS Open Map Local are “Important Buildings” (universities, hospitals etc) and “Functional Areas” which outline the land containing such important buildings.

Above: Comparing the building polygon detail in the older Vector Map District (top left), previously the largest scale vector building open data from Ordnance Survey, and the brand new OS Open Map Local (top right). The new data is clearly much higher resolution, however one anomaly is that roads going under buildings no longer break the buildings – note the wiggly road in the centre of the top left sample, Malet Place, which runs through the university and under a building, doesn’t appear in full on the right. Two other sources of large-scale building polygons are OS StreetView (bottom left), which is only available as a raster, and OpenStreetMap (bottom right). The OS data is Crown Copyright and Database right OS, 2015. The OSM data is Copyright OSM contributors, 2015.

The other three new products, under the OS Open banner, are OS Open Names, OS Open Rivers and OS Open Roads. The latter two are topological datasets – that is, they are connected node networks, which allow routing to be calculated. OS Open Names is a detailed gazetteer. These three products are great as “official”, “complete” specialised datasets, but they have good equivalents in the OpenStreetMap project. OS Open Map Local is different – it offers spatial data that is generally much higher in accuracy than most building shapes already on OpenStreetMap, including inward-facing walls of buildings which are not visible from the street – and so difficult for the amateur mapper to spot. As such, it is a compelling addition to the open data landscape of Great Britain.

An encouraging announcement from BIS (the Department for Business, Innovation and Skills) a few days ago regarding future Open Data products from the Ordnance Survey (press release here) – two pieces of good news:

The OS will be launching a new, detailed set of vector data as Open Data at the end of this month. They are branding it as OS OpenMap, but it looks a lot like a vector version of OS StreetView, which is already available as a raster. The key additions will be “functional polygons” which show the boundaries of school and hospital sites, and more detailed building outlines. OS Vector Map District, which is part of the existing Open Data release, is already pretty good for building outlines – it forms the core part of DataShine and this print, to name just two pieces of my work that have used the footprints extensively. With OpenMap, potentially both of these could benefit, and we might even get attribute information about building types, which means I could filter out non-residential buildings in DataShine. What we do definitely get is the inclusion of unique building identifiers – potentially this could allow a crowd-sourced building classification exercise if the attribute information isn’t there. OpenMap also includes a detailed and topological (i.e. joined up under the bridges) water network, and an enhanced gazetteer, i.e. placename database.

The other announcement relates to the establishment of an innovation hub in London – an incubator for geo-related startups. The OS are being cagey about exactly where it will be, saying just that it will be on the outskirts of the Knowledge Quarter, which is defined as being within a mile of King’s Cross. UCL’s about a mile away. So maybe it will be very close to here? In any case, it will be somewhere near the edge of the green circle on the (Google) map below…

p.s. The Ordnance Survey have also recently rebranded themselves as just “OS”. Like University College London rebranding itself as “UCL” a few years ago, and ESRI calling itself Esri (and pronouncing it like a word), it will be interesting to see if it sticks. OS for me stands for “open source” and is also very close to OSM (OpenStreetMap), so possible confusion may follow. It does however mean a shorter attribution line for when I use OS data in my web maps.

Various websites I’ve built, and mentioned here on oobrien.com from time to time, are down from Friday at 5pm until Monday noon (all times GMT), due to a major power upgrade for the building that the server is in.

This affects the following websites:

DataShine

CDRC

Bike Share Map

Tube Tongues

OpenOrienteeringMap (extremely degraded)

Some other smaller visualisations

However the following are hosted on different servers and so will remain up:

The book acts both as a reference guide to the field and as a guide to help you get to know aspects of it. Each chapter includes a worked example with step-by-step instructions.

Each chapter has a different author, and includes topics such as spatial data visualisation with R, agent-based modelling, kernel density estimation, spatial interaction models and the Python Spatial Analysis library, PySAL. With 18 chapters, the book runs to over 300 pages and so has the appropriate depth to cover a diverse, active and fast-evolving field.

I wrote a chapter in the book, on open source GIS. I focused particularly on QGIS, as well as mentioning PostGIS, Leaflet, OpenLayers (2) and other parts of the modern open source “geostack”. My worked example describes how to build a map, in QGIS, of London’s railway “not-spots” – places which are further than a mile from a railway station, using open data map files, mainly from the Ordnance Survey. With the guide, you can create a map like the one below:

The book has only just been published and I was able to slip in brand new screenshots (and slightly updated instructions) just before publication, as QGIS 2.6 came out late last year. So, the book is right up to date, and as such now is a great time to get your copy!

Here is a guide to cloning a WordPress(.org) blog on the same server, in 10 steps, on Linux. You’ll definitely need admin access to the blog itself, and probably to the database and server too, depending on your setup. I did this recently as I needed a copy of an existing production site to hack on. If you don’t fancy doing it the quick-and-dirty way, there are, I’m sure, even quicker (and cleaner) ways, by installing plugins.

In the following instructions, substitute X and Y for your existing and new blog, respectively.

0. Do a backup of your current website, like you do normally for an upgrade or archiving, in case anything goes wrong. e.g. under Tools > Export in the WordPress admin interface.

5. Edit wp_Y_options:
Edit the option_value for rows with option_name values of siteurl and home, pointing them to the new location – mine are the same but one might be different, e.g. if you have your WordPress core files in a subdirectory relative to the directory for the site entry-point on the web.

(You can edit the affected rows manually, but I had a lot to do – there are around five for each user.)

7. Drop force-upgrade.php in the same directory as wp-config.php and run it from your browser. This rebuilds caches/hashes stored in some of the tables. You can run it repeatedly if necessary (e.g. if you missed a step above); it shouldn’t do any harm.