Category: data visualization

These two projects are the result of a recent collaboration with Transparency International Slovenia. The datasets were provided by the state, and I was asked to develop visualizations that would structure the information in an accessible way. Much help was also provided by members of the Jožef Stefan Institute.

State project browser

The first project is a browser of all projects initiated by state institutions from 1991 on. The idea was to let users discover where and for what purposes the money goes in their county. The dataset and visualization allow for exploration by various categories, as well as over time.

The dataset also includes projects that are still in the planning phase and won’t be completed until 2025. With this tool, citizens can inspect the planned expenditures for roads, water sources and other categories of infrastructure, culture and other fields of development, and compare them with their own expectations.

It allows browsing and filtering of projects by statistical region and county, as well as displaying a timeline of all projects, which is essentially an expandable Gantt chart.

To see the interactive project website, click here, or click the image below.

State projects app

The original data is provided on the project’s “About” page.

County budget browser

The second project is a straightforward visualization of county budgets. The budgets are displayed as dynamic, zoomable hierarchical (“sunburst”) diagrams. The two diagrams are linked, allowing a side-by-side comparison of the budgets of two user-selected counties.

The visualization lets users delve into the expenses and incomes of all Slovenian counties on separate tabs.

To see the interactive project website, click here, or click the image below.

County budgets app

Technology and design

The data cleanup and preparation were done with some Python scripts. The sunburst diagram accepts hierarchical data in a tree format, so this provided an interesting exercise in converting a tabular dataset into a nested dictionary of arbitrary depth.
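The conversion can be sketched roughly like this; the column names (`dept`, `item`, `amount`) are made up for illustration, not the ones in the actual budget dataset:

```python
def rows_to_tree(rows, path_columns, value_column):
    """Convert tabular rows into a nested dict of arbitrary depth.

    Each row contributes its value at the leaf addressed by the
    hierarchy columns (e.g. department -> budget item)."""
    tree = {"name": "root", "children": {}}
    for row in rows:
        node = tree
        for col in path_columns:
            key = row[col]
            node = node["children"].setdefault(
                key, {"name": key, "children": {}})
        node["value"] = node.get("value", 0) + row[value_column]

    def finalize(node):
        # d3's hierarchy layouts expect "children" as a list, not a dict
        children = [finalize(c) for c in node["children"].values()]
        out = {"name": node["name"]}
        if children:
            out["children"] = children
        if "value" in node:
            out["value"] = node["value"]
        return out

    return finalize(tree)

rows = [
    {"dept": "Culture", "item": "Museums", "amount": 120},
    {"dept": "Culture", "item": "Libraries", "amount": 80},
    {"dept": "Roads", "item": "Maintenance", "amount": 300},
]
tree = rows_to_tree(rows, ["dept", "item"], "amount")
```

The final pass turns the intermediate dict-of-dicts into the `children`-list format that d3’s partition/sunburst layouts consume, with `value` set at the leaves.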

The visualizations were done in d3, which is really an indispensable tool for any serious work in online visualization.

Both projects were minimalistically yet expertly designed by Tomaž Plahuta (Bitnik, Eno).

This is just a short recap of the project that was awarded the Miguel Urabayen Award for the best map in printed media and a gold medal for a feature article at Malofiej 24. The whole list of awarded projects is available on their website; our project is listed first, and then again under the Features / Reportajes heading. My colleagues Aljaž Vesel, Ajda Bevc, Aljaž Vindiš and the graphics editor Samo Ačko received two more awards, and I congratulate them sincerely. Read more about the award here. The dnevnik.si article about the awards is here (in Slovenian).

The project was my first collaboration with the Dnevnik newspaper on the Objektivno feature section, which mainly features various data visualizations. It was done in a somewhat ad-hoc fashion, for lack of anything better to do. I had been scraping the page where the list of towed cars was published, meant for owners to check whether their car had suddenly disappeared from a public parking space in Ljubljana. The list doesn’t exist anymore, but it used to be on this page. It contained the car make and model, the registration plate number, the location from which the car was towed, and a timestamp. We decided to put it all on a map and analyze it a bit to see where the luxury makes are towed most often.

Here’s the map printout from the newspaper. Click it for the PDF, or click this link.

dnevnik-spiders-net

It’s in Slovenian, so here’s a quick legend for English speakers:

Street segment thickness encodes the number of cars towed (legend in the top left).

Color encodes the ratio between better and ordinary car makes. We arbitrarily decided what counts as “better”, but we generally considered more expensive makes, like Audi and Mercedes-Benz, as better. Yellow stands for a uniform distribution, red for slightly more better cars, blue for mostly better cars, and black for exclusively better cars. Circles denote regions where mostly better cars were towed; that usually happens in the center and around the new sports stadium.

On the bottom left there are some statistics, as well as the list of car makes we used.

On the bottom right there are some map cutouts of hotspots on the map, with some commentary.
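For illustration, the color scale described above can be expressed as a simple bucketing function; the thresholds here are my guesses, not the ones used in the printed map:

```python
def ratio_color(better, ordinary):
    """Map the better/ordinary towing counts of a street segment
    to a legend color.  Thresholds are illustrative only."""
    total = better + ordinary
    if total == 0:
        return None  # no data for this segment
    share = better / total
    if share == 1.0:
        return "black"   # exclusively better cars
    if share > 0.75:
        return "blue"    # mostly better cars
    if share > 0.5:
        return "red"     # slightly more better cars
    return "yellow"      # uniform or mostly ordinary
```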

One wonders whether owners of better cars are more prone to getting parking tickets than owners of ordinary cars. I believe they are, and the sad reason must be an inflated sense of self-importance, which translates into a conviction that the law doesn’t apply to them, leaving their shiny cars parked in inappropriate places. There’s another side to the story: the underpaid traffic wardens, who are all too happy to make a point by immediately calling the tow truck and ignoring the owners’ pleas, even when they arrive before the towing itself. So there is a social undertone to this project, and I’m happy if the jury members recognized it as they deliberated.

The whole project was done on the Mapbox platform, except for the street geocoding and geometry, which come from my privately curated database, derived from a public dataset, which is in turn managed by this public agency. Many thanks to the Mapbox team for the turf.js library, which I used in node.js to annotate the geometry with the numbers and calculate the ratios. The resulting GeoJSON file was then imported into Mapbox Studio, styled by the gifted designer Aljaž Vindiš, and prepared for print.

Some time ago I released a much more comprehensive project with many visualizations of traffic infractions in Slovenia; it took me months to make, but failed to make any significant traffic or impact in the public sphere.

The raw development version is still on my server; see it here. I forget what I meant by the coloring, but I guess it’s the car make ratio.

The whole thing took us around two days to make. After that, we collaborated on a number of interesting projects, but sadly, as is inevitable in life, the merry group disbanded and left the newspaper for greener pastures. I’m looking forward to collaborating with any of them again.

This is a technical explanation of the procedure used to map parking infractions in Manhattan for every available car make. To see the interactive visualization, click here, or click the image below. Otherwise, read on.

Heatmaps for Audi and Bentley

Last year I published an Android app to help Slovenian drivers avoid areas frequently inspected by parking wardens. It works by geolocating the user and plotting issued parking tickets in the vicinity, with a breakdown by month, time of day and temperature on another screen. It was not a huge hit, but it did reasonably well for such a small country and no marketing budget.

I was thinking of making a version for New York City, but then abandoned the project. These visualizations are all that remains of it.

I started by downloading the data from the New York Open Data repository. It’s here. The data is relatively rich, but it’s not geocoded. As luck would have it, Mapbox had just rolled out a batch geocoder at the time, free and without quotas. So I quickly sent around 100,000 addresses through it and saved the results in a database for later use. The processed result is now available on the Downloads page in the form of JSON files, one per car make.

The actual drawing procedure was easier than I expected. I downloaded street data from the New York GIS Clearinghouse and edited out everything but Manhattan with QGIS.

First I tried a promising matrix approach, but I was unable to rotate the heatmap so that it would make sense. Here’s an example for Audi:

Matrix – Audi

As you can see, it is a heatmap, but it doesn’t look very good.

So I wrote a Python script that went through all the street segments and awarded a point if there was an infraction closer than 100 meters to the relevant segment. Then I just used matplotlib to draw all the street segments, coloring them according to the maximum segment value.
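A minimal sketch of that scoring step, assuming the coordinates are already projected to meters (the real script worked on the actual street geometry):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b (planar coordinates, meters)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)  # degenerate segment
    # Project p onto the segment, clamped to its endpoints
    t = max(0.0, min(1.0,
            ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def score_segments(segments, infractions, radius=100.0):
    """Count infractions within `radius` meters of each street segment."""
    return [
        sum(1 for p in infractions
            if point_segment_distance(p, a, b) <= radius)
        for a, b in segments
    ]

# Made-up example: two horizontal segments and three ticket locations
segments = [((0, 0), (100, 0)), ((0, 500), (100, 500))]
infractions = [(50, 30), (50, 120), (10, 480)]
scores = score_segments(segments, infractions)
```

With these made-up points, the first segment picks up the ticket 30 m away but not the one 120 m away, so each segment ends up with a score of 1.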

A result for Audi now looks like this:

Audi

All that remained was drawing the required images for the animated GIFs, one for every hour for every car make. This was done with minimal modifications to the original script (I learned Python multithreading in the process). The resulting images were then converted to animated GIFs with ImageMagick.
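The fan-out over car makes and hours might have looked something like this; `render_frame` here is a stand-in for the actual matplotlib rendering, not the original code:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def render_frame(make, hour):
    """Stand-in for the matplotlib rendering step; returns the
    filename a real implementation would write to disk."""
    return f"{make}_{hour:02d}.png"

makes = ["audi", "bentley"]   # illustrative; the real run covered every make
hours = range(24)

# One frame per (make, hour) pair, rendered on a pool of worker threads
with ThreadPoolExecutor(max_workers=8) as pool:
    frames = list(pool.map(lambda args: render_frame(*args),
                           product(makes, hours)))
```

For CPU-bound matplotlib rendering, `ProcessPoolExecutor` usually parallelizes better than threads; the GIF assembly afterwards is then a matter of something like `convert -delay 10 audi_*.png audi.gif` in ImageMagick.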

The whole procedure took approximately 12 hours of calculating and rendering time on an i7-6700 with 32 GB of RAM. I guess I could have shaved several hours off that time, but I just let it run overnight.

This is a rework of a visualization I did for the Dnevnik newspaper. The Ljubljana city government was generous enough to give us a database with the species and location of every tree within city limits. I thought it would be nice to render every species on its own map, so that the distributions can be compared.

Instead of just drawing a point for each tree, I calculated the distance from each building to all the trees, and increased the building’s “score” for every tree within 150 meters. Then I colored the buildings according to the score: the darker the green, the more trees in the vicinity.
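The post doesn’t say how the all-pairs distance check was kept fast; one common trick is to bucket the trees into a coarse grid first, so each building only checks nearby cells. A sketch with made-up coordinates (again assumed to be projected to meters):

```python
import math
from collections import defaultdict

def tree_score(buildings, trees, radius=150.0):
    """Count trees within `radius` meters of each building centroid,
    using a grid of radius-sized cells so only nearby cells are checked."""
    cell = radius
    grid = defaultdict(list)
    for tx, ty in trees:
        grid[(int(tx // cell), int(ty // cell))].append((tx, ty))

    scores = []
    for bx, by in buildings:
        cx, cy = int(bx // cell), int(by // cell)
        count = 0
        # A tree within `radius` must lie in this cell or a neighbor
        for gx in (cx - 1, cx, cx + 1):
            for gy in (cy - 1, cy, cy + 1):
                for tx, ty in grid[(gx, gy)]:
                    if math.hypot(tx - bx, ty - by) <= radius:
                        count += 1
        scores.append(count)
    return scores

buildings = [(0, 0), (1000, 1000)]
trees = [(100, 0), (140, 50), (990, 1005)]
scores = tree_score(buildings, trees)
```

The first building has two trees within 150 m, the second one tree, so the scores come out as [2, 1].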

The aforementioned article depicted the areas with a higher potential for allergic reactions due to the specific tree species growing there, but it also had a detailed map with every building colored in proportion to its distance from the trees in its vicinity.

This post and its maps were inspired by Moritz Stefaner’s -ach, -ingen, -zell. I firmly believe in giving credit where it is due, so there it is.

That said, I embarked on a similar adventure, first for Slovenia. The etymology of Slovenian towns and other populated places may differ a little from the German one, so I was naturally curious what it would look like on a map. I had several geo files for Slovenia around, as well as a comprehensive list of all populated places with coordinates, making this a relatively short endeavour.

In addition to common suffixes, I also extracted common prefixes. Many Slovenian place names begin with “Gornja” (Upper) or “Velika” (Great), so I wanted to see if these names have meaningful spatial distributions. It turns out that they do.

For example, this one. By columns: “gornja” (a variant of “upper”) vs. “dolnja” (a variant of “lower”); “zgornja” and “spodnja” (another couple of variations on the same dichotomy); and “velika” vs. “mala” (“great” and “small”). It’s apparent that places with these prefixes have characteristic spatial distributions. Why, I don’t know. Dialects of Slovenian vary wildly, to the point that some of them are virtually incomprehensible to me.

To see the interactive version with more maps, click here, or click the image. Switch between prefixes and suffixes using the links in the upper left square.

Distribution of places with some common prefixes

Having written the code and downloaded the geonames.org database, it was just a matter of changing a few things to produce a similar map for the USA. I colored it a little differently, but it’s basically the same thing.

Again, click here or the image for the interactive version. Note that you can click the little link above each map to display the list of place names.

Then a friend and coworker of mine said that he had always wondered about the distribution of U.S. towns with names borrowed from European places. That would effectively show the distribution of immigration in the early history of the USA, with the exception of Spanish names, which for historical reasons tend to cluster along the Mexican border, plus some random noise from the string matching.

Check out the maps! Some technical details: the maps were drawn with d3, and hexagons produced with the hex-binning plugin.

Name matching was not a big challenge, but I did want to find unique suffixes. So I wrote some code to first isolate the most frequently occurring seven-character suffixes, then gradually shorten them until a big drop-off (say, more than 50 places) occurred. That way I prevented near duplicates from being included; for example, “-ville”, “-ille” and “-lle” have approximately the same distribution, so only one of them gets a place on the map.
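A rough sketch of that suffix-collapsing idea; the counting details are my reconstruction, not the original code:

```python
from collections import Counter

def suffix_families(names, start_len=7, top=20, jump=50):
    """Collapse near-duplicate suffixes ("-ville", "-ille", "-lle")
    into one representative each.  Starting from the most frequent
    7-character suffixes, keep shortening while the number of matching
    places stays roughly the same; stop once shortening pulls in more
    than `jump` new places, i.e. a genuinely different name family."""
    def count(suf):
        return sum(1 for n in names if n.endswith(suf))

    seeds = Counter(n[-start_len:] for n in names if len(n) >= start_len)
    families = set()
    for suf, _ in seeds.most_common(top):
        while len(suf) > 2:
            shorter = suf[1:]
            if count(shorter) - count(suf) > jump:
                break  # big drop-off: shortening merges in another family
            suf = shorter
        families.add(suf)
    return families

# Synthetic example: "-ville" names plus unrelated "-ille" names
places = ["Greenville"] * 100 + ["Xmille"] * 60
families = suffix_families(places)
```

Here the seed suffix “eenville” shortens happily down to “ville”, but stops there, because dropping one more character would suddenly absorb the 60 “-mille” places.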

The biggest challenge was in fact generating a hex grid within the borders and then fitting the data inside it. That’s the reason the pages take some time to load. I brute-forced it by generating points inside the bounding box and checking with turf.js whether each falls within the polygon in question, then setting all hexagon values to zero, and finally filling them with real data.
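In the project that point-in-polygon filtering was done with turf.js; a pure-Python sketch of the same brute-force idea looks like this:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: does (x, y) fall inside the polygon,
    given as a list of (x, y) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Toggle on every polygon edge the rightward ray crosses
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def grid_in_polygon(polygon, step):
    """Generate grid points over the polygon's bounding box and
    keep only those that fall within the polygon itself."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    points = []
    x = min(xs)
    while x <= max(xs):
        y = min(ys)
        while y <= max(ys):
            if point_in_polygon(x, y, polygon):
                points.append((x, y))
            y += step
        x += step
    return points

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
pts = grid_in_polygon(square, 5)
```

The real version then snapped these points to hexagon centers and initialized every hexagon’s value to zero before pouring in the data.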