In this article we will implement a solution with Elasticsearch from scratch. No prior knowledge of Elasticsearch is required. I will skip the installation of Elasticsearch, however, as I will be using a cluster hosted at Found; a local installation is described at elastic.co/guide, or you can sign up for a free cluster. Where appropriate I will favour gradual refinement over presenting an advanced, and possibly overengineered, solution right away. This means that not all examples show the best-practice way of doing things, and the experienced reader will probably spot a few optimizations right away, but as time elapses and more data is accumulated I plan to address the issues in further articles, one low-hanging fruit at a time.

Oslo is one of many cities that have so-called city bikes, or bikes for hire. The system works like this: a user walks to a nearby bike rack and unlocks a bike using his card. He may return the bike at any rack in Oslo. To help users find a nearby rack with free bikes or locks, the system provides the status of every rack through a webpage or a custom smartphone app. In a city like Oslo there is one curious problem with this system: people tend to prefer using the bikes downhill. This results in a congestion of bikes in the city centre (literally downtown!). Anyway, this is my theory. I know for a fact that the system operator uses trucks for picking up and dropping off bikes. What I don’t know is whether the purpose is to transport the bikes to and from the workshop or to alleviate congestion. My objective for this case study is to explore this using Elasticsearch. There are two main reasons for doing so: firstly, to satisfy my curiosity, and secondly, to take a crack at estimating when racks get congested or depleted. The current applications are able to find the nearest available bike or lock, but they are not able to estimate the likelihood of the bike or lock remaining available by the time you get there.

The operator’s city bike webpage contains a map of all of their bike racks and their statuses. Every bike rack has a name, a position, a bike count and a free locks count. In Scala I express this as a case class:
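A minimal sketch of such a case class; the field names are my assumptions based on the description above:

```scala
import java.util.Date

// Field names are illustrative; the operator's page exposes a name,
// a position, a bike count and a free-lock count for every rack.
case class BikeRack(
  name: String,
  latitude: Double,
  longitude: Double,
  bikes: Int,
  freeLocks: Int,
  readingAt: Date
)
```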

Using TagSoup it’s pretty straightforward to extract the required information from the webpage. TagSoup is a lenient html-parser that uses a best-effort approach for treating any html as xhtml. Combining TagSoup and Scala’s xml-support we can write a parser like this:
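A sketch of such a parser. The markup details (div/h3/span elements and their class names) are guesses on my part, as the real structure of the operator's page is not shown here:

```scala
import org.ccil.cowan.tagsoup.jaxp.SAXFactoryImpl
import org.xml.sax.InputSource
import scala.xml.Node
import scala.xml.parsing.NoBindingFactoryAdapter

// Reduced rack model, repeated here so the sketch is self-contained.
case class BikeRack(name: String, bikes: Int, freeLocks: Int)

// TagSoup provides a SAX parser factory that tolerates messy real-world
// HTML; plugging it into Scala's NoBindingFactoryAdapter gives us a
// regular scala.xml tree to navigate.
def loadPage(url: String): Node =
  new NoBindingFactoryAdapter().loadXML(
    new InputSource(url),
    new SAXFactoryImpl().newSAXParser()
  )

// The extraction itself is ordinary scala.xml navigation: find every
// div marked as a rack and read out its name and counts.
def parseRacks(page: Node): Seq[BikeRack] =
  for {
    div <- page \\ "div" if (div \ "@class").text == "rack"
    bikes = (div \ "span").filter(s => (s \ "@class").text == "bikes")
    locks = (div \ "span").filter(s => (s \ "@class").text == "locks")
  } yield BikeRack((div \ "h3").text.trim, bikes.text.trim.toInt, locks.text.trim.toInt)
```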

For strings and numbers the formatting is native to JSON, but dates and geopositions are a bit trickier. Elasticsearch tries to detect the contents of JSON strings when new fields are processed. If one formats dates according to one of the standard formats then no specific mapping is required.
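Geopositions, on the other hand, do need an explicit mapping before the first document arrives, so Elasticsearch knows to treat the field as a geo_point rather than a plain string. A sketch, with type and field names as assumptions:

```json
{
  "rack": {
    "properties": {
      "location": { "type": "geo_point" }
    }
  }
}
```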

Note the extraction of weekday and hour of day into separate fields. This is redundant, but it allows for greater flexibility when building queries that treat time as recurring events rather than a straight continuum.
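Putting it together, a single observation document might look like this (all values, and the field names themselves, are illustrative; geo_point accepts a "lat,lon" string):

```json
{
  "name": "92-Blindernveien 5",
  "location": "59.94,10.72",
  "bikes": 7,
  "freeLocks": 11,
  "readingAt": "2013-06-17T09:05:00+02:00",
  "dayOfWeek": 1,
  "hourOfDay": 9
}
```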

Elasticsearch is now ready to receive data from our parser. All we have to do is to invoke it regularly over a period of time so we can start trend analysis.
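A crontab entry is the simplest way to do that. Here it runs every five minutes, with a hypothetical path to the parser:

```
*/5 * * * * /usr/local/bin/citybike-indexer >> /var/log/citybike-indexer.log 2>&1
```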

Now the fun begins. Don’t worry if you don’t have much data in your cluster yet, we will start with some basic queries. Our first example is to calculate the average number of bikes for a given rack. We can do this by using the statistical facet and a simple match query. The match query retrieves all the documents named “92-Blindernveien 5” and the statistical facet calculates the average of the bike count field for all the documents retrieved.
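A minimal form of that request might look like this; the citybike index, rack type and bikes field are my assumptions:

```sh
curl -XPOST 'http://localhost:9200/citybike/rack/_search?pretty' -d '{
  "query": {
    "match": { "name": "92-Blindernveien 5" }
  },
  "facets": {
    "stat1": {
      "statistical": { "field": "bikes" }
    }
  }
}'
```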

For this demo I have been running the parser every five minutes for more than two weeks, and the total number of bike rack observations in the index is around 54 000. Executing the above query produces a result like this:
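The response has roughly this shape, trimmed to the parts discussed below; only the figures mentioned in the text are filled in, the rest are placeholders:

```json
{
  "took": 3,
  "hits": {
    "total": 4878,
    "hits": [ "…the ten highest ranked documents…" ]
  },
  "facets": {
    "stat1": {
      "_type": "statistical",
      "count": 4878,
      "min": 0.0,
      "mean": 17.62,
      "std_deviation": 6.35
    }
  }
}
```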

In essence, the result consists of three parts: a metadata section, a hits section and a facets section. In the hits section we get the total number of documents that matched the query section of our query, in this case a total of 4878 bike rack observations. Unless you specify otherwise, Elasticsearch will include the ten highest ranked documents in the nested hits key. For this query the interesting part is in the facets section. The stat1 key holds the result of our stat1 facet. From this section we see that the average is 17.62 bikes. Combining the average with a standard deviation of 6.35 we can deduce that this bike rack was not empty in 95% of the observations, but the minimum value of 0 tells us that this rack has been observed as depleted in at least one observation.

When I leave home for work in the morning I’m often running a bit late, so every second counts. I have the option of walking to the closest rack or taking the bus. As it happens, it’s actually faster to go by bike than to take the bus, but the bike rack and the bus stop are located in opposite directions. I therefore use my mobile phone to check the status of the rack. As a matter of fact, I cannot remember the last time the rack was empty in the morning. This begs the question: can Elasticsearch prove that the probability of the rack being empty when I leave for work is very low? With that information at hand I could save those precious seconds it takes to check the current status of the rack. Let’s further refine our query and only consider observations between 09:00 and 10:00.
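Assuming the observations carry the hour in an hourOfDay field, the refined query might look like this:

```sh
curl -XPOST 'http://localhost:9200/citybike/rack/_search?pretty' -d '{
  "query": {
    "bool": {
      "must": [
        { "match": { "name": "92-Blindernveien 5" } },
        { "match": { "hourOfDay": 9 } }
      ]
    }
  },
  "facets": {
    "stat1": {
      "statistical": { "field": "bikes" }
    }
  }
}'
```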

The bool query allows us to define several queries and how Elasticsearch should join their results. In this case we require a match in both queries. This time the statistical facet gave us the following result:
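Trimmed to the facet section, and with only the values discussed below filled in, the response looked like this:

```json
{
  "facets": {
    "stat1": {
      "_type": "statistical",
      "min": 10.0,
      "mean": 22.14,
      "std_deviation": 3.8
    }
  }
}
```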

The minimum observed number of bikes is 10, the mean is 22.14 and the standard deviation is 3.80. Based on these figures we can conclude that the rack was never depleted between 9:00 and 10:00 during the observed period and is not likely to become depleted at that time in the near future.

To better understand the trends of this particular bike rack we can use the terms_stats facet. The terms_stats facet is similar to the statistical facet, but requires the specification of a key field, which it uses to group the documents, calculating statistics for every term in that field. Using the terms_stats facet our query looks like this:
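A sketch of that request, using the same assumed index, type and field names as before:

```sh
curl -XPOST 'http://localhost:9200/citybike/rack/_search?pretty' -d '{
  "query": {
    "match": { "name": "92-Blindernveien 5" }
  },
  "facets": {
    "bikes_per_hour": {
      "terms_stats": {
        "key_field": "hourOfDay",
        "value_field": "bikes",
        "size": 24
      }
    }
  }
}'
```

Note that the returned terms come back ordered alphabetically ("0", "1", "10", "11", …) rather than numerically.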

The above result is great, but isn’t it a bit odd that a numeric field is ordered alphabetically? The explanation is this: the terms_stats facet works on strings, and a bug in the first version of my parser led to Elasticsearch mapping hourOfDay and dayOfWeek as strings. Of course, the best solution for cleaning up the mess is to reindex the data, but what if reindexing is not feasible and the fields were not indexed in the first place? What if we want a more granular resolution? The histogram facet is designed to work on numerals and allows for specification of the bucket size at query time.

By using the key_script and value_script attributes of the histogram facet, we can extract proper numerals at query time:
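A sketch of such a query; the scripts below parse the string-mapped hourOfDay back into an integer, and the field names remain my assumptions:

```json
{
  "query": {
    "match": { "name": "92-Blindernveien 5" }
  },
  "facets": {
    "bikes_per_hour": {
      "histogram": {
        "key_script": "Integer.parseInt(doc['hourOfDay'].value)",
        "value_script": "doc['bikes'].value",
        "interval": 1
      }
    }
  }
}
```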

You might have noticed that I omitted the curl command in the histogram examples. This has nothing to do with the histogram facet in particular, but with the fact that the scripts use single quotes (’) which would have to be escaped. To run such queries I recommend saving them to a file and using the following command:
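Assuming the query was saved as query.json, and using the same hypothetical index and type names as in the earlier examples:

```sh
curl -XPOST 'http://localhost:9200/citybike/rack/_search?pretty' --data-binary @query.json
```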

In this article we have seen the flexibility of Elasticsearch as a data analysis tool. Go ahead and create a small script and start shoving in some JSON documents, then take it from there. You will probably soon find the need to do some mapping and tweaking of your indexer, and yes, Elasticsearch lets you do that. If there are breaking changes in your mappings, simply create a new index and when it looks good you can create a script to reindex the documents from the old index. The facets are not as flexible as traditional SQL, but they sure are fast, and once you get your head around them they can actually deliver a lot.