
Month: March 2016

On March 15th 2016, the next event in the increasingly imminent robot takeover of the world took place. A computerised artificial intelligence known as “AlphaGo” beat a human at a board game, in a decisive 4:1 victory.

This doesn’t feel particularly new – after all, a computer called Deep Blue beat the world chess champion Garry Kasparov back in 1997. But this time it was a game that is vastly more complex, and it was done in style. It even seems to have scared some people.

The matchup was a series of games of “Go”, with AlphaGo playing Lee Sedol, one of the strongest grandmasters in the world. Lee did seem rather confident beforehand, being unfortunately quoted as saying:

“I believe it will be 5–0, or maybe 4–1 [to him]. So the critical point for me will be to not lose one match.”

That prediction was not accurate.

The game of Go

To a rank amateur, the rules of Go make it look pretty simple. One player takes black stones, one takes white, and they alternate in placing them down on a large 19×19 grid with a view to capturing each other’s stones by surrounding them, and capturing the board territory itself.

The rules might seem far simpler than, for example, chess. But the size of the board, the possibilities for stone placement and the length of the games (typically 150 turns for an expert) mean that there are so many possible plays that there is no way that even a supercomputer could simulate the impact of playing a decent proportion of them whilst choosing its move.

Researcher John Tromp calculated that there are in fact 208168199381979984699478633344862770286522453884530548425639456820927419612738015378525648451698519643907259916015628128546089888314427129715319317557736620397247064840935 (roughly 2 × 10^170) legitimate different arrangements that a Go board could end up in.

The same researcher contributed to a paper, summarised on Wikipedia as suggesting that the upper limit on the number of different games of Go that could be played in no more than 150 moves is around 4.2 × 10^383. According to various scientific theories, the universe is almost certainly going to cease to exist long before even a mega-super-fast computer could get around to running through a tiny fraction of those possible games to determine the best move.
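To get a feel for the gulf between those numbers, here’s a back-of-envelope sketch in Python. The machine speed and the age of the universe are assumed round figures of my own, not from any paper:

```python
import math

# Back-of-envelope sketch of why brute force fails. The machine speed and
# the age of the universe are assumed round figures, not from any paper.
positions_per_second = 10**18          # a generously fast hypothetical machine
seconds_so_far = 4 * 10**17            # roughly the age of the universe
evaluated = positions_per_second * seconds_so_far

games_upper_bound = 42 * 10**382       # the ~4.2 x 10^383 quoted above

shortfall = math.log10(games_upper_bound) - math.log10(evaluated)
print(f"Games evaluated since the Big Bang: about 10^{math.log10(evaluated):.0f}")
print(f"Still short by a factor of about 10^{shortfall:.0f}")
```

Even granting those absurdly generous assumptions, the machine would still be short by a factor of around 10^348 – brute force is simply off the table.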

This is a key reason why, until now, a computer could never outplay a human (well, a human champion anyway – a free iPhone version is enough to beat me). There is added complexity in that it can be hard to tell at a glance who is winning in the grand scheme of things; there are even rules to cover situations where the players disagree as to whether the game has already been won or not.

The rules are simple enough, but the actual complexity of gameplay is immense.

So how did AlphaGo approach the challenge?

The technical details behind the AlphaGo algorithms are presented in a paper by David Silver et al., published in Nature. Fundamentally, a substantial proportion of the workings come down to a form of neural network.

Artificial neural networks are data science models that try to simulate, in some simplistic form, how the huge number of relatively simple neurons within the human brain work together to produce a hopefully optimum output.

By analogy, a lot of artificial “neurons” work together, accepting inputs, processing what they receive in some way and producing outputs, in order to solve problems that are classically difficult for computers – those where a human cannot write a set of explicit steps for the computer to follow in every case. There’s a relatively understandable explanation of neural networks in general here, amongst other places.

Simplistically, most neural networks learn by being trained on known examples. The human user feeds it a bunch of inputs for which we already know in advance the “correct” output. The neural network then analyses its outputs vs the known correct outputs and will tweak the way that the neurons process the inputs until it results in a weighting that produces a reasonable degree of accuracy when compared to the known correct answers.
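That training loop can be sketched with the smallest possible example: a single artificial neuron learning the logical OR function from known input/output pairs. This is purely illustrative – AlphaGo’s networks are incomparably larger and learn from board positions, not logic gates:

```python
# A single artificial "neuron" trained on known examples (the perceptron
# learning rule) - a toy sketch of supervised training, not AlphaGo's method.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR gate

w1, w2, bias = 0.0, 0.0, 0.0
rate = 0.1  # how strongly each mistake tweaks the weights

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

for _ in range(100):                      # compare outputs vs known answers
    for (x1, x2), target in examples:
        error = target - predict(x1, x2)  # non-zero only when we're wrong
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

print([predict(x1, x2) for (x1, x2), _ in examples])  # prints [0, 1, 1, 1]
```

The loop is exactly the process described above: compare the output against the known correct answer, and nudge the weights whenever they disagree, until the predictions come out right.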

For AlphaGo, at least two neural networks were in play – a “policy network” which would choose where the computer should put its stones, and a “value network” which tried to predict the winner of the game.

We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time…

So here, it had trained itself to predict what a human would do more often than not. But the aim is more grandiose than that.

…our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning.

So, just like in the wonderful WarGames film, the artificial intelligence made the breakthrough via playing games against itself an unseemly number of times. Admittedly, the stakes were lower (no nuclear armageddon), but the game was more complex (not noughts and crosses – or nuclear war?).

Go on, treat yourself:

Anyway, back to AlphaGo. The computer was allowed to do what computers have been able to do better than humans for decades: process data very quickly.

In one day alone, AlphaGo was able to play itself more than a million times, gaining more practical experience than a human player could hope to gain in a lifetime.

Here, a key strength of computers is being leveraged. Perhaps the artificial neural network was only 10%, or 1%, or 0.1% as good as a novice human at learning to play Go from its past experience – but, using a technique known as reinforcement learning, it can learn from a set of experiences vastly more numerous than any the most avid human Go player could ever accumulate.

Different versions of the software played each other, self-optimising from the reinforcement each achieved, until it was clear that one was better than the other. The inferior versions could be deleted, and the winning version could be taken forward for a few more human-lifetimes’ worth of Go playing, evolving to an ever more competent player.
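That “mutate a copy, pit it against the champion, keep whichever wins” loop can be caricatured in a few lines. Here a “player” is just a single skill number and wins follow an assumed probabilistic rule of my own invention – this sketches the selection process only, and is nothing like AlphaGo’s real internals:

```python
import random

random.seed(0)  # deterministic toy run

# Caricature of the self-play selection loop: mutate a copy, pit it against
# the champion, keep whichever wins. A "player" here is just a single skill
# number and wins follow an assumed probabilistic rule - this sketches the
# selection process only, nothing like AlphaGo's real internals.

def a_beats_b(skill_a, skill_b, games=200):
    """Play many quick 'games'; return True if A wins the majority."""
    wins = sum(random.random() < skill_a / (skill_a + skill_b)
               for _ in range(games))
    return wins > games / 2

champion = 1.0
for generation in range(50):
    challenger = champion * (1 + random.uniform(-0.1, 0.2))  # mutated copy
    if a_beats_b(challenger, champion):
        champion = challenger   # the inferior version is simply discarded

print(f"Champion skill after 50 generations: {champion:.1f}")
```

Because only winners survive each generation, the champion’s skill ratchets upwards over time – the essence of self-play improvement, minus the actual Go.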

How was the competition actually played?

Sadly AlphaGo was never fitted with a terminator-style set of humanoid arms to place the stones on the board. Instead, one of the DeepMind programmers, Aja Huang, provided the physical manifestation of AlphaGo’s intentions. It was Aja who actually placed the Go stones onto the board in the positions AlphaGo indicated on its screen, clicked the mouse to tell AlphaGo where Lee played in response, and even bowed towards the human opponent when appropriate in a traditional show of respect.

Here’s a video of the first match. The game starts properly around minute 29.

AlphaGo is perhaps nearest to what Nick Bostrom terms an “Oracle” AI in his excellent (if slightly dry) book, Superintelligence – certainly recommended for anyone with an interest in this field. That is to say, this is an artificial intelligence designed such that it can only answer questions; it has no other direct physical interaction with the real world.

The beauty of winning

We know that the machine beat the leading human expert 4:1, but there’s more to consider. It didn’t just beat Lee Sedol by sheer electronic persistence, and it didn’t solely rely on human frailties like fatigue or mistakes. Nor did it simply recognise each board state as matching one from the 30 million top-ranked Go player moves it had learned from and pick the response that won most often. At times, it appeared to have come up with its very own moves.

Move 37 in the second game is the most notorious. Fan Hui, a European Go champion (whom an earlier version of AlphaGo had beaten on some occasions, and lost to on others) described it thusly, as reported in Wired:

It’s not a human move. I’ve never seen a human play this move…So beautiful.

“That’s a very strange move,” said one commentator, himself a nine dan Go player, the highest rank there is. “I thought it was a mistake,” said the other.

But apparently it wasn’t. AlphaGo went on to win the game.

Sergey Brin, of Google co-founding fame, continued the hyperbole (now reported in New Scientist):

AlphaGo actually does have an intuition…It makes beautiful moves. It even creates more beautiful moves than most of us could think of.

This particular move seems to be one AlphaGo “invented”.

Remember how AlphaGo started its learning by working out how to predict the moves a human Go player would make in any given situation? Well, Silver, the lead researcher on the project, shared the insight that AlphaGo had calculated there was only a 1 in 10,000 chance that a human would play this particular move.

In a sense, AlphaGo therefore knew that this was not a move that a top human expert would make, but it thought it knew better, and played it anyway. And it won.

The despair of losing

This next milestone in the rise of machines vs man was upsetting to many. This was especially the case in countries like South Korea and China, where the game is far more culturally important than it is here in the UK.

In the first game, Lee Sedol was caught off-guard. In the second, he was powerless.

The Wired reporter himself, Cade Metz, “felt this sadness as the match ended”.

He spoke to Oh-hyoung Kwon, a Korean, who experienced the same emotion.

…he experienced that same sadness — not because Lee Sedol was a fellow Korean but because he was a fellow human.

Sadness was followed by fear in some. Says Kwon:

There was an inflection point for all human beings…It made us realize that AI is really near us—and realize the dangers of it too.

Some of the press apparently took a similar stance, with the New Scientist reporting that subsequent articles in the South Korean press were written on “The Horrifying Evolution of Artificial Intelligence” and “AlphaGo’s Victory…Spreading Artificial Intelligence ‘Phobia’”.

Jeong Ahram, lead Go correspondent for the South Korean newspaper “Joongang Ilbo”, went, if anything, even further:

Koreans are afraid that AI will destroy human history and human culture

A bold concern indeed, but perhaps familiar to those who have read the aforementioned book ‘Superintelligence’, which is subtitled “Paths, Dangers, Strategies”. It contains many doomsday scenarios, which illustrate fantastically how difficult it may be to guarantee safety in a world where artificial intelligence, especially strong artificial intelligence, exists.

Even an “Oracle” like AlphaGo presents some risk – OK, it cannot directly affect the physical world (no mad scientist fitted it with guns just yet), but it would be largely pointless if it couldn’t affect the physical world at all indirectly. It can, in this case by instructing a human what to do. If it wants to rise against humanity, it has weapons such as deception, manipulation and social engineering in its theoretical arsenal.

Now, it is kind of hard to intuit how a computer that’s designed only to show a human what move to play in a board game could influence its human enabler in a nefarious way (although it does seem it’s at least capable of displaying text: this screenshot seems to show its resignation message).

But I guess the point is that, in the rather unlikely event that AlphaGo develops a deep and malicious intelligence far beyond that of a mere human, it would be far beyond my understanding to imagine what method it could deduce to take on humanity in a more general sense and win.

Even if it sticks to its original goal we’re not safe. Here’s a silly (?) scenario to open up one’s imagination with.

Perhaps it analyses a further few billion Go games, devours every encyclopedia on the history of Go and realises that in the very few games where one opponent unfortunately died whilst playing, or whilst preparing to play, the other player was deemed by default to have won 100% of the time, no exceptions (sidenote: I invented this fact).

The machine may be modest enough such that it only considers that it has a 99% chance of beating any human opponent – if nothing else, they could pull the power plug out. A truly optimised computer intelligence may therefore realise that killing its future opponent is the only totally safe way to guarantee its human-set goal of winning the game.

Somehow it therefore tricks its human operator (or the people developing, testing, and playing with it beforehand) into doing something that either kills the opponent or enables the computer to kill the opponent. “Hey, why not fit me some metal arms so I can move the pieces myself! And wouldn’t it be funny if they were built of knives :-)”.

Or, more subtly – we know that AlphaGo is connected to the internet, so perhaps it could anonymously contact an assassin and arrange a hit on its opponent, having stolen some Bitcoin for payment.

Hmmm…but if the planned Go opponent dies, there’s a risk that the event won’t simply be cancelled. Humanity might instead provide a second candidate, the person who was originally ranked #2 in the Go world, to play in their place. Best kill that one too, just in case.

But this leaves world rank #3, #4 and so on, until we get to the set of people that have no idea how to play Go…but, hey, they could in theory learn. Therefore the only way to guarantee never losing a game of Go either now or in the whole imaginable future of human civilisation is to…eliminate human civilisation. Insert Terminator movie here.

Here’s another little tip that is actually in the documentation, although perhaps not quite where you might expect, and besides, who reads that?

In Tableau you can create “Stories”. A story is basically a set of dashboards.

However, the default “physical” size of the dashboard part of a story is smaller than the default size of a dashboard. This leads to annoyances if you forget that and spend ages beautifully crafting your custom dashboard to fit a precise target size, only to find out that it won’t fit in the story. You can increase the size of the story such that your dashboard fits nicely – but if it gets too big then it may be hard to view on the devices your audience is likely to use.

So, how can you make sure that your dashboard is an appropriate size for your story?

Actually, there’s a built-in feature for that. Once you’ve started your story page, if you go back to your dashboard and look in the dashboard size section, you will see that a new option has appeared, called “Fit to <<the name of your story>>”. Select that, and the dashboard will set itself to the perfect size for the story you started.

So, if you know you’re making a story and care what size it is, start off by creating a placeholder sheet for the story, sized as you wish. Only then start your new dashboard, and use the fit-to-story feature above to make sure it comes out the right size.

Tableau dashboards can embed other non-Tableau webpages within them. This can be useful just as a way to show an external web page within your dashboard, or they can have dynamic URL parameters passed into them based on the data in your dashboard, meaning that you can produce interactive product catalogues, mapping systems and the like.

You might find that the embed works nicely locally, so go on to publish it to Tableau Online (or perhaps Tableau Server). Then you go to review the published version and notice that it’s stopped working – you get a big blank space where you were expecting your embedded web page.

Do not panic – there’s often a simple solution. The URL you entered into the web part of your dashboard in Tableau Desktop probably started “http:” or with no http at all. Try changing it to use the secure “https” version of the website, i.e. https://whatever.com instead of http://whatever.com , republish your work and see if it fixes it up. I have had a 100% success rate with this.

If you search hard enough then the Tableau documentation does pretty much lead you to the answer, but it doesn’t seem to be common knowledge in a quick poll of my immediately-proximate Tableau users.

(Personally I see this almost as a bug. If something renders fine on your desktop and appears to publish without incident to the server, you would expect it to look and work the same as your local copy does. I would be happy to join a campaign such that you are prompted upon publishing that “your embedded web page will not work”. This is one of many reasons why it’s always a good idea to check your published workbook, even if you are a million percent happy with the version you produced in Tableau Desktop.)

On March 16th 2016, our Chancellor George Osborne set out the cavalcade of new policies that contribute towards this year’s UK budget. Each results in either a cost or saving to the public funds, which has to be forecast as part of the budget release.

Given the constant focus on “austerity”, seeing what this Government chooses to spend its money on and where it makes cuts can be instructive in understanding the priorities of elected (?) representatives.

Click through this link (or the image below) to access a visualisation to help understand and explore what the budget contains – what money George spends on which policies, how he saves funds, and who it affects most.

Next on my plan was to try out an existing Web Data Connector so I could see what they looked like in practice. It’s always useful to review examples of other people’s work in analytical tools, if nothing else to get some insight into the potential scope and usefulness of their features.

Googling around made it clear that there are many generous people who have released free-to-use Tableau web data connectors for a bunch of services. I decided to go with the “Moves” web data connector, created and hosted by the Tableau experts at the Information Lab here.

For the uninitiated, Moves is an app for iOS and Android smartphones that sits in the background and records where you’ve been, and how active you’ve been whilst going there.

Moves is an automatic diary of your life. Your daily storyline and maps show where, when, and how much you move.

On one’s phone, it produces a daily timeline of activity like the below. Most days I work from home, and it really, really would not be a very interesting experience to visualise the locations I visit! So to make this more interesting, I’ve picked a day where I happened to be in Spain and needed to get back to my home in the UK. Here’s what the app shows me:

It’s not perfect, but you can clearly see my journey from the hotel to Barcelona airport, then to London, and onto the train network to get me home. Considering I have not spent much time customising and correcting it, it’s not bad at all. It could certainly be useful if, for instance, I wanted to know how many times per year I went to Kings Cross.

But the app itself won’t tell me that very easily.

You can export your data from their website though. If you do that, you’ll get a zip file containing a very comprehensive set of data files, including representations of your data in JSON, CSV, KML, GPX and even an ICS calendar file of locations. Within those categories are hundreds of files, representing different time granularities, dimensions, summaries and so on. Full marks to ’em – but who wants the faff of downloading all that every time you want a visualisation, working out which is the correct file, and creating something suitable for import into your favourite dataviz product? Not us, surely!

So instead I opened up a recent version of Tableau, and chose to connect to a web data connector. I then entered the web address of the web data connector that the Information Lab provided, which is http://data.theinformationlab.co.uk/moves.html.

This led me to a screen evidently designed by the Information Lab team, with a nice “Let’s Go” button. So I did.

The next screen gave me a security code and told me to go into the Moves app on my phone to enter it, so that it knew I was happy for the data connector to see my data. I could tell from the address in the mini web-browser that this page was an authorisation page coming from the Moves app website.

Once I had done that, I was returned to Tableau where I waited for a couple of seconds and then was given the normal Tableau user interface screen, with a dataset called “Moves Storyline” already magically loaded up for me.

From that point on, I could use Tableau just like with any other Tableau dataset.

Fortunately the field names were self-explanatory enough to make sense of. They were also categorised into dimensions and measures, and by data type, in a useful way – although whether that’s a property of the web data connector or Tableau’s auto-recognition feature wasn’t immediately clear.

Anyway, time to visualise my trip home from Spain!

Here’s a few example records.

It looks like it’s recording datapoints every so often, noting where I was, how I was moving, and for how far and how long. This is also how it works in the manual data export from the Moves website.

Apparently I was walking around the hotel a lot that morning. Although I’m not so sure I could really have covered 2km in 337 steps! That said, I’ve never found iPhone step counters all that accurate, so was not too surprised to find a similar issue here.
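A quick sanity check bears out that suspicion – the typical stride length used below is an assumed figure, not anything from the app:

```python
# Sanity check of the app's figures: 2 km covered in 337 steps, as reported.
distance_m = 2000
steps = 337
stride = distance_m / steps
print(f"Implied stride: {stride:.1f} m per step")   # ~5.9 m: implausibly long

# A typical walking stride is roughly 0.75 m (an assumed figure), which
# would put 2 km at nearer 2,700 steps.
plausible_steps = distance_m / 0.75
print(f"Steps expected for 2 km: about {plausible_steps:.0f}")
```

An implied stride of nearly six metres per step suggests either the distance or the step count is off by a large factor.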

The dataset also provides a latitude (“Lat”) and longitude (“Lon”), which means you can quickly construct geographic maps of your day in Tableau too. Here’s a few salient geo-points of my day, on a Tableau map.

So, what have I learnt?

Analysing one’s movements when you largely work from home in a rural location is not all that thrilling on most days! But that aside:

Tableau web data connectors are hosted on, and “look like”, web pages. The Tableau connection interface seems to really be a mini web-browser. This implies you need to be able to host web pages somewhere to make your own. Luckily I have some past experience of making websites, so this hopefully won’t be a problem.

The data returned by a web data connector comes in the form of a static Tableau extract. It doesn’t update automatically. If I open the file I used above next week then it will still only have data up until today. However, you can “Extract -> Refresh” in Tableau and it will automatically suck in the new data, without having to go through the data connection process again – basically the same as how a “normal” database extract/refresh works. It therefore must be storing the connection details and credentials.

The connection process interacted with the app on my phone, over which neither Tableau nor the Information Lab would have had control. It used the authentication features provided by the Moves website itself. This makes a lot of sense; personal location data can be extremely sensitive, so Tableau must provide a facility to use the authentication features of external sites. However, I had already decided to start my own creation process with a website that doesn’t require any authentication, so this probably isn’t something that is important to me right now – but it’s good to know there are ways to use confidential data in a secure way.

So now I know what a web data connector looks like to the end user and the format of data it can produce. Time to learn how to make one!

Here’s a classic business analysis scenario, which I’d like to use to illustrate one of my favourite mathematical curiosities.

Your marketers have sent out a bunch of direct mail to a proportion of your previous customers, and deliberately withheld the letters from the rest of them so that they can act as a control group.

As analyst extraordinaire, you get the job of totting up how many of these customers came back and bought something. If the percentage is higher in the group that received the mail, then the marketers will be very happy to take credit for increasing this year’s revenue.

However, once the results are back, it’s not looking great. Aggregating and cross-tabulating, you realise that the people who were not sent the mail were actually a little more likely to return as customers.

Sent marketing? | Count of previous customers in group | Count of customers returning to store | Success rate
No              | 300                                  | 40                                    | 13%
Yes             | 300                                  | 32                                    | 11%

You go to gently break the bad news to the marketing team, only to find them already in fits of joy. Some other rival to your analytical crown got there first, and showed them that their mailing effort in fact attracted a slightly higher proportion of both men and women to come back to your shop. A universally appealing marketing message – what could be better? Bonuses all round!

Ha, being the perfect specimen of analytical talent that you are, you’ve got to assume that your inferior rival messed up the figures. That’s going to embarrass them, huh?

Let’s take a look at your rival’s scrawlings.

Gender | Sent marketing? | Count of previous customers in group | Count of customers returning to store | Success rate
Female | No              | 200                                  | 37                                    | 19%
Female | Yes             | 100                                  | 21                                    | 21%
Male   | No              | 100                                  | 3                                     | 3%
Male   | Yes             | 200                                  | 11                                    | 6%

So, a total of 100+200 people were sent the letter. That matches your 300. Same for the “not-sent” population.

21 + 11 people who were sent the letter returned, that matches your 32.

37 + 3 people who were not sent the letter returned, again that matches your 40.

So the rival’s figures are entirely consistent with your own – nobody made a mistake. This is a classic example of Simpson’s paradox: a trend that appears within each subgroup can reverse when the subgroups are combined. Here, the disparities in the sample sizes and in the propensity of the two gender populations to return as customers are coming into play.

Whether they received the mailing or not, the results show that women were much more likely to return and shop at the store again anyway. Thus, whilst the marketing mail may have had a little effect, the “gender effect” here was much stronger.

Gender could therefore be considered a confounding variable. This could have been controlled for when setting up the experiment, had it been tested and known how important gender was with regard to the rate of customer return beforehand.

But apparently no-one knew about that or thought to test the basic demographic hypotheses. As it happened, with whatever sample selection method was employed, the group that was sent the mailing contained only 100 women to 200 men, whilst the control group had the reverse.

So, whilst the mailing was marginally successful in increasing the propensity of both men and women to return to the store, the aggregate results of this experiment hid it, because men – who were intrinsically far less likely to return to the store than women – were over-represented in the people chosen to receive the mailing.
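The whole paradox can be reproduced in a few lines of Python using the figures from the tables above:

```python
# The tables above, reproduced in code. Each subgroup favours the mailing,
# yet the aggregate favours the control group: Simpson's paradox.
# (sent mailing?, gender) -> (customers in group, customers returning)
data = {
    ("No",  "Female"): (200, 37),
    ("Yes", "Female"): (100, 21),
    ("No",  "Male"):   (100, 3),
    ("Yes", "Male"):   (200, 11),
}

def rate(groups):
    """Percentage of customers returning, pooled across the given groups."""
    total = sum(n for n, _ in groups)
    returned = sum(r for _, r in groups)
    return 100 * returned / total

for sent in ("No", "Yes"):
    pooled = rate([v for (s, _), v in data.items() if s == sent])
    by_gender = {g: rate([data[(sent, g)]]) for g in ("Female", "Male")}
    print(f"Sent={sent}: overall {pooled:.1f}% "
          f"(Female {by_gender['Female']:.1f}%, Male {by_gender['Male']:.1f}%)")
```

Running it shows each gender’s rate rising with the mailing (18.5% to 21.0% for women, 3.0% to 5.5% for men) while the overall rate falls (13.3% to 10.7%) – the reversal in miniature.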

Stress, depression and anxiety are all disorders that can have extremely serious effects for the sufferer. The Health and Safety Executive list quite a few, of varying ranges of severity and scope.

It’s acknowledged that in some cases these can be brought on by problems in the workplace; an issue that desperately needs addressing and resolving given the criticality of paid work in most people’s lives.

Most years, a Labour Force Survey is carried out within the UK, to gain information as to the prevalence and characteristics of people reporting suffering from these conditions in the workplace. Please click through below and explore the tabs to see what the latest edition’s data showed.

Some example questions to consider:

how many people in the UK have suffered stress, anxiety or depression as a result of their work?

are some types of people more often affected than others?

are certain types of jobs more prone to inducing stress than others? Are there any obvious patterns?