Human Risk in action

Recent news about WeWork, the office space provider, illustrates Human Risk on a number of levels.

By now, you've probably heard of WeWork (or "The We Company" as the parent company is called). "We" rents out shared working space and was due to float on the NY Stock Exchange, but has had to delay those plans.

To catch up on the whole story, I highly recommend this Business Insider article. You might think you know all the details, but I'm pretty sure there'll be something in it that surprises you.

WeWork is symptomatic of the phenomenon of "Unicorns": start-up companies that are valued (over-valued?) in excess of $1bn. If, like me, you're sceptical that these companies can all be worth that much, then meet Scott Galloway, a digital economy specialist who refers to WeWork as WeWTF and records highly entertaining and insightful videos like this one:
[Reader warning: contains NSFW language]

Bi-Weekly Cognitive Bias

Cognitive Biases are generally associated with Humans rather than Machines. But as Excavating.AI, a recent art project, illustrates, they're arguably even more relevant when it comes to Artificial Intelligence (AI).

To help AI cope with the challenges of identifying and categorising people, researchers at Princeton and Stanford Universities built ImageNet, a database of pictures tagged by people.

How the AI learns from this can produce unexpected and undesirable results. That led some researchers to create ImageNet Roulette: an art project that allowed people to upload photographs and see what the ImageNet-trained AI saw in them.

Here's what it made of photos of a recent G7 leaders' meeting and a famous scene from the White House Situation Room:

As these examples illustrate, and as this Guardian article nicely explains, the consequences of AI learning from biased human data are potentially severe. Having made their point with ImageNet Roulette, the researchers have taken the site offline, but you can read more about the Excavating.AI project here.