What is Machine Learning?

Words by Martin Kemka and Nicholas Sherwood

Illustrations by Mariah Arvanitakis

Even taken literally, machine learning sounds threatening to us non-machines. So, what is it and should we be worried?

Despite what you may think, the technological basis of machine learning is remarkably simple. Machine learning relies on vast amounts of data, or "datasets", that are read by an algorithm (a set of steps that performs a task). Once the algorithm has been shown enough data, it can identify patterns. These patterns can be thought of as weighted memories, used to infer what the future could be based on what the algorithm has seen before.
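For the technically curious, here is a toy sketch of that idea: an algorithm is shown labelled examples, nudges its internal "weights" (its memories) whenever it guesses wrong, and then uses those weights to make predictions about data it has never seen. All of the numbers and names below are invented for illustration.

```python
def train(examples, labels, epochs=20, lr=0.1):
    """Learn one weight per feature plus a bias (a tiny 'perceptron')."""
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - pred  # 0 if the guess was right, +/-1 if wrong
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Invented dataset: [hours of sun, cm of rain] -> did the flower bloom?
data    = [[8, 1], [7, 2], [2, 6], [1, 7], [6, 1], [2, 5]]
bloomed = [1, 1, 0, 0, 1, 0]

w, b = train(data, bloomed)
print(predict(w, b, [9, 0]))  # a sunny, dry day -> prints 1
```

The "memory" here is nothing more than a handful of learned numbers, but the same principle, scaled up enormously, underlies the systems described in this article.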

The internet is full of datasets and is itself one of the largest datasets used for machine learning. The data science platform Kaggle is often a first port of call for data scientists to download the data samples used to build new algorithms. Its datasets have been used, for example, to predict the future health of an anonymous patient or the survival likelihood of a Titanic passenger. Kaggle’s publicly available datasets include almost everything: from the data released by NYC’s Citi Bike program (which could be used to help predict collisions) to the ASL alphabet (which could be used to convert a video of sign language to text). These data starting points, free and available to the public, form the foundation of machine learning and are often the basis for serious medical research, targeted advertising and community building.

Kaggle’s current success rides on its usability, but its origins are closer to the simple MNIST database, a collection of 60,000 handwritten digits used to train image processing systems. At a consumer level, something like MNIST is used to test methods that can then identify an object in a photo, like the face of a friend or a number on a street sign. Most new smartphones have facial recognition software that can unlock the phone. This is possible because of the vast datasets available of faces, expressions, emotions and positions. These facial features are read by an algorithm, which cues the phone to recognise your face, thereby unlocking the device.

But datasets weren’t always the darlings of machine learning. Their importance was evident, but their boundaries were not being pushed until Fei-Fei Li, an Associate Professor in Computer Science at Stanford University, tried a new tactic. Li’s famed ImageNet dataset was built on a logical hypothesis: the bigger the dataset, the better the results. For example, the more images of different kinds of flowers there are, the more likely an algorithm can accurately isolate a flower in a photo, simply because it has more photographs to reference.

Illustration by Mariah Arvanitakis.

Based on Princeton psychologist George Miller’s 1980s project WordNet, ImageNet is a database of more than 14 million images organised into key categories. The project began in 2006 and was launched in 2009 after three years of time-consuming categorising, which drew plenty of skepticism about its efficiency and its commercial or scientific viability. That skepticism was well and truly shattered once ImageNet launched an annual public competition to create an algorithm that could predict the items within a photo with the highest accuracy. Cognitive psychologist and computer scientist Geoffrey Hinton’s team won in 2012, the first to push ImageNet’s accuracy over 75%, using a deep learning algorithm. That algorithm was built on an ‘artificial neural network’, loosely modelled on how a brain works: a form of machine learning covering a range of techniques that allow a system to find patterns in data without being explicitly programmed with fixed rules. In a sense, the machine is making its own decisions. Since 2012, deep learning algorithms have become far more accurate, effectively confirming ImageNet’s hypothesis and cementing image recognition as a practical technology.
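To make the ‘artificial neural network’ idea concrete, here is a miniature one: layers of simple units, each summing up weighted inputs and squashing the result into a yes-or-no-ish signal. The weights below are hand-picked for illustration; in real deep learning, they are found automatically from training data, exactly as the paragraph above describes.

```python
import math

def sigmoid(z):
    """Squash any number into the range (0, 1), like a neuron firing or not."""
    return 1.0 / (1.0 + math.exp(-z))

def network(x0, x1):
    """Two inputs -> two hidden units -> one output.

    With these hand-picked weights the network computes XOR
    ('either input, but not both'), a pattern no single straight
    line can separate -- which is what hidden layers make possible.
    """
    h1 = sigmoid(10 * x0 + 10 * x1 - 5)    # roughly: 'is either input on?'
    h2 = sigmoid(10 * x0 + 10 * x1 - 15)   # roughly: 'are both inputs on?'
    out = sigmoid(10 * h1 - 20 * h2 - 5)   # 'either, but not both'
    return round(out)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, '->', network(a, b))  # prints 0, 1, 1, 0
```

A network that recognises photos works the same way, just with millions of weights and many more layers stacked between input and output.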

Machine learning’s innovative nature needed a new style of learning to keep up. The popularity of MOOCs, or massive open online courses, has risen steadily alongside machine learning, thanks in part to Andrew Ng, a computer scientist and co-founder of one of the largest online course providers, Coursera. Coursera’s balance of courses from the world’s leading universities (including Ng’s own, Stanford) and from companies like his startup Deeplearning.ai teaches millions of people worldwide via online correspondence. Ng’s machine learning course is helping to fill the demand for educated machine learners, or data scientists, now a coveted position worldwide. The need for talent in this sector is obvious given the impact it can have. LinkedIn’s success can largely be attributed to one of its data scientists, Jonathan Goldman, who introduced the “People You May Know” feature to the platform. The feature had the highest click-through rate the platform had ever seen and strengthened the connections made on the platform.

Access to the data we create is often a double-edged sword, though. We are largely unaware of the data we give away for free on Facebook, Instagram and anywhere else that leaves a digital paper trail; it is all collected and used. We may feel vulnerable in the face of data and at the mercy of our own ignorance, but the creative use of this data through machine learning could have a significant impact on our ability to solve some of the most pressing issues of our time. Machine learning is now being used to tackle malaria in developing nations, applying image recognition algorithms to blood samples to improve identification rates. The same deep neural networks pioneered by Hinton’s team on ImageNet, used to determine whether a flower is a flower (or whether a certain flower is a rose), are now being used to save lives.

Andre Esteva, a PhD student at Stanford University, is currently researching skin cancer. Using a dataset of 129,450 images, Esteva has trained a deep convolutional neural network to look for differences, at a pixel level, between images of skin lesions. Its accuracy was tested on the deadliest and the most common skin cancers, and its findings were compared with those of a board of dermatologists. The results showed that Esteva’s neural network was able to diagnose as accurately as a professional dermatologist. If, in the future, our smartphones are fitted with neural networks of their own, we’ll be able to access this same level of diagnostic care simply by downloading an app, uploading a photo and getting a result. In this way, machine learning will provide vital, instant and accurate information to everyone.
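The pixel-level operation at the heart of a convolutional network like the one Esteva trained can be sketched in a few lines: a small grid of weights (a ‘filter’) slides across the image, and the strength of its response at each position marks where a visual pattern, such as the edge of a lesion, occurs. The tiny image and filter below are invented for illustration; a real network learns thousands of such filters from data.

```python
def convolve(image, kernel):
    """Slide a kernel over a 2D image (no padding) and sum the overlaps."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            total = sum(image[r + i][c + j] * kernel[i][j]
                        for i in range(kh) for j in range(kw))
            row.append(total)
        out.append(row)
    return out

# A 4x6 'image': bright on the left, dark on the right
image = [[9, 9, 9, 0, 0, 0]] * 4

# A filter that responds strongly to vertical edges
edge = [[1, 0, -1],
        [1, 0, -1],
        [1, 0, -1]]

print(convolve(image, edge))  # prints [[0, 27, 27, 0], [0, 27, 27, 0]]
```

The big responses (27) appear exactly where bright meets dark. Stack many of these filters in layers and the network can pick out progressively more complex patterns, from edges to textures to whole lesions.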


Martin Kemka is the founder of Northraine, a BCorp machine learning production house that develops models to recondition the human condition. Northraine is also committed to donating 20 percent of their work for pro-bono causes in the human rights and social space.


Matters Journal—responsible business, sustainability, social impact.

Matters Journal acknowledges the Traditional Owners of the land on which we work, the Wurundjeri, Boonwurrung, Wathaurong, Taungurong and Dja Dja Wurrung people of the Kulin Nation and we pay our respects to their Elders past and present.