Machine learning
essentially allows computer programs to adjust their own behavior in response
to external data, without explicit human intervention. It shares much of its
functionality with data mining, except that mined results are processed by
machines rather than humans. It is divided into two major categories:
supervised and unsupervised learning.

Supervised machine
learning trains a model on labeled data. In other words, the correct results
are known in advance by the (human) programmer, and the system is trained to
"learn" to reproduce them. Unsupervised machine learning, by contrast, draws
inferences from unlabeled input data, often as a means of detecting unknown
patterns.
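The distinction above can be sketched in a few lines of plain Python on toy data (real systems would use a library such as scikit-learn; the data and the simple nearest-center logic here are invented purely for illustration):

```python
# Toy 1-D data: two obvious groups of points.
points = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]

# --- Supervised: every training point comes with a human-supplied label. ---
labels = [0, 0, 0, 1, 1, 1]
# "Training" is just computing the mean of each labeled group...
centers = {c: sum(p for p, l in zip(points, labels) if l == c) / labels.count(c)
           for c in set(labels)}
# ...and prediction assigns a new point to the nearest class center.
predict = lambda x: min(centers, key=lambda c: abs(x - centers[c]))
print(predict(2.5), predict(11.5))  # the known labels are reproduced

# --- Unsupervised: the same points, but with no labels at all. ---
# A k-means-style loop: start from two guesses, assign each point to the
# nearer center, and refine -- the two groups emerge with no answers given.
c0, c1 = points[0], points[-1]
for _ in range(5):  # a few refinement rounds
    g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
    g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
    c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
print(g0, g1)  # the two clusters discovered on their own
```

The first half "learns" answers a human already knew; the second half detects structure nobody told it about.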

Deep learning is
distinguished by its layered, hierarchical algorithms, as opposed to the
shallower models of traditional machine learning. Deep learning hierarchies
grow increasingly complex and abstract as they develop (or "learn"), and they
can operate without supervised logic. Simply put, deep learning is a highly
advanced, accurate and automated form of machine learning, and it is at the
forefront of artificial intelligence technology.

Business Applications of Deep Learning

Machine learning is
already commonly used in several different industries. Social media,
for instance, uses it to curate content feeds in user timelines. Google Brain
was founded several years ago with the intent of productizing deep learning
across Google’s range of services as the technology evolves.

With its focus on predictive analytics,
the field of marketing is particularly invested in deep learning innovation.
And since data accumulation is what drives the technology, industries like
sales and customer support (which
already possess a wealth of rich and diverse customer data) are uniquely
positioned to adopt it at the ground level.

Early adoption of
deep learning could very well be the key determining factor in how much
specific sectors benefit from the technology, especially in its earliest
phases. However, a few specific pain points are keeping many businesses
from taking the plunge into deep learning technology investment.

The V’s of Big Data and Deep Learning

In 2001, an analyst
for META Group (now Gartner) by the name of Doug Laney outlined what
researchers perceived to be the three main challenges of big data: volume, variety and velocity.
Over a decade and a half later, the rapid increase in points of access to the
internet (due largely to the proliferation of mobile devices and
the rise of IoT technology)
has brought these issues to the forefront for major tech companies as well as
smaller businesses and startups alike. (To learn more about the three v's, see Today's Big Data
Challenge Stems From Variety, Not Volume or Velocity.)

Recent statistics on
global data usage are staggering. Studies indicate that roughly 90 percent of
all the world's data was created within the past few years alone.
Worldwide mobile traffic amounted to roughly seven exabytes per month over 2016, according
to one
estimate, and that number is expected to increase by about seven
times within the next half decade.
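The scale of those traffic figures is easier to grasp with a quick back-of-the-envelope calculation (the monthly figure and the sevenfold multiplier are the article's estimates, not exact measurements):

```python
# Back-of-the-envelope check of the mobile traffic figures cited above.
EXABYTE = 10**18  # bytes

monthly_2016 = 7 * EXABYTE    # ~7 EB of mobile traffic per month in 2016
projected = monthly_2016 * 7  # ~sevenfold growth within half a decade

print(projected / EXABYTE)    # roughly 49 EB of mobile traffic per month
```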

Beyond volume,
variety (the rapidly increasing diversity in types of data as new media evolves
and expands) and velocity (the speed at which electronic media is sent to data centers and hubs) are also major
factors in how businesses are adapting to the burgeoning field of deep
learning. And to expand on the mnemonic device, several other v-words have been
added to the list of big data pain points in recent years, including:

Validity: The measure of input data accuracy in big data systems. Invalid data that goes undetected can cause significant problems, as well as chain reactions, in machine learning environments.

Vulnerability: Big data naturally evokes security
concerns, simply by virtue of its scale. And although there is great
potential seen in security systems that are enabled by machine learning,
those systems in their current incarnations are noted for their lack of
efficiency, particularly due to their tendency to generate false alarms.

Value: Proving the potential value of big data
(in business or elsewhere) can be a significant challenge for any number
of reasons. If any of the other pain points in this list cannot be
effectively addressed, then they in fact could add negative value to any
system or organization, perhaps even with catastrophic effect.
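The validity point above is often addressed with a simple gate in front of the training pipeline: records that fail basic sanity checks are quarantined instead of silently flowing into the model. A minimal sketch (the field names and ranges here are hypothetical):

```python
def validate(record):
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    # Hypothetical rule: age must be a number in a plausible human range.
    age = record.get("age")
    if not isinstance(age, (int, float)) or not 0 <= age <= 120:
        problems.append("age out of range")
    # Hypothetical rule: spend must be present and non-negative.
    spend = record.get("spend")
    if spend is None or spend < 0:
        problems.append("negative or missing spend")
    return problems

incoming = [
    {"age": 34, "spend": 120.0},
    {"age": -5, "spend": 80.0},   # invalid: would silently skew the model
    {"age": 41, "spend": None},   # invalid: missing value
]
clean = [r for r in incoming if not validate(r)]
quarantined = [r for r in incoming if validate(r)]
print(len(clean), "clean,", len(quarantined), "quarantined")
```

Quarantining rather than dropping keeps the bad records available for auditing, which is how the "chain reactions" mentioned above get traced back to their source.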

Other alliterative
pain points that have been added to the list include variability, veracity,
volatility and visualization – all presenting their own unique sets of
challenges to big data systems. And more might still be added before the
list (probably) tapers off over time. While it may seem a bit contrived to
some, the mnemonic “v” list encompasses serious issues confronting big data
that play an important role in the future of deep learning.

The Black Box Dilemma

One of the most
attractive features of deep learning and artificial intelligence is that both
are intended to solve problems that humans can't. The same phenomenon that
makes this possible, however, also presents an interesting dilemma, which
comes in the form of what's known as the "black box."

The neural network
created through the process of deep learning is so vast and so complex that its
intricate functions are essentially inscrutable to human observation. Data scientists and
engineers may have a thorough understanding of what goes into deep learning
systems, but how they arrive at their output decisions more often than not goes
completely unexplained.

While this might not
be a significant issue for, say, marketers or salespeople (depending on what
they're marketing or selling), other industries require a certain amount of
process validation and reasoning in order to get any use out of the results. A
financial services company, for instance, might use deep learning to establish
a highly efficient credit scoring mechanism. But credit scores must often come
with some sort of verbal or written explanation, which would be difficult to
provide if the actual credit scoring equation is totally opaque and unexplainable.
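One common workaround, sketched below, is to pair or replace the opaque model with an interpretable linear "scorecard" whose per-feature contributions double as the written explanation regulators expect. The feature names and weights here are invented purely for illustration, not drawn from any real scoring system:

```python
# Hypothetical scorecard: each weight is a transparent, auditable number.
weights = {"payment_history": 0.5, "utilization": -0.3, "account_age": 0.2}
applicant = {"payment_history": 0.9, "utilization": 0.8, "account_age": 0.4}

# The score is a simple sum of per-feature contributions...
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# ...so every point of the final score is attributable to one input,
# and the model can say *why* it decided what it decided.
for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{feature:>16}: {c:+.2f}")
print(f"{'score':>16}: {score:+.2f}")
```

A deep network offers no such term-by-term decomposition, which is precisely the black box problem the sectors below run into.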

This problem extends
to many other sectors as well, notably within the realms of health and safety.
Medicine and transportation could both conceivably benefit in major ways from
deep learning, but also face a significant obstacle in the form of the black box.
Any output results in those fields, no matter how beneficial, could be wholly
discarded on account of their underlying algorithms’ complete obscurity. This
brings us to perhaps the most controversial pain point of them all…

Regulation

In the spring of 2016,
the European Union passed the General Data
Protection Regulation (GDPR), which (among other things) grants
citizens the “right to an explanation” for automated decisions generated by
machine learning systems that “significantly affect” them. Scheduled to take
effect in 2018, the regulation is causing concern among tech companies who are
invested in deep learning on account of its impenetrable black box, which would
in many cases obstruct explanation mandated by the GDPR.

The “automated
individual decision-making” that the GDPR intends to restrict is an essential
feature of deep learning. But concerns over this technology are inevitable (and
largely valid) when the potential for discrimination is so high and
transparency so low. In the United States, the Food and Drug Administration
similarly regulates the testing and marketing of drugs by requiring those
processes to remain auditable. This has presented obstacles for the
pharmaceutical industry, as has reportedly
been the case for Massachusetts-based biotechnology company
Biogen, which has been prevented from using uninterpretable deep learning
methods due to the FDA rule.

The implications of
deep learning (moral, practical and beyond) are unprecedented and, frankly,
quite profound. A great deal of apprehension surrounds the technology due in
large part to a combination of its disruptive potential and its opaque logic
and functionality. If businesses can prove the existence of tangible value
within deep learning that exceeds any conceivable threats or hazards, then they
could help lead us through the next critical phase of artificial intelligence.

Colyn Emery has worked in
digital media since 2007. A lifetime Southern California native, he worked in
video and broadcasting after earning his BFA in Creative Writing from Chapman
University. He went on to earn his MFA in Broadcast Cinema from Art Center
College of Design in 2012, and began working as a freelance writer in early
2015. Since then, he has written blogs and articles for several popular content
sites, and has copywritten for numerous brands and startups.
