Four ways to start incorporating AI and personalization into your marketing today

Since the invention of mass media, the primary focus of marketing has arguably been to increase its level of personalization. Marketers constantly seek more targeted audiences, and strive to deliver messages that speak more directly to those audiences. So it's no surprise that AI and machine learning — with their ability to predict consumer behavior and make personalized recommendations on the fly — have captured the attention of the marketing world. But the elephant in the room is that advances in machine learning have far outpaced most marketers' ability to harness them.

Unfortunately, this inability to personalize the customer experience is a huge missed opportunity. Customers now expect a tailored experience, including customized recommendations and a personal touch. And customers are willing to reward companies that provide it. According to Gartner, "By 2018, organizations that have fully invested in all types of personalization will outsell companies that have not by 20%." Even more alarming, customers are increasingly likely to dump brands that don't offer personalization. According to a 2016 Salesforce study: "… more than half (52%) of consumers are likely to switch brands if a company doesn't make an effort to personalize communications to them, 65% of business buyers say the same about vendor relationships."

How a new wave of machine learning will impact today’s enterprise

Advances in deep learning and other machine learning algorithms are currently causing a tectonic shift in the technology landscape. Technology behemoths like Google, Microsoft, Amazon, Facebook and Salesforce are engaged in an artificial intelligence (AI) arms race, gobbling up machine learning talent and start-ups at an alarming pace. They are building AI technology war chests in an effort to develop an insurmountable competitive advantage.

While AI and machine learning are not new, the current momentum behind AI is distinctly different today, for several reasons. First, advances in computing technology (GPU chips and cloud computing, in particular) are enabling engineers to solve problems in ways that weren’t possible before. These advances have a broader impact than just the development of faster, cheaper processors, however. The low cost of computation and the ease of accessing cloud-managed clusters have democratized AI in a way that we’ve never seen before. In the past, building a computer cluster to train a deep neural network would have required access to deep pockets or a university research facility. You would have also needed someone with a Ph.D. in mathematics who could understand the academic research papers on subjects like convolutional neural networks.

Five key takeaways for designing, building and deploying serverless applications in the real world

The term “serverless architecture” is a recent addition to the technology lexicon, coming into common use within the last year or so, following the launch of AWS Lambda in 2014. The term is both quizzical and provocative. Case in point: while I was explaining the concept of serverless architecture to a seasoned systems engineer recently, he stopped me mid-sentence—worried that I had gone insane—and asked: “You realize there is actual hardware up there in the cloud, right?” Not wanting to sound crazy, I said yes. But secretly I thought to myself: “Yet, if my team doesn’t have to worry about server failures, then for all practical purposes, hardware doesn’t exist in the cloud—it might as well be unicorn fairy dust.” And that, in a nutshell, is the appeal of serverless architecture: the ability to write code on clouds of cotton candy, without a concern for the dark dungeons of server administration.

But is the reality as sweet as the magical promise? At POP, we put this question to the test when we recently deployed an app in production utilizing a serverless architecture for one of our clients. However, before we review the results, let’s dissect what serverless architecture is.
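To make the idea concrete, here is a minimal sketch of the unit of deployment in this model: a Lambda-style handler function. The event fields and greeting are invented for illustration; a real deployment would wire this up through AWS's tooling and an event source such as API Gateway.

```python
# Minimal sketch of an AWS Lambda handler, the unit of deployment in a
# serverless architecture. The event fields here are illustrative only.

def handler(event, context):
    # The platform invokes this function per request; there are no
    # servers for the team to provision, patch, or monitor.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, you can invoke the handler directly to test it:
print(handler({"name": "POP"}, None))
```

The appeal is that everything outside this function — capacity, scaling, failure recovery — becomes the cloud provider's problem rather than yours.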

Technology managers find themselves between a rock and a hard place, forced to choose between focusing on technical depth or leadership excellence. A potential solution comes from an unlikely source.

A workplace dynamic I’ve always found fascinating is the instinctual need for people to size up the technical depth of a technology leader upon first introduction. The hands-on technologists in the room want to determine if the manager understands what they do on a day-to-day basis. The non-technical people want to assess if she’ll be able to communicate clearly, or if she speaks in technical gibberish.

This social dynamic is a natural side effect of the dual nature of the senior technology leadership role. On the one hand, technology managers must create and operate code and infrastructure, which requires detailed, technical knowledge. On the other hand, they must translate technical concepts into business strategy and manage a team, which requires communication and leadership skills.

The challenge for senior technology leaders is that we can’t do both perfectly. Therefore, the goal of the CTO and other senior technology leaders is to strike the right balance between technical depth and business leadership, based on the size and focus of the company. However, this is easier said than done.

Neural Networks form the foundation for Deep Learning, the technique AlphaGo used with Reinforcement Learning (RL) to beat a Go master. In this article, we’ll explain the basics of how neural networks work.

The focus of this series is to dissect the methods used by DeepMind to develop AlphaGo, the machine learning program that shocked the world by defeating a worldwide Go master. By peeking under the hood of DeepMind’s algorithm, we hope to demystify Machine Learning (ML) and help people understand that ML is merely a computational tool, not a dark art destined to bring about the robot apocalypse. In the earlier articles, we discussed why AlphaGo’s victory represents a breakthrough, and we explained the concepts and algorithms behind reinforcement learning—a key component of DeepMind’s program. In this article, we’ll explore artificial neural networks. Neural networks form the foundation of deep learning, the technique that enabled DeepMind’s reinforcement learning algorithm to solve extremely large and complex problems like Go. Deep learning is an advanced form of artificial neural network. So, before we dive into deep learning in the next article, we’ll first explore how a neural network operates.
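As a small preview, the following is a minimal sketch of the computation inside a single artificial neuron: a weighted sum of inputs plus a bias, passed through a sigmoid activation. The inputs and weights are arbitrary illustrative values, not drawn from any real network.

```python
import math

def sigmoid(x):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # weighted sum of inputs, shifted by a bias, then a nonlinearity
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Two illustrative inputs and weights; z = 0.5*0.8 + (-1.0)*0.2 + 0.1 = 0.3
out = neuron([0.5, -1.0], [0.8, 0.2], bias=0.1)
print(round(out, 3))
```

A full network is simply many of these units wired together in layers, with the weights adjusted during training.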

Reinforcement Learning (RL) is the driving algorithm behind AlphaGo, the machine that beat a Go master. In this article, we explore how the components of an RL system come together in an algorithm that is able to learn.

Our goal in this series is to gain a better understanding of how DeepMind constructed a learning machine — AlphaGo — that was able to beat a worldwide Go master. In the first article, we discussed why AlphaGo’s victory represents a breakthrough in computer science. In the second article, we attempted to demystify machine learning (ML) in general, and reinforcement learning (RL) in particular, by providing a 10,000-foot view of traditional ML and unpacking the main components of an RL system. We discussed how RL agents operate in a flowchart-like world represented by a Markov Decision Process (MDP), and how they seek to optimize their decisions by determining which action in any given state yields the most cumulative future reward. We also defined two important functions, the state-value function (represented mathematically as V) and the action-value function (represented as Q), that RL agents use to guide their actions. In this article, we’ll put all the pieces together to explain how a self-learning algorithm works.
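To ground these ideas, here is a minimal tabular Q-learning sketch for a tiny invented MDP: a five-state corridor with a reward at the right end. This is a standard textbook algorithm, not AlphaGo's actual implementation, but it shows how repeated updates to the action-value function Q steer the agent toward the actions with the highest cumulative future reward.

```python
import random

random.seed(0)  # deterministic run for illustration

# A 5-state corridor: start at state 0, reward +1 for reaching state 4.
N_STATES = 5
ACTIONS = [1, -1]  # move right or left

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current Q, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge Q(s,a) toward reward + discounted best future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy should move right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Note how the agent never needs a model of the corridor: the estimates in Q, refined by trial and error, are enough to recover the optimal behavior.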

Reinforcement Learning (RL) is at the heart of DeepMind’s Go playing machine. In the second article in this series, we’ll explain what RL is, and why it represents a break from mainstream machine learning.

In the first article in this series, we discussed why AlphaGo’s victory over world champ Lee Sedol in Go represented a major breakthrough for machine learning (ML). In this article, we’ll dissect how reinforcement learning (RL) works. RL is one of the main components used in DeepMind’s AlphaGo program.

Reinforcement Learning Overview

Reinforcement learning is a subset of machine learning that has its roots in computer science techniques established in the mid-1950s. Although it has evolved significantly over the years, until recently reinforcement learning hadn’t received as much attention as other types of ML. To understand why RL is unique, it helps to know a bit more about the ML landscape in general.

Most machine learning methods used in business today are predictive in nature. That is, they attempt to understand complex patterns in data — patterns that humans can’t see — in order to predict future outcomes. The term “learning” in this type of machine learning refers to the fact that the more data the algorithm is fed, the better it is at identifying these invisible patterns, and the better it becomes at predicting future outcomes.
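As a minimal illustration of this fit-then-predict pattern, the following fits a straight line to invented historical data and uses it to predict a future outcome. Real systems use far richer models, but the principle is the same: learn the pattern from past data, then extrapolate.

```python
# Minimal sketch of predictive ML: fit a line to past data with ordinary
# least squares, then predict a future value. All data is invented.

def fit_line(xs, ys):
    # ordinary least squares for y = a*x + b
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# e.g. monthly ad spend (x, in $k) vs. units sold (y) -- hypothetical figures
spend = [1, 2, 3, 4, 5]
sales = [12, 19, 31, 42, 48]
a, b = fit_line(spend, sales)
print(round(a * 6 + b, 1))  # predicted sales at a spend of 6
```

More data generally tightens the fit, which is exactly the sense in which this kind of algorithm "learns."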

Machine learning’s victory in the game of Go is a major milestone in computer science. In the first article in this series, we’ll explain why, and start dissecting the algorithms that made it happen.

In March, an important milestone for machine learning was accomplished: a computer program called AlphaGo beat one of
the best Go players in the world—Lee Sedol—four times in a five-game series. At first blush, this win may not seem all
that significant. After all, machines have been using their growing computing power for years to beat humans at games,
most notably in 1997 when IBM’s Deep Blue beat world champ Garry Kasparov at chess. So why is the AlphaGo victory such a
big deal?

The answer is two-fold. First, Go is a much harder problem for computers to solve than other games due to the massive number of possible board configurations. Backgammon has 10^20 different board configurations, Chess has 10^43 and Go has a whopping 10^170 configurations. 10^170 is an insanely large number—too big for humans to truly comprehend. The best analogy used to describe 10^170 is that it is larger than the number of atoms in the universe. The magnitude of 10^170 matters because it implies that if machine learning (ML) can perform better than the best humans for a large problem like Go, then ML can solve a new set of real-world problems that are far more complex than previously thought possible. This means that the potential for machine learning to impact our day-to-day lives in the near future just got a lot bigger.
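The arithmetic is easy to check. Using the game figures above, plus the commonly cited rough estimate of 10^80 atoms in the observable universe (an assumption on my part, not a figure from this article):

```python
# Board-configuration counts for each game; the atom count is a rough,
# commonly cited estimate and an assumption added for comparison.
backgammon = 10**20
chess = 10**43
go = 10**170
atoms_in_observable_universe = 10**80

# Go's configuration count exceeds the estimated atom count
# by a factor of 10**90.
print(go // atoms_in_observable_universe == 10**90)
```

Even squaring the atom count of the universe would not come close to the number of Go positions, which is why brute-force search alone cannot crack the game.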

Proximity technology alone won’t transform retail—it must be used to address customer need in the digital age.

Proximity technology is a class of emerging technologies (which includes iBeacon, NFC, RFID and a host of others) that enable marketers to pinpoint the location of a customer at a particular point in time. Although proximity technology holds vast potential for marketers, it raises some legitimate concerns as well. Probably the most famous (or infamous) example of the dark side of proximity marketing was in the movie “Minority Report,” which depicted a world where people are under constant surveillance, allowing governments and businesses to track people continuously via retina scanners. In this futuristic landscape, digital billboards identify customers as they pass by and speak to them with highly personalized marketing messages: “Hello Mr. Yakimoto, welcome back to the Gap. How did those tank tops work out for you?”

Fortunately for us, ubiquitous, government-controlled retina scanners don’t exist in the real world. But, an even more powerful and pervasive tracking device does — the smartphone. When paired with proximity technology, the smartphone provides all the computational horsepower necessary to create sci-fi-inspired personalized marketing experiences, experiences that truly add value for the customer rather than creating a dystopian landscape. So if that’s the case, why hasn’t proximity technology transformed retail?

Being a great technologist requires very different skills than being a great technology leader. The key to making the transition is adopting the right mindset.

Technical managers are often promoted to their positions of leadership by rising through the ranks—more so than in most other disciplines. This is a practical move considering that business decisions today increasingly hinge on the nuanced details of underlying technology. Technology leaders need to assess technical options, align recommendations with business requirements and communicate these decisions to non-technical stakeholders. If technology managers don’t understand the technology at a detailed level, it’s difficult for them to make the right call.

The challenge is that being a great engineer doesn’t automatically translate into being a great leader. Leadership—technical or otherwise—is not something one is born with; it is a skill that is developed over a lifetime. Unfortunately, many companies don’t have management training programs in place to cultivate leaders as they move up the org chart. And for those that do, these trainings are typically generic and conceptual. General management training is an important first step, but it is insufficient by itself to prepare technology leaders for the tactical challenges that await them on a day-to-day basis in their new role.