What if a store had no cashiers or cash registers? What if, at the entrance, there were turnstiles and you had to scan your smartphone upon entering the store? What if you didn’t need cash, credit cards or checks? That’s what the new Amazon Go stores are all about!

You check in at the turnstile with your Amazon Prime account, using your existing Amazon credentials to log in to the Amazon Go app on your smartphone. When you finish shopping and leave the store, a receipt for what you bought is sent to the app.

Cameras and sensors detect when you walk into the store, as well as when you remove something from a shelf. Overhead cameras work with weight sensors in the shelves to precisely track the items you pick up and take with you, and the system automatically adds each one to your Amazon account. When you leave the store, Amazon Go’s systems automatically debit your account for the items and send the receipt to the app. The system also knows when you pick up an item and put it back, ensuring that Amazon doesn’t charge you for something you picked up to, say, check the label, but didn’t actually purchase.
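The pick-up / put-back logic described above can be sketched as a simple "virtual cart." This is a hypothetical illustration of the idea, not Amazon's actual system; the item names and prices are made up:

```python
# Hypothetical sketch of a "Just Walk Out"-style virtual cart.
# Shelf sensor events add or remove items; checkout happens on exit.

class VirtualCart:
    def __init__(self):
        self.items = {}  # item name -> quantity

    def on_pick_up(self, item):
        # Overhead cameras + weight sensors detect a removal from the shelf.
        self.items[item] = self.items.get(item, 0) + 1

    def on_put_back(self, item):
        # Returning an item to the shelf removes it from the cart,
        # so the shopper isn't charged for something they only inspected.
        if self.items.get(item, 0) > 0:
            self.items[item] -= 1
            if self.items[item] == 0:
                del self.items[item]

    def on_exit(self, prices):
        # Leaving the store triggers the charge and the app receipt.
        return sum(prices[i] * qty for i, qty in self.items.items())

cart = VirtualCart()
cart.on_pick_up("soda")
cart.on_pick_up("chips")
cart.on_put_back("chips")   # checked the label, put it back
total = cart.on_exit({"soda": 1.99, "chips": 2.49})
print(round(total, 2))  # 1.99
```

The put-back path is the interesting part: the charge is computed only from what is still in the cart at the exit turnstile.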

Amazon Go’s vice president of technology, Dilip Kumar, believes they are pushing the boundaries of computer vision and machine learning to create an “effortless experience for customers.” Amazon has been developing the “Just Walk Out” technology for about five years. Amazon Go is part of a larger push by Amazon into physical retail, which also includes its Amazon Books stores, Amazon Fresh Pickup locations and Whole Foods Market. The idea behind the Amazon Go stores is that people are in a hurry and don’t want to wait in long lines. Hence, “Just Walk Out.”

And if you were wondering whether Amazon is linking Amazon Go with Amazon.com online, the answer is: not yet. So if you pull an item off a shelf and put it back because it wasn’t what you wanted, Amazon isn’t currently showing you an ad for a related product the next time you’re online. Could they do that in the future? Perhaps. But there is a reason you put it back on the shelf, and that reason may not be apparent, so it may not warrant remarketing the item to you. This Amazon Go store is at 2131 7th Ave. in Seattle and is open from 7 a.m. to 9 p.m., Monday through Friday.

Amazon needs a second headquarters. Why? Because it’s bursting at the seams in Seattle, where Jeff Bezos founded the company in 1994. Amazon has transformed the city, employing more than 40,000 people there. The downside is that this expansion has also contributed to Seattle’s soaring cost of living and its traffic woes.

And where might Amazon’s new headquarters be? According to the New York Times, the shortlist of cities included Atlanta; Austin, Texas; Boston; Chicago; Columbus, Ohio; Dallas; Denver; Indianapolis; Los Angeles; Miami; Montgomery County, Md.; Nashville; Newark; New York; Northern Virginia; Philadelphia; Pittsburgh; Raleigh, N.C.; Toronto; and Washington. Fourteen of the 20 are in the Eastern time zone. The list was narrowed to these 20 cities from the original 238 that sent proposals with unique offers and incentives. Amazon plans to invest nearly $5 billion and create up to 50,000 jobs at its newest hub.

This made many cities eager to be among those chosen. Amazon asked contenders for detailed information, including tax incentives available to offset the costs of building and operating its second headquarters, potential building sites, crime and traffic statistics, nearby recreational opportunities, proximity to an airport, public transit, diverse demographics, connectivity, and good local schools that could supply great potential employees.

The firm Sperling BestPlaces has a good track record with prior picks: 15 of its top 20 picks made Amazon’s shortlist, including the top 11. According to TechCrunch, the new Amazon headquarters will likely bring $100,000 to the winning market over the next 10 to 15 years. In addition, they predict the chosen city will also benefit from a trickle-down effect, as Amazon employees may start their own companies in the future, spurring more new growth for the city.

The competition has even entered pop culture: SNL did an Amazon HQ2-themed skit in which Alexa announced the hopefuls, with representatives from those cities coming bearing gifts for CEO Jeff Bezos.

Cities that were picked received a short note from Amazon, letting them know it wants to continue to learn more about each city’s community, talent and potential real estate options. Amazon said that as it reviewed all the proposals, it learned about many new communities across North America that it may consider in the future. The selection process, according to the New York Times, included economists, human resources managers and executives who oversee real estate.

Wondering whether you should invest in AI and machine learning? That’s a question the most innovative companies are considering. Why consider it? One good reason is that your competitors have already started. If that doesn’t motivate you, I hope you get started before you are put out of business. To make sure that doesn’t happen, here are a few things to consider as you start to explore an investment in machine learning.

It’s the Data, Stupid

Of course, as with any business initiative, you’ll want to create value, and this can be done using machine learning systems. But for those systems to provide value, companies need to begin by evaluating their organization’s data maturity and, more importantly, their readiness to accomplish their data-driven goals. Companies need to start with an audit of their data warehousing, data science capabilities, data governance and data hygiene. In addition, it’s important to look at the sources, uses, volume and veracity of all your data, meaning your first-, second- and third-party data.

Garbage in, Garbage Out

Why does your data need to be so clean? Machine learning is basically taking a computer and making it smart enough to learn from the data it’s fed; we are essentially programming machines to learn. The goal is that, after a certain point, the computer is able to make predictions about further data. How so? Let’s pretend you want your computer to predict the weather. To begin, you might feed the computer weather reports from every hour of every day over the past year. Because the temperature (z) depends on the day of the year (x) as well as the time of day (y), what you end up with is more than a two-dimensional curve. In fact, weather is random, so the equation the computer generates won’t just have three variables (x, y, z); it may also include higher powers. Depending on the number of factors in a prediction and the randomness of the outcome, the curve can get increasingly complex.
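A toy version of that curve-fitting idea can be sketched with a simple least-squares fit of temperature against hour of day. This is deliberately minimal and one-dimensional (real weather models use many more variables and nonlinear terms), and the hourly readings are made up:

```python
# Toy curve fitting: learn temperature (z) from hour of day (x) using
# closed-form least squares for a straight line z = a + b*x.
# Real weather prediction involves many variables and higher powers.

def fit_line(xs, zs):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_z = sum(zs) / n
    b = sum((x - mean_x) * (z - mean_z) for x, z in zip(xs, zs)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_z - b * mean_x
    return a, b

# Hypothetical hourly readings: temperature rising through the morning.
hours = [6, 8, 10, 12, 14]
temps = [55, 59, 63, 67, 71]

a, b = fit_line(hours, temps)
predict = lambda x: a + b * x
print(round(predict(9), 1))  # 61.0 - interpolated temperature at 9 a.m.
```

Feed the model noisy or wrong readings and the fitted coefficients (and every prediction made from them) shift accordingly, which is exactly the garbage-in, garbage-out point below.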

So back to the data… And you know the story about data: garbage in, garbage out. Hopefully, you can now see why good, clean data is so important to prediction. Since the computer uses the data you feed it to make future predictions, those predictions depend on that data, so you want the very best data possible. It also takes computers capable of handling large volumes of data, learning fast, and making fast decisions based on the learning they undergo.

AI and ML Are Not The Same

Artificial Intelligence (AI) and Machine Learning (ML) are often used interchangeably, but they are actually different. Artificial intelligence is the broader concept of machines being able to carry out tasks in a way that we would consider “smart.” Machine learning is an application of AI based on the idea that we should be able to give machines access to data and let them learn for themselves. Artificial intelligence devices (devices designed to act intelligently) are often classified into one of two groups: 1) applied and 2) general.

Applied AI is far more common: systems designed to intelligently trade stocks and shares, say, or drive an autonomous vehicle. Generalized AI consists of systems or devices that, in theory, can handle any task. These are less common, but this is where some of the most exciting advancements are happening today.

Deep Learning is A New Area of Machine Learning Research

It was introduced with the objective of moving machine learning closer to one of its original goals: artificial intelligence. Essentially, deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. Deep learning has worked its way into business language via artificial intelligence (AI), big data and analytics. It is an approach to AI that shows great promise for developing the autonomous, self-teaching systems that are revolutionizing many industries.

The Two Big Ideas: It May Be Possible To Teach Computers to Learn and The Internet is a Source of a Ton of Data

Arthur Samuel, in 1959, is credited with the big idea that it might be possible to teach computers to learn for themselves, in contrast to teaching computers everything they need to know about the world and how to carry out tasks. The second big idea was that the Internet, with its huge increase in the amount of digital information being generated and stored, could be used as a source of data for analysis. So scientists and engineers realized it would be far more efficient to code computers to think like human beings and then plug them into the Internet to give them access to all of the information in the world.

Neural Networks Are Algorithms

Neural networks are a set of algorithms, modeled loosely after the human brain, designed to recognize patterns. Their development has been key to teaching computers to think and understand the world the way we do, in addition to the innate advantages they hold over people, such as speed, accuracy and lack of bias. A neural network is a computer system that classifies information in a way similar to a human brain. It can be taught to recognize, for example, images, and classify them according to the elements they contain. It works on a system of probability: based on the data it’s fed, it is able to make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables “learning”: by sensing or being told whether its decisions are right or wrong, it can modify the approach it takes in the future.
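That feedback-loop “learning” can be sketched with a single artificial neuron (a perceptron) that nudges its weights whenever it is told a guess was wrong. This is a minimal illustration of the idea, not a full neural network; the training task (learning logical AND) is chosen just because it is tiny:

```python
# A single neuron that learns to classify from feedback.
# Each time its guess is wrong, the weights shift toward the right answer.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            guess = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - guess          # the "feedback" signal
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

# Learn logical AND from labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", pred)
```

Deep networks stack many such units and use more sophisticated feedback (backpropagation), but the loop of guess, compare, adjust is the same core idea.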

What Can Machine Learning Applications Do?

Machine learning applications can read text and work out whether the person who wrote it is making a complaint or offering congratulations. They can also listen to a piece of music, decide whether it is likely to make someone happy or sad, and find other pieces of music to match the mood. They can even compose their own music expressing the same themes, or music they know is likely to be appreciated by admirers of the original piece.
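The complaint-versus-congratulations example can be sketched with a tiny bag-of-words classifier. Real systems learn these word weights from labeled training data; here the word lists are hard-coded purely for illustration:

```python
# Minimal bag-of-words sentiment sketch. A real ML system would learn
# these word weights from labeled examples instead of hard-coding them.

COMPLAINT_WORDS = {"broken", "refund", "terrible", "disappointed", "waited"}
PRAISE_WORDS = {"great", "congratulations", "love", "excellent", "thanks"}

def classify(text):
    # Crude tokenization: strip basic punctuation, lowercase, split.
    cleaned = text.lower().replace(",", " ").replace(".", " ").replace("!", " ")
    words = set(cleaned.split())
    score = len(words & PRAISE_WORDS) - len(words & COMPLAINT_WORDS)
    if score > 0:
        return "praise"
    if score < 0:
        return "complaint"
    return "neutral"

print(classify("I waited two weeks and the item arrived broken. Refund!"))
print(classify("Thanks so much, the service was excellent!"))
```

The first message scores as a complaint and the second as praise. Swapping the hand-built word sets for weights learned from data is what turns this sketch into machine learning proper.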

These are all possibilities offered by systems based around ML and neural networks. The idea is that we should be able to communicate and interact with electronic devices and digital information, as naturally as we would with another human being. And another field of AI – Natural Language Processing (NLP) – has become an exciting area of innovation in recent years, and one which is heavily reliant on machine learning. (And yes, my initials just happen to be NLP, but that doesn’t really mean anything… just a happy coincidence…)

Where Is It Used?

Take Google, for instance: it is using machine learning in its voice and image recognition algorithms. Machine learning is also used by Netflix and Amazon to decide what you want to watch or buy next, and it is being used by researchers at MIT to predict the future. While machine learning is often described as a sub-discipline of AI, we might look at it as the state of the art of AI. Why? Perhaps because it is showing the greatest promise in providing tools that industry and society can use to drive change.

More on the practical uses of AI and ML in the future. For now, noodle on that!

Alexa Voice Service (AVS) is the software that allows owners to control compatible devices with their voice. Various reports estimated there were 700–1,100 Alexa-controllable products at CES, and the Amazon / Alexa logo was everywhere at the show.

Is the age of George Jetson here? In a smart home, everything from the HVAC to the TV to the window shades can be controlled. However, it’s not easy to really have a whole house of artificial intelligence (AI) controlled devices. Why? Many IoT-enabled devices don’t talk to devices made by other manufacturers. Oops! The IoT world awaits THE killer app, like Apple HomeKit or Google Home; we are still waiting for one of them to provide an all-encompassing, unified smart “home.”

The Amazon Echo is a hands-free speaker controlled with your voice. It connects to the Alexa Voice Service to provide information, news, play music, report on sports scores, deliver weather reports… The uses for AVS and Alexa are limited only by your imagination.

When something is connected to Alexa, the device instantly becomes pseudo-interoperable. Interoperability is the ability of different information technology systems and software applications to communicate, exchange data, and use the information that has been exchanged to do something. True interoperability, though, is not an evolutionarily stable strategy for most IoT manufacturers.

What CES showed us is that voice control seems to be the unifying app for IoT. And Alexa is the biggest name in voice control. Smart devices are generally controlled with apps. If there is an app to control the smart device, the app allows AVS to directly control the smart device. So you could say, “Alexa, tell Crestron I’d like to turn the lights on in the bedroom” (for your Crestron) or “Alexa, I would like to turn the heat on the downstairs thermostat to 70 degrees” (for your Iris Smart Home System). It’s easy to see the value of voice control in so many ordinary situations. What’s interesting about AVS is that even though Crestron and Iris have nothing to do with one another, you can control them both with your voice.

Alexa has finely tuned automatic speech recognition (ASR) and natural language understanding (NLU) engines that recognize and respond to voice requests instantly. Alexa keeps getting smarter with new capabilities and services through machine learning, regular API updates, feature launches, and custom skills from the Alexa Skills Kit (ASK). The AVS API is a programming-language-agnostic service that makes it easy to integrate Alexa into your devices, services and applications. And it’s free.
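A custom skill built with ASK receives JSON intent requests and returns JSON responses. The sketch below shows a simplified version of that request/response shape; the intent name, slot name and reply text are hypothetical, and the JSON here is abridged relative to the full schema in the Alexa Skills Kit documentation:

```python
# Sketch of an Alexa custom-skill handler. The request/response JSON
# is simplified; see the Alexa Skills Kit docs for the full schema.

def handle_request(event):
    request = event.get("request", {})
    if request.get("type") == "IntentRequest":
        intent = request.get("intent", {})
        if intent.get("name") == "TurnOnLightsIntent":  # hypothetical intent
            room = (intent.get("slots", {})
                          .get("Room", {})
                          .get("value", "the house"))
            speech = f"Turning on the lights in {room}."
        else:
            speech = "Sorry, I don't know how to do that yet."
    else:
        speech = "Welcome to the demo smart home skill."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Abridged example of what Alexa would POST to the skill endpoint:
event = {
    "request": {
        "type": "IntentRequest",
        "intent": {"name": "TurnOnLightsIntent",
                   "slots": {"Room": {"value": "the bedroom"}}},
    }
}
print(handle_request(event)["response"]["outputSpeech"]["text"])
```

The skill never talks to the lights itself; it returns speech and directives, and the device maker's backend carries out the action, which is how Alexa can front-end products from unrelated manufacturers.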

And you can create meaningful user experiences for an endless variety of use cases with Alexa Voice Service (AVS), Amazon’s intelligent voice recognition and natural language understanding service. AVS includes a full range of features, including smart home control, streaming music, news and timers, and can be added to any connected device that has a microphone and speaker.

But while Alexa has a head start, Google Home, an Echo competitor, is very likely to catch up quickly. Google Home, though, works with a completely different set of protocols and has different “wake” words: the command words that make it pay attention and carry out a request. It seems we may need to learn to speak to different systems in different ways; perhaps we’ll need lessons in Alexa speak and Google speak, as well as Siri and Cortana speak!

So is the age of George Jetson here yet? Sort of. What will be interesting is to see whether a start-up pulls all of this together so that regular humans like us don’t need to become AI experts to connect and use the technology.

Customer Experience (CX): IoT platforms are what make IoT come to life. The Constellation ShortList presents vendors in different categories of the market relevant to early adopters. Products included in this document meet the threshold criteria for this category as determined by Constellation Research. This Constellation ShortList of vendors for a market category is compiled through conversations with early adopter clients, independent analysis, and briefings with vendors and partners.

Developing products for the Internet of Things (IoT) is a complex endeavor. Because most organizations lack the resources and skills for custom app development, successful projects require building on an IoT platform or solution. IoT data solutions offer a place to start by combining many of the tools needed to manage a deployment, from device management to data prediction and insights, in one offering. For customer experience offerings, Constellation has identified a range of platform providers, including pure-play third-party platforms, hardware vendors, connectivity providers and system integrators. Having an end-to-end ecosystem strategy means a company doesn’t have to develop its own modules, network stack or cloud on-boarding.

Constellation applies the following criteria when deciding whether an offering qualifies as an IoT platform for CX. One key characteristic is the ability to connect data from every device, sensor, website and so on, built on a scalable event-processing engine designed to ingest and analyze billions of connected events.

Many companies approach the Internet of Things by starting with a device, making it connectable, and then going in search of a business use case. This is typical whenever a new area of technology emerges, but as a strategy it can be the long road to #IoT innovation. What businesses need to ask themselves is: “What business outcomes are we looking for, and what innovations could shift our business model?”

We heard from the @OracleIOT group several business scenarios:

Break / Fix It – which drives a predictive, prescriptive business process

Static Analytics – which drives the use of real-time, big-data analytics

Ownership – which drives as-a-service business models and

Central Service – which drives self-service as well as self-guided service.

What they are finding is that businesses often go through various phases when deploying IoT. It can start with devices or assets (trucks, phones, factories, etc.) that are connected to a platform, which in turn is connected to a network. For a business to actually make use of IoT, Phase 1 can be about connecting assets for situations like remote monitoring and asset tracking. Phase 2 can be about predictive analytics: designing predictive algorithms that make decisions proactive instead of reactive and improve products and processes. Phase 3 can be about service excellence. This is where the customer or employee experience is affected, and where IoT transforms business processes by blending into enterprise applications like ERP, SCM, customer support, CRM and HCM.
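Phase 1, connecting assets for remote monitoring, often boils down to streaming sensor readings to a platform and flagging out-of-range values. Here is a minimal sketch of that idea; the metrics, thresholds and device names are all hypothetical:

```python
# Sketch of Phase 1 remote monitoring: ingest readings from connected
# assets and raise alerts when values drift outside expected bounds.

# Hypothetical acceptable ranges per metric: (low, high).
THRESHOLDS = {"engine_temp_c": (40, 105), "tire_psi": (30, 38)}

def check_reading(device_id, metric, value):
    low, high = THRESHOLDS[metric]
    if not (low <= value <= high):
        return f"ALERT {device_id}: {metric}={value} outside [{low}, {high}]"
    return None  # reading is within normal bounds

readings = [
    ("truck-17", "engine_temp_c", 96),    # normal
    ("truck-17", "tire_psi", 27),         # under-inflated -> alert
    ("truck-22", "engine_temp_c", 111),   # overheating -> alert
]

alerts = [a for a in (check_reading(*r) for r in readings) if a]
for alert in alerts:
    print(alert)
```

Phase 2 would replace the fixed thresholds with predictive models learned from historical data, so problems are flagged before a limit is breached rather than after.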

Some of Oracle’s IoT applications are in the areas of:

Asset monitoring for the utilization, availability and data from connected sensors

Production monitoring and prognostics of the equipment on the manufacturing factory floor

Fleet management for businesses that have fleets of trucks, buses, and delivery and maintenance vehicles

Connected worker, for tracking employees in industries such as mining, engineering and construction.

Here are some examples of clients applying IOT to their businesses:

VINCI is building next-generation, sensor-driven building automation to reduce the number of “truck rolls,” which has a huge ROI; they are doing this with the integration of Oracle Service Cloud and SAP. Lochbridge is creating connected fleets, where IoT and big data are used for predictive maintenance and for monitoring fleet / cargo to reduce response time. GEMU is using real-time filtering and processing of valve events and proactive parts replacement, integrating CRM, IoT and a service ticketing system. And SoftBank is using IoT to deliver mobility-as-a-service, monitoring vehicle location for billing and geo-fencing.

As the world of IoT expands and more and more companies see the value in connecting enterprise applications with devices and networks, we will see the transformation of worker, employee and customer experiences. When those experiences are transformed, the real value and ROI of the connected enterprise will come to life.

The need to improve customer experience is not a myth, and here’s why. Noted psychology researcher and writer Mihaly Csíkszentmihályi observed in 1998 that people who perform seamless, sequence-based activities on a regular basis are happier than people who don’t[i]. He coined the term “flow” to describe this state. With the advent of CoIT, we’ve imposed a new set of demands on our customers’ brains. But instead of offering a series of smoothly sequential flows, websites and mobile applications are characterized by lag, downtime and restarts, and customers’ flow-oriented brains simply aren’t wired to deal with poor digital interactions. Science has shown the business need for great customer experiences is a fact, not a myth.

And it can be tempting to label customers picky and impatient. But there’s a wealth of research on what happens to customers at a neurological level when they are forced to deal with slow or interrupted processes.[i] Their impatience is an indelible part of their human circuitry. Brands must recognize the hardwiring of customers’ brains and their neurological desire for flow and ease of use as part of the cost of doing business. Companies must come to terms with the economic imperative of the customer experience, or their poor focus on it will drive customers to their competitors.

Fast websites and mobile experiences create happier users, and those happier users are more likely to follow calls to action to register, download, subscribe, request information or purchase. Unhappy users, which can include those who experience a mere two-second slowdown in how a web page loads, make almost 2 percent fewer queries, click 3.75 percent less often, and report being significantly less satisfied with their overall experience[i]. Worse, they tell their friends about the negative experience. With the word of mouth that social networks provide, brands need to take seriously the need to differentiate their customer experience or be left in the dust.

Response times have been consistent for 45 years. Based on neuroscience, the facts about human perception and response times have been consistent for more than forty-five years[i]. These numbers are hard-wired in human brains, and they are consistent regardless of the type of device, application or connection a customer is using. That consistency is key to where customer expectations come from, and thus important to capitalize on. What’s critical is determining how a brand’s web and mobile sites compare to customer expectations, as well as benchmarking against CoIT applications, competitors, and even non-competitors that deliver a great customer experience.

Response time has not changed much. Robert B. Miller’s 1968 paper, “Response Time in Man-Computer Conversational Transactions”[ii], found that people have always been most comfortable, most efficient and most productive with response times of less than two seconds. What has changed slightly: since 2006, the average online shopper has expected pages to load in four seconds or less. Today, forty-nine percent expect page load times of two seconds or less, and eighteen percent expect pages to load instantly[iii]. And while optimizing every aspect of a brand’s digital assets to meet an “instant” expectation is a laudable goal, organizations simply may not have budgeted the resources to achieve it. Digital experience maturity, however, lets teams identify the interaction points in the digital customer journey most sensitive to improvement, so they can maximize the return on performance investment and include this in budget and resource planning. Here are the results of the Walmart study on page load times and conversion rates:

Businesses can keep arguing that customer experience doesn’t matter and is a touchy-feely construct, or they can accept that it directly affects the bottom line and start designing and measuring customer experience performance management. For more on this, see my report here.

Obviously, no one plans on implementing a project that will fail. However, statistics show that over the past 20 years a very large percentage of technology projects have failed to deliver the business outcomes they were expected to meet. The real issue is that leading change (implementing new technology, whether it be CX, transitioning to the cloud, IoT, etc.) is different from leading in general. This point is often overlooked, and some leaders don’t realize how big the difference is between leading change and their everyday leadership job.

The reasons projects often fail, and why customer experience projects need to be orchestrated with organizational change management, include:

Projects ran over budget, were late, or never completed.

Projects were attempted more than once because initial efforts failed.

Only a small part of the organization adopted the new processes or systems.

When the project went live, critical business systems halted, causing loss of revenue, increased costs, dissatisfied customers and frustrated employees.

Parts of the business (or possibly the entire organization) eventually reverted to the old way of doing things.

The return on investment (ROI) and/or stated benefits were never realized.

The project cost the business more money than it saved or generated.

Our research shows there are seven practices change leaders can use to be more successful.

Practice #1 – Understand the Business Case for Change

Practice #2 – Start with the Executive Team: Move It from Involved to Engaged

Practice #3 – Engage All Leaders and Prepare Them for the Journey

Practice #4 – Build a Broad Understanding of the Change Process

Practice #5 – Evaluate and Tailor the Change Effort

Practice #6 – Develop Adaptive Leadership Skills in Change Leaders

Practice #7 – Create Change Leadership Plans

Don’t become one of the statistics of failed projects. There are best practices that work.

There’s been a lot of talk around self-driving cars, and Local Motors, a leading vehicle technology integrator and creator of the world’s first 3D-printed cars, has introduced the first self-driving vehicle to integrate the advanced cognitive computing capabilities of IBM Watson. Local Motors is a technology company that designs, builds and sells vehicles. The Local Motors platform combines global co-creation with local micro-manufacturing to bring hardware innovations quickly to market. Its facility in National Harbor, Maryland is a public place where co-creation is the focus for advancing vehicle technologies.

What can you see if you visit the Maryland facility? On display are 3D-printed cars and a large-scale 3D printer. Visitors can have an interactive, co-creative experience that showcases the future of 3D printing, sustainability and autonomous technology, and can get involved with Local Motors engineers and the company’s co-creation community.

The automobile has a name: “Olli.” At its debut, it carried Local Motors CEO and co-founder John B. Rogers, Jr. and vehicle designer Edgar Sarmiento from the Local Motors co-creation community into the new facility. While Olli is already in action in Washington, DC, vehicles will soon be on the road in Miami-Dade County and Las Vegas. The vehicle can carry up to 12 people.

What’s the big innovation? The electric vehicle is equipped with some of the world’s most advanced vehicle technology, including IBM Watson Internet of Things (IoT) for Automotive. Passengers can interact conversationally with Olli, asking about:

Destinations, for example, “Olli, can you take me downtown?”

Specific vehicle functions like: “How does this feature work?”

Time related questions like, “Are we there yet?”

In addition, Olli can make recommendations on local restaurants or historical sites. Olli is essentially designed to deliver interesting, entertaining, intuitive and interactive experiences for riders. How is IBM Watson being used to improve the passenger experience? It enables natural interaction with the vehicle, using the cloud-based cognitive computing capability of IBM Watson IoT to analyze and learn from the high volumes of transportation data produced by more than 30 sensors embedded throughout the vehicle. As the vehicle gets used, Local Motors plans to install more sensors and adjust them continuously as passenger needs and local preferences are identified.

The platform leverages four Watson developer APIs:

Speech to Text

Natural Language Classifier

Entity Extraction and

Text to Speech.
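Conceptually, those four APIs chain into a voice pipeline: audio in, intent and entities extracted, audio back. The sketch below mocks each stage with a placeholder function; these are not the real Watson SDK calls, and the intent logic is invented purely to show how the stages fit together:

```python
# Mocked voice pipeline mirroring the four Watson APIs used by Olli.
# Each stage is a stand-in; real code would call the Watson services.

def speech_to_text(audio):
    return audio["transcript"]            # pretend ASR output

def classify_intent(text):
    # Stand-in for the Natural Language Classifier.
    return "navigation" if "take me" in text.lower() else "smalltalk"

def extract_entities(text):
    # Stand-in for entity extraction: pull out the destination word.
    return {"destination": text.rsplit(" ", 1)[-1].strip("?")}

def text_to_speech(text):
    return {"audio_for": text}            # pretend TTS output

def handle_utterance(audio):
    text = speech_to_text(audio)
    if classify_intent(text) == "navigation":
        dest = extract_entities(text)["destination"]
        reply = f"Heading to {dest} now."
    else:
        reply = "Happy to chat while we ride!"
    return text_to_speech(reply)

out = handle_utterance({"transcript": "Olli, can you take me downtown?"})
print(out["audio_for"])  # Heading to downtown now.
```

The point of the sketch is the shape of the pipeline: each Watson service handles one stage, and the vehicle's application logic sits between the classifier's intent and the spoken reply.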

Harriet Green, General Manager, IBM Watson Internet of Things, Commerce & Education commented that, “Cognitive computing provides incredible opportunities to create unparalleled, customized experiences for customers, taking advantage of the massive amounts of streaming data from all devices connected to the Internet of Things, including an automobile’s myriad sensors and systems. IBM is excited to work with Local Motors to infuse IBM Watson IoT cognitive computing capabilities into Olli, exploring the art of what’s possible in a world of self-driving vehicles and providing a unique, personalized experience for every passenger while helping to revolutionize the future of transportation for years to come.”

Having worked in the automotive industry in Detroit, I find it exciting to see new developments like this, and to see cognitive computing applied in a real-world situation. Using it to power self-driving vehicles is probably the best way to advance not only self-driving cars but also the ability to deploy cognitive computing in real-world applications. This looks to be the start of something very interesting that other brands in this space should take note of. Competition in the automotive industry is rapidly changing, from the provision of cars-as-a-service (with GM investing $500M in Lyft) to cars that drive themselves. The future is here.

If you are wondering what I have been up to lately, I thought I would put all the research I have published into one place. Here’s a list of Dr. Natalie’s completed and published research, plus soon-to-be-published content! It ranges from IoT, Analytics, Big Data, Customer Experience, Leadership, Organizational Change Management, Storytelling, Collaboration, Digital Transformation, Social Selling, Social Media, the Cloud, Marketing, Sales, SaaS, IaaS, PaaS, DaaS, AI, Machine Learning, Innovation, Social Networks, Social Media Monitoring, Mobile, Customer Service and Customer Success Management… and a few things in-between…

IOT (The Internet of Things), Innovation, AI, Machine Learning, Analytics and the Cloud

Get A Free Dr. Natalie Report on Social Customer Experience

Dr. Natalie is a business strategist and futurist. She has spent her career looking at how businesses interact with their customers and employees, and she provides companies with the best ways to create environments that foster loyalty, motivation and innovation.