
Judging from the plethora of recent media coverage of artificial intelligence (AI), not only are a lot of people working on it, but many of those on the leading edge are also concerned about what they don’t understand in the midst of their ominous new creation.

Amazon, DeepMind/Google, Facebook, IBM, and Microsoft teamed up to form the Partnership on AI.(1) Their charter talks about sharing information and being responsible. It includes all of the proper buzzwords for a technosocial contract.

“…transparency, security and privacy, values and ethics, collaboration between people and AI systems, interoperability of systems, and of the trustworthiness, reliability, containment, safety, and robustness of the technology.”(2)

They are not alone in this concern, as the EU(3) is also working on AI guidelines and a set of rules on robotics.

Some of what makes them all a bit nervous is the way AI learns: the complexity of neural networks and the inability to go back and see how an AI arrived at its conclusion. In other words, how do we know that its recommendation is the right one? Adding to that list is the discovery that AIs working together can create their own languages, languages we don’t speak or understand. In one case, at Facebook, researchers saw this happening and stopped it.

For me, it’s a little disconcerting that Facebook, a social media company, is one of the corporations leading the charge and the research into AI. That’s a broad topic for another blog, but their underlying objective is to market to you. That’s how they make their money.

To be fair, that is at least part of the motivation for Amazon, DeepMind/Google, IBM, and Microsoft as well. The better they know you, the more stuff they can sell you. Of course, there are also enormous benefits to medical research. Such advantages are almost always what these companies talk about first: AI will save your life, cure cancer, and prevent crime.

So, it is somewhat encouraging to see that these companies on the forefront of AI breakthroughs are also acutely aware of how AI could go terribly wrong. Hence we see wording from the Partnership on AI, like

The key word here is benevolent. But the clear objective is to keep the dialog positive, and

“Create and support opportunities for AI researchers and key stakeholders, including people in technology, law, policy, government, civil liberties, and the greater public, to communicate directly and openly with each other about relevant issues to AI and its influences on people and society.”(2)

I’m reading between the lines, but it seems like the issue of how AI will influence people and society is more of an obligatory statement intended to demonstrate compassionate concern. It’s coming from the people who see huge commercial benefit from the public being at ease with the coming onslaught of AI intrusion.

In their long list of goals, the “influences” on society don’t seem to be a priority. For example, should they discover that a particular AI has a detrimental effect on people, or that it makes their civil liberties less secure, would they stop? Probably not.

At the rate that these companies are racing toward AI superiority, the unintended consequences for our society are not a high priority. While these groups are making sure that AI does not decide to kill us, I wonder whether they are also looking at how AI will change us, and whether those changes are a good thing.

Just to keep you up to speed: in the race toward superintelligence and ubiquitous AI, everything is on schedule or ahead of schedule.

If you read this blog or you are paying attention at any level, then you know the fundamentals of AI. But for those of you who don’t, here are the basics. Artificial intelligence comes from processing and analyzing data. Big data. Programmers feed a gazillion linked-up computers (CPUs) with algorithms that can sort this data and make predictions. This process is what is at work when the Google search engine suggests what you are about to key into the search field. These are called predictive algorithms. If you want to look at pictures of cats, then someone has to task the CPUs with learning what a cat looks like as opposed to a hamster, then scour the Internet for pictures of cats and deliver them to your search. The process of teaching the machine what a cat looks like is called machine learning.

There is also an algorithm that watches your online behavior. That’s why, after checking out sunglasses online, you start to see a plethora of ads for sunglasses on just about every page you visit. Similar algorithms can predict where you will drive to today and when you are likely to return home. There is AI that knows your exercise habits and a ton of other physiological data about you, especially if you’re sharing your Fitbit or other wearable data with the Cloud. Insurance companies are extremely interested in this data, so that they can give discounts to “healthy” people and penalize the not so healthy. Someday they might also monitor other “behaviors” that they deem to be not in your best interests (or theirs). Someday, especially if we have a “single-payer” health care system (aka government healthcare), this data may be required before you are insured.

Before we go too far into the dark side (which is vast and deep), AI can also search all the cells in your body, identify which ones are dangerous, and target them for elimination. AI can analyze a whole host of things that humans could overlook. It can put together predictions that could save your life.
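To make the cat example concrete, here is a minimal sketch of that kind of machine learning in Python, using the scikit-learn library. The features and labels are invented for illustration; a real system would learn from millions of actual images.

```python
# A minimal sketch of "teaching the machine what a cat looks like."
# The two numeric features per animal are invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Pretend each image has been reduced to two made-up measurements,
# e.g., (ear pointiness, body length).
training_features = [
    [0.9, 0.8],  # cat
    [0.8, 0.9],  # cat
    [0.2, 0.3],  # hamster
    [0.1, 0.2],  # hamster
]
training_labels = ["cat", "cat", "hamster", "hamster"]

# "Machine learning" here just means fitting a model to labeled examples.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(training_features, training_labels)

# Given a new, unlabeled image, the model predicts from what it has seen.
print(model.predict([[0.85, 0.75]]))  # -> ['cat']
```

The more labeled examples the machine sees, the better its predictions get, which is exactly why these companies want your data.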

Google’s chips stacked up and ready to go. Photo from WIRED.

Now, with all that AI background behind us, this past week something called Google I/O went down. WIRED calls it Google’s annual State-of-the-Union address. There, Sundar Pichai unveiled something called TPU 2.0, or Cloud TPU. This is something of a breakthrough because, in the past, the AI process that I just described, even though lightning fast and almost transparent, required all those CPUs, a ton of space (server farms), and gobs of electricity. Now, Google (and others) are packing this processing into chips. These are proprietary to Google. According to WIRED,

“This new processor is a unique creation designed to both train and execute deep neural networks—machine learning systems behind the rapid evolution of everything from image and speech recognition to automated translation to robotics…

…says Chris Nicholson, the CEO and founder of a deep learning startup called Skymind. “Google is trying to do something better than Amazon—and I hope it really is better. That will mean the whole market will start moving faster.”

Funny, I was just thinking that the market is not moving fast enough. I can hardly wait until we have a Skymind.

“Along those lines, Google has already said that it will offer free access to researchers willing to share their research with the world at large. That’s good for the world’s AI researchers. And it’s good for Google.”

Is it good for us?

Note: This sets up another discussion (in 3 weeks) about a rather absurd opinion piece in WIRED about why we should have an AI as President. These things start out as absurd, but sometimes they don’t stay that way.

I want to make a T-shirt. On the front, it will say, “7 years is a long time.” On the back, it will say, “Pay attention!”

What am I talking about? I’ll start with some background. This semester, I am teaching a collaborative studio with designers from visual communications, interior design, and industrial design. Our topic is Humane Technologies, and we are examining the effects of an Augmented Reality (AR) system that could be ubiquitous in 7 years. The process began with an immersive scan of the available information and emerging advances in AR, VR, IoT, human augmentation (HA) and, of course, AI. In my opinion, these are just a few of the most transformative technologies currently attracting the heaviest investment across the globe. And where the money goes, the most rapid advancement follows.

A conversation starter.

One of the biggest challenges for the collaborative studio class (myself included) is to think seven years out. Although we read Kurzweil’s Law of Accelerating Returns, our natural tendency is to think linearly, not exponentially. One of my favorite Kurzweil illustrations is this:

“Exponentials are quite seductive because they start out sub-linear. We sequenced one ten-thousandth of the human genome in 1990 and two ten-thousandths in 1991. Halfway through the genome project, 7 ½ years into it, we had sequenced 1 percent. People said, “This is a failure. Seven years, 1 percent. It’s going to take 700 years, just like we said.” Seven years later it was done, because 1 percent is only seven doublings from 100 percent — and it had been doubling every year. We don’t think in these exponential terms. And that exponential growth has continued since the end of the genome project. These technologies are now thousands of times more powerful than they were 13 years ago, when the genome project was completed.”1
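Kurzweil’s arithmetic is easy to verify. Here is a quick Python sketch, just to make the doublings explicit:

```python
# If capability doubles every year, 1 percent is only seven doublings
# away from (actually past) 100 percent.
percent = 1.0
years = 0
while percent < 100:
    percent *= 2
    years += 1
print(years, percent)  # -> 7 128.0
```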

So when I hear a policymaker say, “We’re a long way from that,” I cringe. We’re not a long way away from that. The iPhone was introduced on June 29, 2007, not quite ten years ago. The ripple effects from that little technological marvel are hard to catalog. With the smartphone, we have transformed everything from social and behavioral issues to privacy and safety. As my students examine the next possible phase of our thirst for the latest and greatest, AR (and its potential for smartphone-like ubiquity), I want them to ask questions about the supporting systems, along with the social and ethical repercussions of these transformations. At the end of it all, I hope that they will walk away with an appreciation for paying attention to what we make and why. For example, why would we make a machine that would take away our job? Why would we build a superintelligence? More often than not, I fear the answer is because we can.

Our focus on the technologies mentioned above is just a start. There are more, and we shouldn’t forget precise genetic engineering techniques such as CRISPR/Cas9 gene editing, neuromorphic technologies such as microprocessors configured like brains, the digital genome that could be the key to disease eradication, machine learning, and robotics.

Though they may sound innocuous by themselves, they each have gigantic implications for disruptions to society. The wild card in all of these is how they converge with each other and the results that no one anticipated. One such mutation would be when autonomous weapons systems (AI + robotics + machine learning) converge with an aggregation of social media activity to predict, isolate and eliminate a viral uprising.

From recent articles and research by the Department of Defense, this is no longer theoretical; we are actively pursuing it. I’ll talk more about that next week. Until then, pay attention.

It’s easier to let the data decide for us. At least that is the idea behind the global digital design agency Huge. Aaron Shapiro is the CEO. He says, “The next big breakthrough in design and technology will be the creation of products, services, and experiences that eliminate the needless choices from our lives and make ones on our behalf, freeing us up for the ones we really care about: Anticipatory design.”

Buckminster Fuller wrote about Anticipatory Design Science, but this is not that. Trust me. Shapiro’s version is about allowing big data, by way of artificial intelligence and neural networks, to become so familiar with us and our preferences that it anticipates what we need to do next. In this vision, I don’t have to decide what to wear or eat, how to get to work, when to buy groceries or gasoline, which trousers go with my shoes, or when it’s time to buy new shoes. No decisions will be necessary. Interestingly, Shapiro sees this as a good thing. The idea comes from a flurry of activity around something called decision fatigue. What is that? In a nutshell, the theory says that our decision-making capacity is a reservoir that gradually gets depleted the more decisions we make, possibly as a result of body chemistry. After a long string of decisions, according to the theory, we are more likely to make a bad decision or none at all. Things like willpower disintegrate along with our decision-making.

Among the many articles on this topic in the last few months was one from FastCompany, which wrote:

“Anticipatory design is fundamentally different: decisions are made and executed on behalf of the user. The goal is not to help the user make a decision, but to create an ecosystem where a decision is never made—it happens automatically and without user input. The design goal becomes one where we eliminate as many steps as possible and find ways to use data, prior behaviors and business logic to have things happen automatically, or as close to automatic as we can get.”
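In code, the pattern FastCompany describes might look something like this hypothetical sketch. The function, rules, and data are all invented to show the shape of the idea, not any real product:

```python
# A hypothetical sketch of anticipatory design: no question is asked;
# prior behavior plus a simple business rule produce an automatic action.
# All names, rules, and data here are invented for illustration.

def anticipate_coffee_order(purchase_history):
    """Pick the user's usual order without asking them."""
    if not purchase_history:
        return None  # no prior behavior to anticipate from
    # Business logic: the most frequent past choice wins.
    return max(set(purchase_history), key=purchase_history.count)

history = ["latte", "espresso", "latte", "latte"]
order = anticipate_coffee_order(history)
if order is not None:
    print("Ordered a %s on the user's behalf; no decision required." % order)
```

Notice that the user never appears in the loop at all; the design goal is precisely that they don’t.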

Supposedly, all this automation frees “us up for the ones we really care about.”
My questions are: who decides which decisions are important? And once we are freed from making decisions, will we even know that we have missed one that we really care about?

Consider Google Now:

“Google Now is a digital assistant that not only responds to a user’s requests and questions, but predicts wants and needs based on search history. Pulling flight information from emails, meeting times from calendars and providing recommendations of where to eat and what to do based on past preferences and current location, the user simply has to open the app for their information to compile.”

It’s easy to forget that AI as we currently know it goes under the name of Facebook or Google or Apple or Amazon. We tend to think of AI as some ghostly future figure, a bank of servers, or an autonomous robot. It reminds me a bit of my previous post about Nick Bostrom and the development of superintelligence. Perhaps it is a bit like an episode of Person of Interest. As we think about designing systems that think for us and decide what is best for us, it might be a good idea to think about what it might be like to no longer think—as long as we still can.

Over the past weeks, I have begun to look at the design profession and design education in new ways. It is hard to argue with the idea that all design is future-based. Everything we design is destined for some point beyond now where the thing or space, the communication, or the service will exist. If it already existed, we wouldn’t need to design it. So design is all about the future. For most of the 20th century and the last 16 years, the lion’s share of our work as designers has focused on very near-term, very narrow solutions: a better tool, a more efficient space, a more useful interface, a more satisfying experience. In fact, the tighter the constraints and the narrower the problem statement, the greater the opportunity to apply design thinking to resolve it in an elegant and, hopefully, aesthetically or emotionally pleasing way. Such challenges are especially gratifying for seasoned professionals, who have developed an almost intuitive eye for framing these dilemmas so that novel and efficient solutions result. Hence, over the course of years or even decades, the designer amasses a sort of micro-scale, big-data assemblage of prior experiences that helps him or her reframe problems and construct—alone or with a team—satisfactory methodologies and practices to solve them.

Coincidentally, this process of gaining experience is exactly the idea behind machine learning and artificial intelligence. But since computers can amass knowledge by analyzing millions of experiences and judgments, it is theoretically possible that an artificial intelligence could gain this “intuitive eye” to a degree far surpassing the capacity of an individual human designer.

That is the idea behind a brash (and annoyingly self-conscious) article from the American Institute of Graphic Arts (AIGA) entitled Automation Threatens To Make Graphic Designers Obsolete. Titles like this are a hook, of course. Designers, deep down, assume that they can never be replaced. They believe this because, so far, artificial intelligence inherently lacks understanding, empathy, and emotional verve. We saw this earlier in 2016 when an AI chatbot went Nazi because a bunch of social media hooligans realized that Tay (the name of the Microsoft chatbot) was in learn mode. If you told “her” Nazis were cool, she believed you. It was proof, again, that junk in is junk out.

The AIGA author, Rob Peart, pointed to Autodesk’s Dreamcatcher software, which is capable of rapidly generating surprisingly creative, albeit roughly detailed, prototypes. Peart features a quote from an executive creative director at the techno-ad-agency Sapient Nitro: “A designer’s role will evolve to that of directing, selecting, and fine tuning, rather than making. The craft will be in having vision and skill in selecting initial machine-made concepts and pushing them further, rather than making from scratch. Designers will become conductors, rather than musicians.”

I like the way we always position new technology in the best possible light. “You’re not going to lose your job. Your job is just going to change.” But tell that to the people who used to write commercial music, for example. The Internet has become a vast clearinghouse for every possible genre of music, all available for a pittance of what it would cost to have a musician write, arrange, and produce a custom piece. It’s called stock. There are stock photographs, stock logos, stock book templates, stock music, stock house plans, and the list goes on. All of these have caused a significant disruption to old methods of commerce, and some would say that these stock versions of everything lack the kind of polish and ingenuity that used to distinguish artistic endeavors. The artists whose jobs they have obliterated refer to the work with a four-letter word.

Now, I confess I have used stock photography and stock music, but I have also used a lot of custom photography and custom music as well. Still, I can’t imagine crossing the line to a stock logo or stock publication design. Perish the thought! Why? Because they look like four-letter words: homogenized templates, and the world does not need more blah. It’s likely that we also introduced these new forms of stock commerce in the best possible light, as great democratizing innovations that would enable everyone to afford music, art, or design, letting anyone make, create, or borrow the things that professionals used to do.

As artificial intelligence becomes better at composing music, writing blogs and creating iterative designs (which it already does and will continue to improve), we should perhaps prepare for the day when we are no longer musicians or composers but rather simply listeners and watchers.

But let’s put that in the best possible light: Think of how much time we’ll have to think deep thoughts.

Bostrom is concerned about the day when machine intelligence exceeds human intelligence (the guess is somewhere between twenty and thirty years from now). He points out that, “Once there is super-intelligence, the fate of humanity may depend on what the super-intelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing [designing] than we are, and they’ll be doing so on digital timescales.”

His concern is legitimate. How do we control something that is smarter than we are? Anticipating AI will require more strenuous design thinking than that which produces the next viral game, app, or service. But these applications are where the lion’s share of the money is going. When it comes to keeping us from being at best irrelevant or at worst an impediment to AI, Bostrom is guardedly optimistic about how we can approach it. He thinks we could, “[…]create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of.”
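As a thought experiment, the “learn what we value” idea can be sketched in a few lines of toy Python. Everything here is invented to show the shape of the loop (human feedback in, learned preferences out); it is nothing like a real safety mechanism:

```python
# A toy sketch of value learning as Bostrom describes it: instead of
# hard-coded goals, the machine treats human approval as the signal to
# learn from. Entirely illustrative; not a real AI system.
approval_log = []  # (action, human_approved) pairs

def human_approves(action):
    # Stand-in for a real person judging the action.
    return action != "harm"

def predicted_approval(action):
    # Estimate approval from past feedback: fraction of times approved.
    votes = [ok for a, ok in approval_log if a == action]
    return sum(votes) / len(votes) if votes else 0.5  # unknown -> uncertain

# Gather some feedback...
for action in ["help", "harm", "help", "harm"]:
    approval_log.append((action, human_approves(action)))

# ...and the agent now prefers actions it predicts we would approve of.
print(max(["help", "harm"], key=predicted_approval))  # -> help
```

The hard part, of course, is everything this sketch waves away: what counts as an action, who gives the feedback, and what the machine does when it is smarter than the people grading it.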

At the crux of his argument and mine: “Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that if somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.”

Beyond machine learning (which has many facets), there are a wide-ranging set of technologies, from genetic engineering to drone surveillance, to next-generation robotics, and even VR, that could be racing forward without someone thinking about this “additional challenge.”

This could be an excellent opportunity for designers. But, to do that, we will have to broaden our scope to engage with science, engineering, and politics. More on that in future blogs.

If you’ve been a follower of this blog for a while, then you know that I am something of a privacy wonk. I’ve written about it before (about a dozen times), and I’ve even built a research project (that you can enact yourself) around it. A couple of things transpired this week to remind me that privacy is tenuous. (It could also be all the back episodes of Person of Interest that I’ve been watching lately, or David Staley’s post last April about the Future of Privacy.) First, I received an email from a friend this week alerting me to a little presumption that my software is spying on me.

I’m old enough to remember when you purchased software as a set of CDs (or even disks). You loaded it on your computer, and it seemed to last for years before you needed to upgrade. Let’s face it, most of us use only a small subset of the features in our favorite applications. I remember using Photoshop 5 for quite a while before upgrading, and the same with the rest of what is now called the Adobe Creative Suite. I still use the primary features of Photoshop 5, Illustrator 10, and InDesign (ver. whatever) 90% of the time. In my opinion, the add-ons to those apps have just slowed things down, and of course, the expense has skyrocketed. Gone are the days when you could upgrade your software every couple of years. Now you have to subscribe at a clip of about $300 a year for the Adobe Creative Suite. Apparently, the old business model was not profitable enough. But then came the Adobe Creative Cloud. (Sound of an angelic chorus.) Now it takes my laptop about 8 minutes to load into the cloud and boot up my software. Plus, it stores stuff. I don’t need it to store stuff for me. I have backup drives and archive software to do that.

That means it tracks your keystrokes. Possibly this only occurs when you are using that particular app, but uh-uh, no thanks. Switch that off. Next up is a more pernicious option; it’s called Machine Learning. Hmm. We all know what that is, and I’ve written about that before, too. Just do a search. Here, Adobe says,

“Adobe uses machine learning technologies, such as content analysis and pattern recognition, to improve our products and services. If you prefer that Adobe not analyze your files to improve our products and services, you can opt-out of machine learning at any time.”

Hey, Adobe, if you want to know how to improve your products and services, how about you ask me, or better yet, pay me to consult. A deeper dive into ‘machine learning’ tells me more. Here are a couple of quotes:

“Adobe uses machine learning technologies… For example, features such as Content-Aware Fill in Photoshop and facial recognition in Lightroom could be refined using machine learning.”

“For example, we may use pattern recognition on your photographs to identify all images of dogs and auto-tag them for you. If you select one of those photographs and indicate that it does not include a dog, we use that information to get better at identifying images of dogs.”
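The loop Adobe describes (auto-tag, user corrects, model improves) is simple enough to sketch. This is my rough illustration of the pattern, not Adobe’s actual code; every name in it is made up:

```python
# A rough sketch of the feedback loop in Adobe's description: the system
# auto-tags a photo, the user corrects a wrong tag, and the correction
# becomes new training data. All stand-ins; not Adobe's actual code.
labeled_photos = [("photo1.jpg", "dog"), ("photo2.jpg", "not_dog")]

def auto_tag(photo):
    # Stand-in for a trained pattern-recognition model.
    return "dog"

def correct_tag(photo, true_label):
    # The key move: a user's correction is just another labeled example.
    labeled_photos.append((photo, true_label))
    # Retraining on labeled_photos would happen here.

tag = auto_tag("photo3.jpg")
if tag == "dog":
    correct_tag("photo3.jpg", "not_dog")  # user says: not a dog
# The model "gets better at identifying images of dogs" on the next pass.
```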

Facial recognition? Nope. Help me find dog pictures? Thanks, but I think I can find them myself.

I know how this works. The more data the machine can feed on, the better it becomes at learning. I would just rather Adobe get their data by searching it out themselves. I’m sure they’ll be okay. (After all, there are a few million people who never look at their account settings.) Also, keep in mind, it’s their machine, not mine.

The last item on my privacy rant just validated my paranoia. I ran across this picture of Facebook billionaire Mark Zuckerberg hamming it up for his FB page.

In the background is his personal laptop. Upon closer inspection, we see that Zuck has a piece of duct tape covering his laptop cam and his dual microphones on the side. He knows.

Today I’m on my soapbox again, as an advocate of design thinking, whose toolbox includes design fiction.

In 2014, the Pew Research Center published a report on Digital Life in 2025. Therein, “The report covers experts’ views about advances in artificial intelligence (AI) and robotics, and their impact on jobs and employment.” Their nutshell conclusion was that:

“Experts envision automation and intelligent digital agents permeating vast areas of our work and personal lives by 2025 [9 years from now], but they are divided on whether these advances will displace more jobs than they create.”

On the upside, some of the “experts” believe that we will, as the brilliant humans that we are, invent new kinds of uniquely human work that we can’t replace with AI—a return to an artisanal society with more time for leisure and our loved ones. Some think we will be freed from the tedium of work and find ways to grow in some other “socially beneficial” pastime. Perhaps we will just be chillin’ with our robo-buddy.

On the downside, there are those who believe that not only blue-collar, robotic jobs will vanish, but also white-collar, thinking jobs, and that will leave a lot of people out of work, since there are only so many jobs for clerks at McDonald’s or greeters at Wal-Mart. They think some of this is the fault of education for not better preparing us for the future.

A few weeks ago I blogged about people who are thinking about addressing these concerns with something called Universal Basic Income (UBI), a $12,000-a-year gift to everyone in the world, since everyone will be out of work. I’m guessing (though it wasn’t explicitly stated) that this money would come from all the corporations that are raking in the bucks by employing the AIs, the robots, and the digital agents, but that don’t have anyone on the payroll anymore. The advocates of this idea did not address whether the executives at these companies, presumably still employed, will make more than $12,000, nor whether the advocates themselves were on the 12K list. I guess not. They also did not specify who would buy the services that these corporations were offering if we are all out of work. But I don’t want to repeat that rant here.

I’m not as optimistic about the unique capabilities of humankind to find new, uniquely human jobs in some new, utopian artisanal society. Music, art, and blogs are already being created by AI, by the way. I do agree, however, that we are not educating our future decision-makers to adjust adequately to whatever comes along. The process of innovative design thinking is a huge hedge against technological surprise, but few schools have ever entertained the notion, and some have never even heard of it. In some cases, it has been adopted, but as a bastardized hybrid serving business-as-usual competitive one-upmanship.

I do believe that design, in its newest and most innovative realizations, is the place for these anticipatory discussions about the future. What we need is thinking that encompasses a vast array of cross-disciplinary input, including philosophy and religion, because these issues are more than black and white; they are ethically and morally charged, and they are inseparable from our culture—the scaffolding that we as a society use to answer our most existential questions. There is a lot of work to do to survive ourselves.

Writing a weekly blog can be a daunting task especially amid teaching, research and, of course, the ongoing graphic novel. I can only imagine the challenge for those who do it daily. Thank goodness for friends who send me articles. This week the piece comes from The New York Times tech writer Farhad Manjoo. The article is entitled, “A Plan in Case Robots Take the Jobs: Give Everyone a Paycheck.” The topic follows nicely on the heels of last week’s blog about the inevitability of robot-companions. Unfortunately, both the author and the people behind this idea appear to be woefully out of touch with reality.

Here is the premise: After robots and AI have become ubiquitous and mundane, what will we do with ourselves? “How will society function after humanity has been made redundant? Technologists and economists have been grappling with this fear for decades, but in the last few years, one idea has gained widespread interest — including from some of the very technologists who are now building the bot-ruled future,” asks Manjoo.

The answer, strangely enough, seems to be coming from venture capitalists and millionaires like Albert Wenger, who is writing a book on the idea of U.B.I. — universal basic income — and Sam Altman, president of the tech incubator Y Combinator. Apparently, they think that $1,000 a month would be about right, “…about enough to cover housing, food, health care and other basic needs for many Americans.”

This equation, $12,000 per year, possibly works for the desperately poor in rural Mississippi. Perhaps it is intended for some 28-year-old citizen with no family or social life. Of course, there would be no money for that iPhone or cable service. Such a mythical person has a $300 rent-controlled apartment (utilities included), benefits covered by the government, doesn’t own a car, buy gas, or pay insurance, and then maybe doesn’t eat either. Though these millionaires clearly have no clue about what it costs the average American to eke out a living, they have identified some other fundamental questions:

“When you give everyone free money, what do people do with their time? Do they goof off, or do they try to pursue more meaningful pursuits? Do they become more entrepreneurial? How would U.B.I. affect economic inequality? How would it alter people’s psychology and mood? Do we, as a species, need to be employed to feel fulfilled, or is that merely a legacy of postindustrial capitalism?”

The Times article continues with, “Proponents say these questions will be answered by research, which in turn will prompt political change. For now, they argue the proposal is affordable if we alter tax and welfare policies to pay for it, and if we account for the ways technological progress in health care and energy will reduce the amount necessary to provide a basic cost of living.”

Often, the people who float ideas like this paint them as utopian, but I have a couple of additional questions. Why are venture capitalists interested in this notion? Will they also reduce their income to $1,000 per month? Seriously, that never happens. Instead, we see progressives in government and finance using an equation like this: “One thousand for you. One hundred thousand for me. One thousand for you. One hundred thousand for me…”

Fortunately, it is an unlikely scenario, because it would not move us toward equality but toward a permanent under-class forever dependent on those who have. Scary.

There is an impressive video out this week showing a rather adept robot going through the paces, so to speak, getting smacked down and then getting back up again. The year is 2016, and the technology is striking. The company is Boston Dynamics. One thing you know from reading this blog is that I subscribe to The Law of Accelerating Returns. So, as we watch this video, if we want to be hyper-critical, we can see that the bot still needs some shepherding, tentatively handles the bumpy terrain, and is slow to get up after a violent shove to the floor. On the other hand, if you are at all impressed with the achievement, then you know that the people working on this will only make it better, more agile, more svelte, and probably faster on the comeback. Let’s call this ingredient one.

Last year I devoted several blogs to the advancement of AI and to the corporations rushing to be the go-to source for the advanced version of predictive behavior and predictive decision-making. I have also discussed systems like Amazon Echo, which uses the advanced Alexa Voice Service. It’s something like Siri on steroids. Here’s part of Amazon’s pitch:

“• Hears you from across the room with far-field voice recognition, even while music is playing
• Answers questions, reads audiobooks and the news, reports traffic and weather, gives info on local businesses, provides sports scores and schedules, and more with Alexa, a cloud-based voice service
• Controls lights, switches, and thermostats with compatible WeMo, Philips Hue, Samsung SmartThings, Wink, Insteon, and ecobee smart home devices
• Always getting smarter and adding new features and skills…”

You’re supposed to place it in a central position in the home, where it is proficient at picking up your voice from the other room. Just don’t go too far. The problem with Echo is that it’s stationary. Call Echo ingredient two. What Echo needs is ingredient one.

Some of the biggest players in the AI race right now are Google, Amazon, Apple, Facebook, IBM, Elon Musk, and the military, though the government probably won’t give you a lot of details on that. All of these billion-dollar companies have a vested interest in machine learning and predictive algorithms, not the least of which is Google. Google already uses Voice Search (“OK, Google”) to enable type-free searching, and its AI software TensorFlow has been open-sourced. The better a machine can learn, the more reliable the Google autonomous vehicle will be. Any one of these folks could be ingredient three.

I’m going to try to close the loop on this, at least for today. Guess who bought Boston Dynamics back in 2013? That would be the corporation X, formerly known as Google X.