Microsoft Researchers Predict What’s Coming in AI for the Next Decade

Seventeen Microsoft researchers—all of whom happen to be women this year—have made their calls for what will be hot in the burgeoning realm of artificial intelligence (AI) in the next decade.

Ripping a page out of IBM’s “5 in 5” playbook, Microsoft likes to use these annual predictions to showcase the work of its hotshot research brain trust. Some of the picks are already familiar. One is about how advances in deep learning—which aims to endow computers with human-like thought processes—will make computers and other smart devices more intuitive and easier to use. This is something we’ve all heard before, but the work is far from done.

For example, “the search box” most of us use on the Google or Bing search engines will disappear, enabling people to search for things with spoken commands, images, or video, according to Susan Dumais, distinguished scientist and deputy managing director of Microsoft’s Redmond, Wash., research lab. That’s already happening with products like Google Now, Apple’s Siri, and Microsoft’s Cortana—but there’s more to do.

Dumais predicts the box will give way to a new kind of search. She explains:

That is more ubiquitous, embedded and contextually sensitive. We are seeing the beginnings of this transformation with spoken queries, especially in mobile and smart home settings. This trend will accelerate with the ability to issue queries consisting of sound, images or video, and with the use of context to proactively retrieve information related to the current location, content, entities or activities without explicit queries.

Virtual reality will become more ubiquitous as researchers develop better “body tracking” capabilities, says Mar Gonzalez Franco, a researcher at the Redmond lab. That will enable rich, multi-sensory experiences so convincing they could actually cause subjects to hallucinate. That may not sound appealing, but the capability could help people with disabilities “retrain” their perceptual systems, she notes.

There’s but one mention on this list of the need for ethical or moral guidelines for the use of AI. That comes from Microsoft distinguished scientist Jennifer Chayes.

Chayes, who is also managing director of Microsoft’s New England and New York City research labs, thinks AI can be used to police the ethical application of AI.

Our lives are being enhanced tremendously by artificial intelligence and machine learning algorithms. However, current algorithms often reproduce the discrimination and unfairness in our data and, moreover, are subject to manipulation by the input of misleading data. One of the great algorithmic advances of the next decade will be the development of algorithms which are fair, accountable and much more robust to manipulation.

Microsoft experienced the misuse of AI’s power firsthand earlier this year when its experimental Tay chatbot offended many Internet users with racist and sexist slurs that the program learned from others. Microsoft chose this year to spotlight its female researchers to underline the opportunity for women in technology.


Women and girls comprise 50% of the world’s population but account for less than 20% of computer science graduates, according to the Organization for Economic Cooperation and Development. And because the U.S. Bureau of Labor Statistics expects fewer than 400,000 qualified applicants for 1.4 million computing jobs in 2020, there is great opportunity for women in technology going forward.

Google Artificial Intelligence Whiz Describes Our Sci-Fi Future

The next time you enter a query into Google’s search engine or consult the company’s map service for directions to a movie theater, remember that a big brain is working behind the scenes to provide relevant search results and make sure you don’t get lost while driving.

Well, not a real brain per se, but the Google Brain research team. As Fortune’s Roger Parloff wrote, the Google Brain team has created over 1,000 so-called deep learning projects that have supercharged many of Google’s products over the past few years, including YouTube, translation, and photos. With deep learning, researchers can feed huge amounts of data into software systems called neural nets, which learn to recognize patterns in that information faster than humans can.

In an interview with Fortune, one of Google Brain’s co-founders and leaders, Jeff Dean, talks about cutting-edge AI research, the challenges involved, and how Google uses AI in its products. The following interview, conducted against the backdrop of the 50th annual Turing Award, an honor in computer science from the Association for Computing Machinery, has been edited for length and clarity.

What are some challenges researchers face with pushing the field of artificial intelligence?

A lot of human learning comes from unsupervised learning where you’re just sort of observing the world around you and understanding how things behave. That’s a very active area of machine-learning research, but it’s not a solved problem to the extent that supervised learning is.

So unsupervised learning refers to how one learns from observation and perception, and if computers could observe and perceive on their own that could help solve more complex problems?

Right, human vision is trained mostly by unsupervised learning. You’re a small child and you observe the world, but occasionally you get a supervised signal, where someone says, “That’s a giraffe” or “That’s a car.” And you build your natural mental model of the world from that small amount of supervised data.

We need to use more of a combination of supervised and unsupervised learning. We’re not really there yet, in terms of how most of our machine learning systems work.
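To make the distinction concrete, here is a minimal, hypothetical sketch of unsupervised learning in Python: a plain k-means clustering that groups unlabeled points without ever being told what the groups are. The data and starting centroids are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two unlabeled "blobs" of points -- no one tells the algorithm which is which.
a = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
b = rng.normal(loc=5.0, scale=0.5, size=(50, 2))
data = np.vstack([a, b])

# Plain k-means: alternately assign points to the nearest centroid,
# then move each centroid to the mean of its assigned points.
centroids = np.array([[1.0, 1.0], [4.0, 4.0]])  # deliberately rough guesses
for _ in range(10):
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    for k in range(2):
        centroids[k] = data[labels == k].mean(axis=0)

print(centroids.round(1))  # one centroid lands near (0, 0), the other near (5, 5)
```

The supervised analogue would start from labeled points; here the structure is discovered from the observations alone, which is the sense of "learning by observing the world" Dean describes.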

Can you explain the AI technique called reinforcement learning?

The idea behind reinforcement learning is that you don’t necessarily know which actions to take, so you explore the sequence of actions you should take by trying one that you think is a good idea and then observing how the world reacts. Like in a board game, where you can react to how your opponent plays. Eventually, after a whole sequence of these actions, you get some sort of reward signal.

Reinforcement learning is the idea of being able to assign credit or blame to all the actions you took along the way while you were getting that reward signal. It’s really effective in some domains today.

I think where reinforcement learning has some challenges is when the set of actions you may take is incredibly broad and large. A human operating in the real world might take an incredibly broad set of actions at any given moment. Whereas in a board game there’s a limited set of moves you can take, the rules of the game constrain things a bit, and the reward signal is also much clearer. You either won or lost.

If my goal was to make a cup of coffee or something, there’s a whole bunch of actions I might want to take, and the reward signal is a little less clear.
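Dean’s description of assigning credit along a sequence of actions is the heart of textbook Q-learning. As a hedged illustration—not Google’s code—here is a tiny Python sketch: a made-up five-state corridor where the reward only arrives at the end, and the update rule propagates credit back to the earlier moves.

```python
import numpy as np

# A 5-state corridor: start at state 0, reward of +1 only on reaching state 4.
# Actions: 0 = left, 1 = right. The reward arrives at the *end* of a sequence,
# and Q-learning propagates credit back to the earlier moves.
n_states, n_actions, goal = 5, 2, 4
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(1)

for _ in range(500):  # episodes
    s = 0
    while s != goal:
        # epsilon-greedy: mostly take the best-looking action, sometimes explore
        a = rng.integers(n_actions) if rng.random() < 0.2 else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s2 == goal else 0.0
        # Bellman update: credit flows from the rewarding step to its predecessors.
        Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1)[:goal])  # the learned policy: move right in every state
```

The discount factor of 0.9 is what spreads "blame or credit" backward: the move just before the reward is valued most, earlier moves progressively less.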

But you can still break the steps down, right? For instance, while making a cup of coffee, you could learn that you didn’t fully grind the beans before they were brewed—and that it resulted in bad coffee.

Right. I think one of the things about reinforcement learning is that it tends to require exploration. So using it in the context of physical systems is somewhat hard. We are starting to try to use it in robotics. When a robot has to actually take some action, it’s limited to the number of sets of actions it can take in a given day. Whereas in computer simulations, it’s much easier to use a lot of computers and get a million examples.

Is Google incorporating reinforcement learning in the core search product?

The main place we’ve applied reinforcement learning in our core products is through a collaboration between DeepMind [the AI startup Google bought in 2014] and our data center operations folks. They used reinforcement learning to set the air conditioning knobs within the data center, achieving the same safe cooling and operating conditions with much lower power usage. They were able to explore which knob settings make sense and how the system reacted when you turned something this way or that way.

Through reinforcement learning they were able to discover knob settings for these 18 or however many knobs that weren’t considered by the people doing that task. People who knew about the system were like, “Oh, that’s a weird setting,” but then it turned out that it worked pretty well.

What makes a task more appropriate for incorporating reinforcement learning?

The data center scenario works well because there are not that many different actions you can take at a time. There’s like 18 knobs, you turn a knob up or down, and you’re there. The outcome is pretty measurable. You have a reward for better power usage assuming you’re operating within the appropriate margins of acceptable temperatures. From that perspective, it’s almost an ideal reinforcement learning problem.

An example of a messier reinforcement learning problem is perhaps trying to use it to decide what search results to show. There’s a much broader set of search results I can show in response to different queries, and the reward signal is a little noisy. If a user looks at a search result and likes it or doesn’t like it, that’s not that obvious.
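The contrast Dean draws can be sketched with a toy version of the data center case. The simulator below is entirely invented (the real system, per Dean, used 18 knobs and far richer models); it just shows why a small, measurable action space with a clear reward is so friendly to this kind of learning.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-in for one data center "knob" with four settings (0-3).
# Turning it up saves cooling power but warms the servers; setting 3 would
# push the temperature outside the safe margin.
def reward(setting):
    temp = 20.0 + 4.0 * setting
    if temp > 30.0:                  # outside the safe operating margin
        return -100.0                # heavy penalty: safety comes first
    power = 100.0 - 5.0 * setting + rng.normal(0, 1.0)
    return -power                    # reward = negative power usage

values = np.zeros(4)                 # running value estimate per setting
counts = np.zeros(4)
for _ in range(300):
    # epsilon-greedy: mostly pick the best-looking setting, sometimes explore
    s = rng.integers(4) if rng.random() < 0.1 else int(values.argmax())
    counts[s] += 1
    values[s] += (reward(s) - values[s]) / counts[s]   # incremental mean

print(int(values.argmax()))  # settles on setting 2: safe, and lowest power
```

With only four actions and an immediately measurable reward, a few hundred trials suffice; a search-results version of this loop would face vastly more actions and a far noisier reward, which is Dean’s point.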

How would you even measure if they didn’t like a certain result?

Right. It’s a bit tricky. I think that’s an example of where reinforcement learning is maybe not quite mature enough to really operate in these incredibly unconstrained environments where the reward signals are less crisp.

What are some of the biggest challenges in applying what you’ve learned doing research to actual products people use each day?

One of the things is that a lot of machine learning solutions and research into those solutions can be reused in different domains. For example, we collaborated with our Map team on some research. They wanted to be able to read all the business names and signs that appeared in street images to understand the world better, and know if something’s a pizzeria or whatever.

It turns out that to actually find text in these images, you can train a machine learning model by giving it example data in which people have drawn circles or boxes around the text. You can use that to train a model to detect which pixels in the image contain text.

That turns out to be a generally useful capability, and a different part of the Map team was able to reuse it for a satellite-imagery analysis task: finding rooftops in the U.S. and around the world to estimate the locations of rooftop solar panel installations.


And then we’ve found that the same kind of model can help us on preliminary work on medical imaging problems. Now you have medical images and you’re trying to find interesting parts of those images that are clinically relevant.
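The box-based supervision Dean describes can be sketched in a few lines. This is a hypothetical illustration, not Google’s pipeline: it simply turns human-drawn boxes into the per-pixel labels a text detector would train on.

```python
import numpy as np

# Hypothetical annotations: boxes people drew around text in a 10x12 image,
# given as (row_start, row_stop, col_start, col_stop).
boxes = [(1, 3, 2, 7), (6, 9, 4, 11)]

# Turn the box annotations into per-pixel training labels:
# 1 where a pixel falls inside any text box, 0 elsewhere.
mask = np.zeros((10, 12), dtype=np.uint8)
for r0, r1, c0, c1 in boxes:
    mask[r0:r1, c0:c1] = 1

print(int(mask.sum()))  # number of pixels labeled "text" (here 31)
```

A pixel-wise classifier trained against such masks is what makes the capability reusable: swap text boxes for rooftop outlines or clinically relevant regions and the same recipe applies.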

The initiative was first launched back in May for movies and recipes, with search results displaying big, prominent images and a horizontally navigable carousel that grouped some results to make it easier for users. Now, people searching for “the best restaurants in New Orleans” or “online coding courses” will see a similar layout, though providers of the aforementioned services must build the rich cards for their respective sites.

At launch, companies such as TripAdvisor, Time Out, Thrillist, Udacity, and Coursera are on board, though it’s worth noting here that the feature is only open to U.S.-based sites for the moment.

Google has been pushing to encourage companies to optimize their websites for mobile devices through a number of programs. Earlier this year, it launched the open-source AMP project, which allows news articles to display more quickly on smartphones when clicked through Google Search. While AMP HTML isn’t required for sites wishing to use rich cards, Google naturally recommends it.

“Users consuming AMP’d content will be able to swipe near instantly from restaurant to restaurant or from recipe to recipe within your site,” explained Stacie Chan of Google’s global product partnerships team in a blog post.

Recently, Google revealed that it would eventually use mobile versions of websites to rank search results, while it also plans to punish mobile sites that use interstitials — that is, ads that show up while a web page is loading.

Google revealed that it’s “actively experimenting” as it looks to expand rich cards to more categories around the world.

Google Artificial Intelligence Guru Says A.I. Won’t Kill Jobs

Computers can more easily recognize cats in photos and translate text because of advances in artificial intelligence. But we’re still decades away from that technology replacing humans at work on a large scale, according to a top Google artificial intelligence executive.

Mustafa Suleyman, co-founder of the artificial intelligence startup DeepMind, which was later acquired by Google, said on Monday that he has seen no evidence that advances in A.I. technologies are impacting the workforce. Nevertheless, it’s something that people “should definitely pay attention to” as the technologies continue to mature.

Suleyman predicted that humanity is still “many decades away from encountering that sort of labor replacement at scale.” Instead, the technology is best used to help humans with work-related tasks rather than to replace them outright.

Suleyman, speaking at an O’Reilly Media technology conference in San Francisco, explained that he co-founded DeepMind in 2010 after working for organizations like the United Nations on climate change issues. Policy researchers were “being overwhelmed by the amount of information we have to navigate the tough problems of climate change,” he said.

DeepMind was created to help solve problems that involve too much data for humans to coherently grasp, he explained. Part of how it helps is through an artificial intelligence technique called deep learning.

Deep learning is a branch of artificial intelligence technologies that involves feeding advanced software systems called neural networks enormous quantities of data. The software can then learn on its own to recognize patterns in the information.
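As a hedged illustration of the idea—not of DeepMind’s systems—here is a minimal two-layer neural network that learns the XOR pattern from four labeled examples, using nothing but NumPy and gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a pattern no single straight-line rule can capture, but a small
# two-layer network learns it from just four labeled examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # hidden layer, 8 units
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # output layer

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                     # hidden features
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))         # predicted probability
    grad_p = p - y                               # cross-entropy gradient
    grad_h = grad_p @ W2.T * (1 - h ** 2)        # backpropagate through tanh
    W2 -= 0.1 * h.T @ grad_p
    b2 -= 0.1 * grad_p.sum(axis=0)
    W1 -= 0.1 * X.T @ grad_h
    b1 -= 0.1 * grad_h.sum(axis=0)

print((p > 0.5).astype(int).ravel())  # learned outputs for the four inputs
```

Production deep learning differs mainly in scale—many more layers, millions of parameters, enormous datasets—but the mechanism of feeding data forward and pushing error gradients back is the same.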

Now at Google, Suleyman explained how the search giant is incorporating deep learning into some of its own technologies. Deep learning, for example, is used to optimize data centers by reducing the amount of energy used to cool servers when they get hot.

Suleyman said that Google has “developed techniques to safely deploy these systems in a controllable way,” countering fears that A.I. systems are left to run on their own accord. But he did not elaborate on what techniques he was talking about.


Google always keeps “a human in the loop,” he said, to ensure that the A.I. systems don’t do something human operators wouldn’t want.

“Humans remain the ultimate controller of the systems,” said Suleyman.

How Google Plans to Own Rio Olympics Coverage

The Alphabet-owned Google on Monday announced several new features that will allow users to keep track of everything happening at the 2016 Rio Olympics, from the opening ceremony on Friday to the closing ceremony on Aug. 21. By searching on Google, users will be able to get event schedules, medal counts, athlete information, and TV schedules in more than 30 countries without needing to click on search results.

But it’s not just about search. Google says that YouTube is getting in the mix by streaming highlights in more than 60 countries. With Google Maps, users will be able to wander around different parts of Rio and gawk at the Olympic venues. Even Google Trends, which tracks search activity across the service, will list top Rio-related search queries worldwide.

Google has also been marketing its mobile app for iOS and Android as a way to receive notifications about major events and medal wins.

Google’s move is the latest salvo from online companies hoping to capitalize on the Olympics. The Games are among the most widely followed events in the world, and the Internet is a critical source of information about the Olympians, the events, and more.

For its part, Google has decided, not surprisingly, to nab traffic through search. Facebook and Twitter, however, are also competing for attention. For instance, Facebook announced last week that it had partnered with NBC to become the Olympics’ “social command center” by streaming exclusive Olympics video content through Facebook and Instagram.


However, Google has the early lead by already displaying event schedules and medal standings when users search for “Rio Olympics,” among other Olympic-related search queries.

Chinese Search Giant Baidu Just Backed This Fintech Company

Baidu is making an undisclosed investment in ZestFinance, a U.S.-based startup that blends machine learning with big data analysis to produce more accurate credit scores, the Chinese search juggernaut announced on Monday.

This isn’t the first Chinese investment in ZestFinance. E-commerce site JD.com made an undisclosed investment in ZestFinance last year.

Founded by Google veteran Douglas Merrill, ZestFinance launched three years ago to apply big data analysis to credit scoring and help lenders more accurately evaluate prospective borrowers. Instead of determining creditworthiness based on 10 to 15 pieces of data, ZestFinance uses tens of thousands of data points to assess, in a matter of seconds, a borrower’s ability to pay back loans. To date, ZestFinance has raised $272 million in funding.

China provides a huge opportunity for ZestFinance because there is no centralized credit bureau like in the United States, says Merrill, who previously served as chief information officer and vice president of engineering at Google. ZestFinance has said previously that only 20% of Chinese citizens have credit cards while the rest of the population uses only cash and debit cards to pay for items.

As part of the investment, Baidu will be using ZestFinance’s underwriting technology to determine the creditworthiness of its users. For example, if an adult user is searching for video games in the middle of the day, it could be inferred that he or she doesn’t have a job and isn’t a student, which is a signal that can feed into determining whether that user is a good credit risk.

Merrill acknowledges that it’s not a perfect signal, but notes that in the Chinese market there isn’t as much financial data being shared as in the U.S. “Nobody has ever proven that it’s possible to turn search data into credit data, and this is exciting,” he says.

Merrill adds that using search data for credit scores in the U.S. is less likely. “People have different expectations of privacy here in the U.S., plus we have a lot more financial data to evaluate compared to China,” he explains.
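ZestFinance’s models are proprietary, so here is a purely hypothetical sketch of the general idea: a logistic regression fit over a few invented signals (the third standing in for a weak, search-derived feature), producing a repayment probability for each borrower.

```python
import numpy as np

rng = np.random.default_rng(3)

# Entirely synthetic borrowers with three made-up signals, e.g. bill-payment
# history, account age, and a weak search-derived feature. Real systems use
# tens of thousands of such data points.
n = 400
X = rng.normal(size=(n, 3))
true_w = np.array([1.5, 0.8, 0.3])            # the search signal is weakest
p_repay = 1 / (1 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p_repay).astype(float)   # 1 = repaid, 0 = defaulted

# Fit a logistic regression by plain gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / n

score = 1 / (1 + np.exp(-(X @ w)))  # each borrower's estimated repayment odds
print(w.round(2))  # recovers the relative strength of the three signals
```

The point of the sketch is Merrill’s caveat in miniature: a weak signal like search behavior earns a small weight on its own, and becomes useful only in combination with many other data points.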


Merrill declined to comment specifically on how Baidu is going to be using the credit scores. The company has also been working with JD.com to create credit scores for shoppers who want to apply for a line of credit to buy items from the e-commerce company.

“ZestFinance’s unique ability to analyze and process complex, disparate data to make accurate credit decisions is very valuable to the Chinese credit market, where a centralized credit scoring system has yet to emerge,” wrote Tony Yip, global head of investment, mergers and acquisitions at Baidu, in a statement.

A Company You Don’t Expect Could Make $1 Billion On A Yahoo Sale

Yahoo CEO Marissa Mayer may have had a good idea back in 2014. But now it may turn out to be a very bad one.

In 2014, Mayer signed a deal with Mozilla to make Yahoo the default search engine in Mozilla’s popular Firefox browser. In doing so, Yahoo replaced Google, although Firefox users still had the option to switch back to Google by changing the browser’s settings.

For Mayer, the deal was a potentially pivotal one. At the time, Firefox was one of the most popular browsers in the world, and Mayer knew from her days as a Google executive that it accounted for a large chunk of search traffic. Taking over from Google gave Yahoo that search traffic and, Mayer hoped, more users for its own services.

Neither Mozilla nor Yahoo, however, has divulged details about the partnership’s success or lack thereof.

But according to tech news site Recode, which obtained a copy of the agreement between Yahoo and Mozilla, there is a clause that would allow the browser maker to leave the partnership if Yahoo is acquired. If Yahoo is acquired anytime soon and Mozilla walks away, the new owner would still need to pay Mozilla $375 million annually through 2019, or about $1 billion in total.
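The roughly $1 billion figure is straightforward arithmetic on the annual payment; assuming, for illustration, a sale that closes in time for three payment years:

```python
annual_payment = 375_000_000   # what a new Yahoo owner would owe Mozilla per year
years = 3                      # e.g. 2017 through 2019, assuming a 2017 close
total = annual_payment * years
print(f"${total / 1e9:.3f} billion")  # $1.125 billion -- "about $1 billion"
```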

Yahoo is currently reviewing bids for its core business, which includes search. While Yahoo has remained silent about those negotiations, it’s believed that several major companies, including Verizon, submitted offers. Yahoo is expected to start its final bidding round soon and could announce a new owner sometime this summer.

If that happens, Mozilla could then exercise its clause in its Yahoo contract, according to Recode, and collect its $375 million annually. And all Mozilla must do in that scenario is simply say that it doesn’t want to work with Yahoo’s new owner—a quick and easy way to make $1 billion.

That said, Mozilla may choose not to exercise the clause and instead remain under contract with any new Yahoo owner. However, sources told Recode that the partnership hasn’t been all that “profitable,” so it might make sense for Mozilla to cut ties with Yahoo, sign a new deal with another company, and keep collecting from Yahoo’s new owner.

The revelation could have a profound impact on Yahoo and its ultimate selling price. The clause, which Mayer included simply because she didn’t believe Yahoo would actually be sold, according to the report, adds more risk to a Yahoo buyout. It means that in addition to the acquisition price a company pays for Yahoo’s core business, which is expected to come in between $5 billion and $8 billion, the buyer would also need to be ready to pay $1 billion over a few years to rid itself of the Mozilla deal.

In an interview with Recode, one person who claims to be in the running to buy Yahoo called the Mozilla payoff “very hairy.” Another prospective buyer told Recode it was “worrisome.”


For its part, Yahoo has not commented on the clause and did not respond to Fortune’s request for comment. However, Mozilla chief legal and business officer Denelle Dixon-Thayer hinted that the clause is indeed real and could be exercised if need be.

“We are carefully watching this process and we remain closely engaged with Yahoo on this,” Dixon-Thayer said. “Each of our search agreements is the result of a competitive process reflective of the value that Firefox brings to the ecosystem. Naturally, as with any important agreement, it’s critical to consider all foreseeable events to make sure there is protection against downside risk.”

Google Could Face Yet Another Antitrust Complaint

The European Union’s competition watchdog is marching toward a third antitrust complaint against Google, claiming that its advertising services, including the popular AdWords, violate competition rules, Bloomberg reports, citing sources who claim to have knowledge of the plans. The sources didn’t say when the charges could be leveled, though competition commissioner Margrethe Vestager said in May that she hoped her office could come to a conclusion on AdWords “within a reasonable timeframe.”

If Google is slapped with the antitrust complaint, it would be the third in the EU alone. The company was charged last year with unfairly displaying search results and promoting its own shopping services over competing alternatives. In April, Google was again hit with a formal complaint, this one claiming it was abusing the power of its dominant mobile operating system, Android.

In both cases, Google has argued that it’s innocent, and it must soon answer the formal complaint against Android. In April, the EU said that Android’s dominance in Europe, where it holds a commanding market share lead over all others, allows Google to impose “restrictions on Android device manufacturers and mobile network operators” that, officials believe, run afoul of competition rules.

The AdWords investigation is actually quite old. The EU announced that it was investigating AdWords in 2011 to determine whether Google was illegally harming other advertising services and funneling marketers into its own search service to the detriment of others. AdWords has been the focal point of the investigation because it offers a service for marketers and companies to advertise their products and services on Google Search.

AdWords runs on Google Search and matches a person’s query with the keywords a marketer wants to target. Marketers set a budget on how much to spend, and Google collects advertising revenue when users click on the resulting ads.
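As a much-simplified, hypothetical sketch of how such a system can rank keyword bids (the real AdWords auction is considerably more involved), consider weighting each bid by an estimated ad quality and charging the winner just enough to hold its spot:

```python
# A simplified sketch of a search ad auction: each advertiser's keyword bid
# is weighted by an estimated ad quality, and the winner pays just enough to
# keep its position (a second-price flavor). All numbers are invented.
ads = [
    {"advertiser": "A", "bid": 2.00, "quality": 0.6},
    {"advertiser": "B", "bid": 1.50, "quality": 0.9},
    {"advertiser": "C", "bid": 1.00, "quality": 0.7},
]

# Rank by bid x quality, so a cheaper but better ad can outrank a pricier one.
ranked = sorted(ads, key=lambda ad: ad["bid"] * ad["quality"], reverse=True)
winner, runner_up = ranked[0], ranked[1]

# Winner pays the minimum bid that would still beat the runner-up's rank.
price = runner_up["bid"] * runner_up["quality"] / winner["quality"] + 0.01
print(winner["advertiser"], round(price, 2))
```

Note how advertiser B wins despite bidding less than A, because its higher quality weight lifts its rank; that quality-weighting is the kind of design detail regulators scrutinize when asking whether an auction favors anyone unfairly.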

Google has been one of the hottest targets for Margrethe Vestager, who only took the job in 2014. In less than two years, Vestager has taken on Google in two cases and now might be planning a third.

For Google, the stakes are high. Under EU law, a company found to have violated antitrust regulations can be fined up to 10% of its revenue. It’s possible, therefore, that Google could pay more than $7 billion to settle a case.
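The $7 billion estimate tracks that cap: Alphabet reported roughly $75 billion in revenue for 2015.

```python
revenue_2015 = 75_000_000_000   # Alphabet's 2015 revenue, roughly $75 billion
max_fine = revenue_2015 // 10   # EU antitrust fines are capped at 10% of revenue
print(f"${max_fine / 1e9:.1f} billion")  # $7.5 billion
```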

But that’s just one of many problems Alphabet’s Google is facing. In addition to European scrutiny, a report surfaced in May saying that the U.S. Federal Trade Commission has also restarted discussions over whether Google has abused its position as the top search engine in the U.S. The agency previously investigated Google on the matter in 2013.

Google declined to comment on the possibility of a third EU antitrust complaint.

5 Questions To Test Your Google IQ

The biggest news came in August when the search giant made the blockbuster decision to transform itself into a holding company called Alphabet. That new corporate umbrella oversees a stable of quasi-independent businesses like Internet-connected device maker Nest, a sci-fi lab known as X that is developing a self-driving car, and, of course, Google, the ubiquitous search engine.

But morphing into Alphabet wasn’t the only big news to come from Google’s parent this past year. For example, on two occasions over two months, Alphabet surpassed Apple in market value.

To find out how well you know Google and its siblings under Alphabet, take the following test and see how you score: