Sometimes, the easiest way to wrap your head around new information is to see it. Today at I/O, we announced features in Google Search and Google Lens that use the camera, computer vision and augmented reality (AR) to overlay information and content onto your physical surroundings, helping you get things done throughout your day.

AR in Google Search

With new AR features in Search rolling out later this month, you can view and interact with 3D objects right from Search and place them directly into your own space, giving you a sense of scale and detail. For example, it’s one thing to read that a great white shark can be 18 feet long. It’s another to see it up close in relation to the things around you. So when you search for select animals, you’ll get an option right in the Knowledge Panel to view them in 3D and AR.

Bring the great white shark from Search to your own surroundings.

We’re also working with partners like NASA, New Balance, Samsung, Target, Visible Body, Volvo, Wayfair and more to surface their own content in Search. So whether you’re studying human anatomy in school or shopping for a pair of sneakers, you’ll be able to interact with 3D models and put them into the real world, right from Search.

Search for “muscle flexion” and see an animated model from Visible Body.

New features in Google Lens

People have already asked Google Lens more than a billion questions about things they see. Lens taps into machine learning (ML), computer vision and tens of billions of facts in the Knowledge Graph to answer these questions. Now, we’re evolving Lens to provide more visual answers to visual questions.

Say you’re at a restaurant, figuring out what to order. Lens can automatically highlight which dishes are popular, right on the physical menu. When you tap on a dish, you can see what it actually looks like and what people are saying about it, thanks to photos and reviews from Google Maps.

Google Lens helps you decide what to order.

To pull this off, Lens first has to identify all the dishes on the menu, looking for things like the font, style, size and color to differentiate dishes from descriptions. Next, it matches the dish names with the relevant photos and reviews for that restaurant in Google Maps.
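The two steps above can be sketched in simplified form. This is a hypothetical illustration, not Google's implementation: it classifies OCR'd menu lines as dish names versus descriptions using toy layout cues (font size, capitalization), then fuzzy-matches each dish name against a stand-in review index of the kind Google Maps data might provide. All function names, fields, and thresholds are assumptions for the sketch.

```python
# Hypothetical sketch of menu parsing: separate dish names from descriptions
# by layout cues, then fuzzy-match names to a review index.
from difflib import SequenceMatcher

def is_dish_name(line):
    """Toy heuristic: dish names tend to be short, larger, and title-cased."""
    return (line["font_size"] >= 14
            and line["text"].istitle()
            and len(line["text"].split()) <= 5)

def match_reviews(dish, review_index, threshold=0.6):
    """Fuzzy-match a dish name against keys in a stand-in review index."""
    best, best_score = None, 0.0
    for name in review_index:
        score = SequenceMatcher(None, dish.lower(), name.lower()).ratio()
        if score > best_score:
            best, best_score = name, score
    return best if best_score >= threshold else None

# Pretend OCR output: each line carries its text and an estimated font size.
ocr_lines = [
    {"text": "Pad Thai", "font_size": 16},
    {"text": "rice noodles with tamarind and peanuts", "font_size": 10},
    {"text": "Green Curry", "font_size": 16},
]
review_index = {"pad thai": ["Best noodles in town!"],
                "green curry": ["Spicy but great."]}

dishes = [l["text"] for l in ocr_lines if is_dish_name(l)]
matches = {d: match_reviews(d, review_index) for d in dishes}
# matches -> {'Pad Thai': 'pad thai', 'Green Curry': 'green curry'}
```

The description line is filtered out by both cues (small font, no title case), showing why combining several visual signals makes the split more robust than any one alone.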

Google Lens translates the text and puts it right on top of the original words.

Lens can be particularly helpful when you’re in an unfamiliar place and you don’t know the language. Now, you can point your camera at text and Lens will automatically detect the language and overlay the translation right on top of the original words, in more than 100 languages.
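The overlay idea can be illustrated with a minimal sketch: take OCR'd text boxes, translate each string, and re-emit the translated text at the original coordinates so a renderer could draw it over the source words. The dictionary here is a stub standing in for a real translation service, and all names and fields are assumptions, not Google's pipeline.

```python
# Illustrative overlay step: replace each OCR box's text with its translation
# while preserving the box's on-screen position.
TRANSLATIONS = {"salida": "exit", "entrada": "entrance"}  # stub translator

def overlay_translations(ocr_boxes):
    overlaid = []
    for box in ocr_boxes:
        translated = TRANSLATIONS.get(box["text"].lower(), box["text"])
        # Keep the original coordinates so the translation lands on top
        # of the source words when drawn.
        overlaid.append({"text": translated, "x": box["x"], "y": box["y"]})
    return overlaid

boxes = [{"text": "Salida", "x": 40, "y": 120}]
result = overlay_translations(boxes)
# result -> [{'text': 'exit', 'x': 40, 'y': 120}]
```

Keeping the geometry untouched while swapping only the text is what makes the translation appear "in place" rather than in a separate panel.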

We're also working on other ways to connect helpful digital information to things in the physical world. For example, beginning next month at the de Young Museum in San Francisco, you can use Lens to see hidden stories about the paintings, directly from the museum’s curators. Or if you see a dish you’d like to cook in an upcoming issue of Bon Appetit magazine, you’ll be able to point your camera at a recipe and have the page come to life and show you exactly how to make it.

See a recipe in Bon Appetit come to life with Google Lens.

Bringing Lens to Google Go

More than 800 million adults worldwide struggle to read things like bus schedules or bank forms. So we asked ourselves: “What if we used the camera to help people who struggle with reading?”

When you point your camera at text, Lens can now read it out loud to you. It highlights the words as they are spoken, so you can follow along and understand the full context of what you see. You can also tap on a specific word to search for it and learn its definition. This feature is launching first in Google Go, our Search app for first-time smartphone users. Lens in Google Go is just over 100KB and works on phones that cost less than $50.
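The highlight-as-you-listen behavior can be sketched as follows. This toy example (an assumption, not the Google Go implementation) walks the recognized text word by word and yields each word with its character span, the kind of synchronization data a UI could use to highlight words as a text-to-speech engine speaks them; the TTS call itself is omitted.

```python
# Toy sketch: produce (word, start, end) spans so a UI can highlight
# each word in sync with speech output.
def word_spans(text):
    offset = 0
    for word in text.split():
        start = text.index(word, offset)  # locate word at or after offset
        end = start + len(word)
        yield word, start, end
        offset = end

spans = list(word_spans("the bus departs at noon"))
# spans -> [('the', 0, 3), ('bus', 4, 7), ('departs', 8, 15),
#           ('at', 16, 18), ('noon', 19, 23)]
```

Searching from the running offset (rather than from the start of the string) keeps repeated words from matching an earlier occurrence.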

All these features in Google Search and Google Lens help you explore the world and get things done by putting information and answers where they are most helpful: right on the world in front of you.