Google Lens Will Use Your Camera To Understand What’s Around You

Today at Google I/O, Google pushed hard on its agenda as an AI-first company, from bringing Google Assistant to the iPhone, to the new TPUs that power its cloud, to improvements in its AI-powered Google Photos.

So it is no surprise that the company also announced a new AI-powered feature called Google Lens. Google Lens is camera tech that tries to figure out what you are pointing your camera at. It is like Google Image Search, but instead of uploading a photo, you use the camera to understand your surroundings.

With Google Lens, your smartphone camera won’t just see what you see, but will also understand what you see to help you take action. #io17 pic.twitter.com/viOmWFjqk1

Google Lens will live in Google Photos first, but Google said it will come to Google Assistant and, later on, to all Google products. The company showed demos in which Google Lens identified a flower, read a Wi-Fi network’s name and password to connect your phone automatically (which was pretty cool), and recognized a business, surfacing its listing as a card in the user interface.

In the Google Assistant implementation, Google Lens will help translate foreign languages or pull ticket information from a concert you saw advertised on the street.

Google Lens is strongly reminiscent of Word Lens, an app Google picked up through an acquisition and folded into Google Translate, where it is now used to translate text in real time through your camera, which is really cool.

Google Lens is also rather similar to what Samsung says is part of Bixby’s arsenal, and to what Sony has offered with its Info-eye app for a while now. Better late than never, as they say, and we now wait for Google to officially release it.