As Artificial Intelligence (AI) and Machine Learning (ML) get set to take a giant leap in improving day-to-day life, the key is to democratise these new-age tools for all and benefit the communities of developers, users and enterprise customers, a top Google executive said in Gurugram on Wednesday.

Google, a pioneer in AI, has been focusing on four key components – computing, algorithms, data and expertise — to organise all the data and make it accessible.

“Google as a company has always been at the forefront of computing AI,” Fei-Fei Li, Chief Scientist of Google Cloud AI and ML, told reporters during a media interaction in Gurugram.

Earlier this year, Google announced the second-generation Tensor Processing Units (TPUs) (now called the Cloud TPU) at the annual Google I/O event in the US.

“We announced the Cloud TPU — the second-generation of our processing unit and our intention is to make it available via Google Cloud,” the top executive added.

The concept of AI and ML came into existence long ago, but with the vast availability of data today, sectors like healthcare, banking and retail are adopting the technologies at a faster pace than before.

On Thursday, Facebook announced that all of its user translation services—those little magic tricks that happen when you click “see translation” beneath a post or comment—are now powered by neural networks, which are a form of artificial intelligence.

Back in May, the company’s artificial intelligence division, called Facebook AI Research, announced that they had developed a kind of neural network called a CNN (that stands for convolutional neural network, not the news organization where Wolf Blitzer works) that was a fast, accurate translator.

Now, Facebook says that they have incorporated that CNN tech into their translation system, as well as another type of neural network, called an RNN (the R is for recurrent).
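For readers unfamiliar with the term, the convolution that gives CNNs their name is a simple sliding-window computation: a small learned filter moves across the input, producing one output per position. A minimal one-dimensional sketch in plain Python (not Facebook's actual model, which applies learned filters to word representations at far larger scale):

```python
# Toy 1-D convolution, the core operation in a CNN: a small filter
# slides over the input, producing one output value per position.
def conv1d(signal, kernel):
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# An edge-detecting filter [-1, 1] responds wherever the input changes.
print(conv1d([0, 0, 1, 1, 0], [-1, 1]))  # [0, 1, 0, -1]
```

In a real network the filter values are learned during training rather than hand-picked, and many filters run in parallel over two-dimensional grids of word or image features.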

Facebook says that the new AI-powered translation is 11 percent more accurate than its old approach, a “phrase-based machine translation” technique that wasn’t powered by neural networks.
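The older phrase-based approach can be sketched as a lookup-and-stitch procedure: match the longest known phrase, emit its stored translation, and move on. The phrase table below is invented purely for illustration; production systems learn millions of such entries from parallel text.

```python
# Minimal sketch of phrase-based machine translation: look up fixed
# phrases in a table and stitch their translations together.
# This tiny German-to-English table is invented for illustration.
PHRASE_TABLE = {
    ("guten", "morgen"): "good morning",
    ("wie", "geht's"): "how are you",
}

def translate(words):
    out, i = [], 0
    while i < len(words):
        # Greedily try the longest phrase starting at position i.
        for length in range(len(words) - i, 0, -1):
            phrase = tuple(words[i:i + length])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i += length
                break
        else:
            out.append(words[i])  # pass unknown words through unchanged
            i += 1
    return " ".join(out)

print(translate(["guten", "morgen"]))  # good morning
```

The weakness the article alludes to is visible even here: the system can only recombine phrases it has memorised, whereas a neural model considers the whole sentence at once.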

As an example of the difference between the two translation systems, Facebook demonstrated how the old approach would have translated a sentence from Turkish into English, and then showed how the new AI-powered system would do it.

Microsoft’s conversational speech recognition system has finally reached an error rate of only 5.1 percent, putting it on par with the accuracy of professional human transcribers for the first time ever.

A year ago, Microsoft’s speech and dialog research group refined its system to reach a 5.9 percent word error rate.

This was generally considered to be the average human error rate, but further work by other researchers suggested that 5.1 percent was closer to the mark for humans professionally transcribing speech heard in a conversation.

For over 20 years, a collection of recorded phone conversations known as Switchboard has been used to test speech recognition systems for accuracy.

To reduce the system’s error rate by about 12 percent from last year’s benchmark results, the team incorporated a series of improvements into its neural net-based acoustic and language models.
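Word error rate, the metric used in the Switchboard benchmark, is the word-level edit distance between a system's transcript and a reference transcript, divided by the length of the reference. A minimal sketch of the computation:

```python
# Word error rate (WER): word-level edit distance between a hypothesis
# transcript and the reference, divided by the reference length.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat down", "the cat sat down"))  # 0.0
print(word_error_rate("the cat sat down", "the hat sat down"))  # 0.25
```

A 5.1 percent rate means roughly one word in twenty is inserted, deleted, or substituted relative to what was actually said.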

The process involved using the deep-learning framework Caffe and feeding it data sets of images representative of different tattoo styles.

Once the initial training session was complete, the AI could identify the style of a tattoo with pretty impressive accuracy.

While the AI isn’t implemented in the app yet (the team is still feeding it data), they intend to finish training it before going forward from there.

“AI will help us to classify the remaining 250k pictures… Classification is really important for us because, based on it, we can show users personalized feeds depending on what styles they like, what artists they follow, what those artists are specialized in, etc.”

Without AI to sort images, a person has to view each one, decide what style it represents, tag the image, and then create hashtags so that other users can find it.
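Once every image carries a style label, whether assigned by the classifier or a human tagger, the personalisation step the quote describes reduces to a simple filter over tagged records. A hedged sketch with invented field names and data:

```python
# Sketch of feed personalisation over style-tagged images.
# All records and field names here are invented for illustration.
images = [
    {"id": 1, "style": "traditional", "artist": "ana"},
    {"id": 2, "style": "blackwork", "artist": "bo"},
    {"id": 3, "style": "traditional", "artist": "cy"},
]

def personalized_feed(images, liked_styles, followed_artists):
    """Keep images matching a liked style or a followed artist."""
    return [
        img for img in images
        if img["style"] in liked_styles or img["artist"] in followed_artists
    ]

feed = personalized_feed(images, liked_styles={"traditional"},
                         followed_artists=set())
print([img["id"] for img in feed])  # [1, 3]
```

The hard part, as the article makes clear, is producing the style labels in the first place; without the classifier, a person must tag each image by hand.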