Deep Learning Obstacles: What’s the Lesson?

Historically, Google has rarely shared its powerful arsenal of hardware and software technologies, a secrecy that helped build its reputation and market dominance over the years. Google has been one of the driving forces behind advanced data-transmission technologies used across the globe, yet it maintained its strategic edge over competitors by carefully withholding older technologies until it had moved on to newer ones. Google was not known for open sourcing until recently, when it released its newest Deep Learning project.

When the tech giant began its Brain Project in 2011, it had a number of stumbles in its research work involving neural networks. This fascinating area of research began in the 1960s and '70s, peaked in the '80s, and then all but disappeared in the early '90s. It has only recently seen a resurgence, with many companies working in Machine Learning, Artificial Intelligence, and various Deep Learning projects. Google is putting resources into this work, and its push to share with others is changing the way Deep Learning is evolving.

Deep Learning Lessons at Google

In its early years, research related to Artificial Intelligence and Deep Learning at Google suffered from the following drawbacks:

Issue Number 1: One obvious reason was the computational limitations of the machines of that era, which made training large models impractical.

Issue Number 2: The second and more severe reason was the absence of large, compelling data sets.

Issue Number 3: The third and most crippling reason was that research and engineering groups at Google rarely worked together or shared technologies with each other.

The most valuable takeaway from these early problems in neural network projects was that Deep Learning, with its inherently cross-disciplinary applications, required cross-disciplinary groups of experts to work together and share valuable knowledge. The building blocks of Deep Learning can be applied widely across speech, text, images, labels, and audio to achieve similar results.

Google’s Dramatic Change in Business Philosophy

When Wired.com announced that Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine in November 2015, the global Data Science community was in a state of shock. Google made a dramatic U-turn from its traditional business philosophy and decided to offer TensorFlow, an Artificial Intelligence platform, free of cost. So why did Google suddenly decide to change its strategy and open source TensorFlow? The strongest reason, according to a Google insider, was the need to promote the growth of Machine Learning among data researchers and the Data Scientist community. Also, Deep Learning originated on academic campuses, where researchers are known to freely share ideas. Many of these academic minds later joined Google, such as Geoff Hinton, a renowned professor at the University of Toronto who is also acclaimed as the Godfather of Deep Learning. The open source movement in neural networks and Deep Learning gave a considerable push to research over the past decade, thus encouraging enterprises like Google to embrace open source.

Deep Learning Platforms at Google: From DistBelief to TensorFlow

TensorFlow’s predecessor, DistBelief, had components too closely tied to Google’s internal infrastructure to allow the company to share the code with outsiders. Although DistBelief showed much promise for visual recognition in 2014, it was much slower than TensorFlow. Additionally, TensorFlow’s AI engine offers greater flexibility through bundled code that can be integrated into any third-party application to accomplish predefined tasks like speech or image recognition, language translation, or voice analysis. In TensorFlow, Machine Learning developers feed data to built-in software libraries, which deliver results through learning. Although TensorFlow’s internals are written in C++, programmers can use Python to develop applications. Looking ahead, TensorFlow is expected to promote the use of other languages, such as Java or JavaScript, for application development.
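The "define a graph, then feed it data" workflow described above can be sketched in a few lines of plain Python. This toy `Node` class is purely illustrative, an assumption of this article rather than any real TensorFlow API; it only shows the dataflow-graph idea: operations are declared first as nodes, and values flow through them when the graph is evaluated with a feed of input data.

```python
# Toy illustration of a dataflow graph (NOT the real TensorFlow API):
# operations are declared up front, and data flows through the graph
# only when it is evaluated with a feed of input values.

class Node:
    """A graph node that computes its value from its input nodes."""
    def __init__(self, op, *inputs):
        self.op = op
        self.inputs = inputs

    def evaluate(self, feed):
        # Leaf nodes ("placeholders") look their value up in the feed.
        if self.op == "placeholder":
            return feed[self]
        args = [node.evaluate(feed) for node in self.inputs]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError(f"unknown op: {self.op}")

# Declare the graph once: y = (a + b) * c
a = Node("placeholder")
b = Node("placeholder")
c = Node("placeholder")
y = Node("mul", Node("add", a, b), c)

# Feed data into the graph and evaluate it, session-style.
result = y.evaluate({a: 2, b: 3, c: 4})
print(result)  # (2 + 3) * 4 = 20
```

The separation between declaring the graph and running it is what lets an engine like TensorFlow optimize and execute the same computation on different back ends.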

According to Jeff Dean, a Fellow in Google’s Systems Infrastructure Group, a remarkable difference between DistBelief and TensorFlow is that TensorFlow is also suitable for “reinforcement learning” and “logistic regression,” which DistBelief was not. TensorFlow may eventually run on various types of devices.
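To make the second of those model classes concrete, here is a minimal logistic-regression sketch in NumPy. It is a generic illustration of the technique, not Google's implementation: a weight and bias are fitted by gradient descent on the cross-entropy loss so that a sigmoid output separates two one-dimensional classes.

```python
# Minimal logistic regression fitted by gradient descent (generic
# illustration, not TensorFlow code).
import numpy as np

# Tiny synthetic data: class 0 clustered below zero, class 1 above.
x = np.array([-3.0, -2.0, -1.5, 1.5, 2.0, 3.0])
t = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    p = sigmoid(w * x + b)           # predicted probabilities
    grad_w = np.mean((p - t) * x)    # gradient of cross-entropy loss w.r.t. w
    grad_b = np.mean(p - t)          # gradient w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

predictions = (sigmoid(w * x + b) > 0.5).astype(int)
print(predictions)  # → [0 0 0 1 1 1]
```

A framework like TensorFlow automates exactly the step hand-coded here, deriving the gradients from the declared graph instead of requiring them to be written out by hand.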

The Current Status of Deep Learning at Google

Now, superior computational power and Big Data-enabled data sets have created a storm in Deep Learning at Google. The significant step Google took after learning from its past research mistakes was to move its research operations into the mainstream corridors of cross-functional teams. Now project teams from Gmail, Photos, and Android collaborate and solve problems together, making Deep Learning research a truly democratic engagement. By the time TechCrunch published this press release in March 2016, thousands of Machine Learning developers and Deep Learning researchers had already found that TensorFlow made it rather easy to use ML techniques to power features such as Smart Reply in Inbox. Review the article Google Releases New TensorFlow Update for more information.

The lessons Google learned above can be embraced by other companies across the globe.

Google’s Efforts to Teach Deep Learning

Google has not stopped at offering a Deep Learning research platform. The enterprise has gone the extra mile to educate the Data Science community about Deep Learning. The Venture Beat press release titled Google Launches a Deep Learning Course on Udacity announced in January of this year that Google was bringing a Deep Learning course to the online learning platform Udacity. The phenomenal success of this launch was evident when the course’s GitHub repository drew 4,000 users within a few weeks and was starred over 16,000 times by enthusiasts worldwide.

Vincent Vanhoucke, principal scientist and technical lead at Google, is responsible for teaching the course. The Udacity course also covers TensorFlow and many types of neural networks, and it is offered free of charge, like the TensorFlow software itself.

The article Google Launches Free Course on Deep Learning reports that the learning systems in TensorFlow will expose students to many problems once considered unsolvable. The course is designed to enable students to solve these same problems through advanced Deep Learning techniques.

TensorFlow Disappoints Some

On the flip side of raving press releases and feature articles, one can find posts like Google TensorFlow Deep Learning Disappoints, written by someone who considers himself a student of Machine Learning and an ardent fan of Google. The author of this KDnuggets post observes that TensorFlow was aimed at democratizing Deep Learning, but asks whether it has delivered: although the well-designed tutorials ran smoothly and produced the promised results, a sense of novelty was missing.

Google’s Benefit from Sharing Knowledge

Google firmly believes that TensorFlow and the free Deep Learning courseware on Udacity will help accelerate the Artificial Intelligence wave that was started and stopped so many times before. Now that the right knowledge is freely available and accessible to the global Data Science community, data researchers and scientists throughout the world will hopefully improve Google’s technology and return the benefits to the company. Recently, many reputed brands such as Microsoft, Facebook, and Twitter have made much progress in Artificial Intelligence and have offered open-source platforms similar to TensorFlow. However, the Deep Learning movement at Google is several years ahead of them all.

About the author

Paramita Ghosh has over two and a half decades of business writing experience, much of it in the technology and business domains. She has written extensively for a broad range of industries, including but not limited to data management and data technologies. Paramita has also contributed to blended learning projects. She received her M.A. degree in English Literature in 1984 from Jadavpur University in India, and embarked on her career in the United States in 1989 after completing professional coursework. Having ghostwritten and authored hundreds of articles, blog posts, white papers, case studies, marketing content, and learning modules, Paramita plans to author one or two books on the business of business writing as part of her post-retirement projects. She considers her professional strength to be “lifelong learning.”