Archive for the ‘tensor flow’ Category

Couldn’t help but notice these two in-the-same-orbit headlines from Amazon and Google re: their own AI chips.

First, The Information is reporting that Amazon is developing an AI chip for the Echo and other hardware powered by Alexa.

They report that the chip should let Alexa-powered devices respond to commands more quickly by handling more of the data processing on the device rather than in the cloud.

It seems the cloud’s center of gravity is shifting back toward the edge.

And at Google, according to a post on the Google Cloud Platform blog, the company’s Cloud Tensor Processing Units (TPUs) are now available in beta to help machine learning experts train and run their ML models more quickly.

Some speeds and feeds deets:

Cloud TPUs are a family of Google-designed hardware accelerators that are optimized to speed up and scale up specific ML workloads programmed with TensorFlow. Built with four custom ASICs, each Cloud TPU packs up to 180 teraflops of floating-point performance and 64 GB of high-bandwidth memory onto a single board. These boards can be used alone or connected together via an ultra-fast, dedicated network to form multi-petaflop ML supercomputers that we call “TPU pods.” We will offer these larger supercomputers on GCP later this year.
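The per-board figures quoted above are enough for some back-of-envelope arithmetic. A quick sketch (the per-ASIC split is a simple division of the stated board totals; the post doesn’t say how many boards make up a TPU pod, so that’s left out):

```python
# Back-of-envelope arithmetic from the Cloud TPU figures quoted above.
BOARD_TFLOPS = 180    # peak floating-point performance per board, per the post
BOARD_HBM_GB = 64     # high-bandwidth memory per board, per the post
ASICS_PER_BOARD = 4   # custom ASICs per board, per the post

tflops_per_asic = BOARD_TFLOPS / ASICS_PER_BOARD  # 45.0 TFLOPS per ASIC
hbm_per_asic = BOARD_HBM_GB / ASICS_PER_BOARD     # 16.0 GB HBM per ASIC

print(f"{tflops_per_asic} TFLOPS and {hbm_per_asic} GB HBM per ASIC")
```

So each custom ASIC works out to roughly 45 TFLOPS backed by 16 GB of HBM, assuming the performance and memory are split evenly across the four chips.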

IBM has announced that PowerAI, its distribution of popular open-source machine learning and deep learning frameworks for the POWER8 architecture, now supports the TensorFlow 0.12 framework, originally created by Google.

TensorFlow support through IBM PowerAI gives enterprises another option for fast, flexible, production-ready tools and support for developing advanced machine learning products and systems.

As one of the fastest-growing fields of machine learning, deep learning makes it possible to process enormous datasets with millions or even billions of elements and extract useful predictive models. Deep learning is transforming the businesses of leading consumer web and mobile application companies, and it is quickly being adopted by more traditional enterprises as well.

IBM developed PowerAI as an enterprise distribution of, and support offering for, the open-source machine learning and deep learning frameworks used to build cognitive applications. PowerAI helps reduce the complexity and risk of deploying these open-source frameworks for enterprises on the Power architecture.

PowerAI is tuned for performance. It offers enterprise support on the IBM Power Systems S822LC for HPC platform, used by thousands of developers in commercial, academic, and hyperscale environments. These Power systems pair IBM’s POWER8 with NVIDIA NVLink processor, connected via the high-speed NVLink interface to NVIDIA’s Pascal-based Tesla P100 GPU accelerators. The CPU-to-GPU and GPU-to-GPU NVLink connections give a performance boost to deep learning and analytics applications.

In addition, deep learning and other machine learning techniques are being deployed across a wide range of industry sectors, including banking, automotive, and retail.

IBM also added the Chainer deep learning framework to the latest release of PowerAI.