Intel Doubles Down on AI & Machine Learning

Diane Bryant, executive vice president and general manager of the Data Center Group at Intel

This week, Intel Corporation announced a range of new products, technologies and investments from the edge to the data center to help expand and accelerate the growth of artificial intelligence (AI). Intel sees AI transforming the way businesses operate and how people engage with the world. Intel is assembling the broadest set of technology options to drive AI capabilities in everything from smart factories and drones to sports, fraud detection and autonomous cars.

At an industry gathering led by Intel CEO Brian Krzanich, Intel shared how both the promise and complexities of AI require an extensive set of leading technologies to choose from and an ecosystem that can scale beyond early adopters. As algorithms become more complex and the data sets they require grow, Krzanich said Intel has the assets and know-how required to drive this computing transformation.

In a blog Krzanich said: “Intel is uniquely capable of enabling and accelerating the promise of AI. Intel is committed to AI and is making major investments in technology and developer resources to advance AI for business and society.”

Intel’s Robust AI Platform

Intel announced plans to usher in the industry’s most comprehensive portfolio for AI – the Intel Nervana platform. Built for speed and ease of use, the Intel Nervana portfolio is the foundation for highly optimized AI solutions, enabling more data professionals to solve the world’s biggest challenges on industry standard technology.

Intel also provided details of where the breakthrough technology from Nervana will be integrated into the product roadmap. Intel will test first silicon (code-named “Lake Crest”) in the first half of 2017 and will make it available to key customers later in the year. Lake Crest is optimized specifically for neural networks to deliver the highest performance for deep learning, offering unprecedented compute density with a high-bandwidth interconnect. In addition, Intel announced a new product (code-named “Knights Crest”) on the roadmap that tightly integrates best-in-class Intel Xeon processors with the technology from Nervana.

“We expect the Intel Nervana platform to produce breakthrough performance and dramatic reductions in the time to train complex neural networks,” said Diane Bryant, executive vice president and general manager of the Data Center Group at Intel. “Before the end of the decade, Intel will deliver a 100-fold increase in performance that will turbocharge the pace of innovation in the emerging deep learning space.”

Bryant also announced that Intel expects the next generation of Intel Xeon Phi processors (code-named “Knights Mill”) to deliver up to 4x better deep learning performance than the previous generation, with availability in 2017. In addition, Intel announced it is shipping a preliminary version of the next generation of Intel Xeon processors (code-named “Skylake”) to select cloud service providers. With AVX-512, an integrated acceleration capability, these Intel Xeon processors will significantly boost the performance of inference for machine learning workloads. Additional capabilities and configurations will be available when the platform family launches in mid-2017 to meet the full breadth of customer segments and requirements.
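To see why wider vector instructions matter for inference, consider the dense dot products at the heart of neural network layers. The following is an illustrative back-of-the-envelope sketch (not Intel's implementation or any real AVX-512 code): a 512-bit register holds 16 single-precision floats, so each fused multiply-add instruction covers twice as many elements as a 256-bit AVX2 register does.

```python
# Illustrative sketch: how SIMD register width reduces the instruction
# count for the dot products that dominate inference workloads.
# A 512-bit register holds 16 fp32 lanes; a 256-bit register holds 8.

def fp32_lanes(register_bits: int) -> int:
    """Number of 32-bit float lanes in a SIMD register of the given width."""
    return register_bits // 32

def fma_instructions(n_elements: int, register_bits: int) -> int:
    """Fused multiply-add instructions needed for an n-element dot product,
    ignoring remainder handling and the final horizontal reduction."""
    lanes = fp32_lanes(register_bits)
    return (n_elements + lanes - 1) // lanes  # ceiling division

n = 4096  # e.g., one row of a fully connected layer's weight matrix
avx2 = fma_instructions(n, 256)    # 512 instructions
avx512 = fma_instructions(n, 512)  # 256 instructions
print(avx2, avx512, avx2 / avx512)
```

This models only instruction count; realized speedups depend on memory bandwidth, frequency behavior, and how well the workload vectorizes.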

Enabling AI Everywhere and Cloud Alliance with Google*

Aside from silicon, Intel highlighted other AI assets, including Intel Saffron Technology, a leading solution for customers looking for business insights. The Saffron Technology platform leverages memory-based reasoning techniques and transparent analysis of heterogeneous data. This technology is also particularly well-suited to small devices, making intelligent local analytics possible across IoT and helping advance state-of-the-art collaborative AI.

To simplify deployment everywhere, Intel also delivers common, intelligent APIs that extend across Intel’s distributed portfolio of processors from edge to cloud, as well as embedded technologies such as Intel RealSense cameras and Movidius* vision processing units (VPUs).

Intel and Google announced a strategic alliance to help enterprise IT deliver an open, flexible and secure multi-cloud infrastructure for their businesses. The collaboration includes technology integrations focused on Kubernetes* (containers), machine learning, security and IoT.

To further AI research and strategy, Intel announced the formation of the Intel Nervana AI board, which will feature leading industry and academic thought leaders. Intel announced four founding members: Yoshua Bengio (University of Montreal), Bruno Olshausen (UC Berkeley), Jan Rabaey (UC Berkeley) and Ron Dror (Stanford University).

Additionally, Intel is working to make AI truly accessible. To help accomplish this, Intel has introduced the Intel Nervana AI Academy for broad developer access to training and tools. Intel also introduced the Intel Nervana Graph Compiler to accelerate deep learning frameworks on Intel silicon.

In conjunction with the AI Academy, Intel announced a partnership with leading global education provider Coursera* to provide a series of AI online courses to the academic community. Intel also launched a Kaggle competition (coming in January), jointly with Mobile ODT*, in which the academic community can put their AI skills to the test on real-world socioeconomic problems, such as early detection of cervical cancer in developing countries through the use of AI for soft tissue imaging.

“Intel can offer crucial technologies to drive the AI revolution, but ultimately we must work together as an industry – and as a society – to achieve the ultimate potential of AI,” said Doug Fisher, senior vice president and general manager of the Software and Services Group at Intel.

With the addition of the new edge and data center products, as well as the enablement programs, Intel has the full complement of technologies and ecosystem reach required to deliver the scale and promise of AI for everyone.

AI for the Betterment of Society

Lastly, Intel showcased some of the initiatives the company is investing in and partnering on to help maximize the positive impact of AI on the world. They include:

Intel is committing $25 million to the Broad Institute* to drive high-performance computing for genomics analytics. Through a five-year collaboration, researchers and software engineers at the Intel-Broad Center for Genomic Data Engineering will build, optimize and widely share new tools and infrastructure that will help scientists integrate and process genomic data. The project aims to optimize best practices in hardware and software for genome analytics to make it possible to access and use research data sets that reside on private, public and hybrid clouds.

Intel is a founding partner of Hack Harassment*, a cooperative effort with the mission of reducing the prevalence and severity of online harassment. The initiative is evaluating AI technology as a tool in this effort and is working to develop an intelligent algorithm to detect and deter online harassment. Over time, this capability will be released as an open source API that can be used in a variety of applications.
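The open source API described above had not been released at the time of writing, so as a purely hypothetical sketch (not the Hack Harassment initiative's actual algorithm or interface), a harassment-detection endpoint might expose a scoring function like the one below, with a trivial keyword heuristic standing in for a trained model:

```python
# Hypothetical sketch of a harassment-scoring interface. The word list
# and threshold are toy assumptions for illustration, not real training
# data; a production system would use a learned classifier instead.

ABUSIVE_TERMS = {"idiot", "loser", "stupid"}  # toy list, illustration only

def harassment_score(text: str) -> float:
    """Return a score in [0, 1]: the fraction of tokens flagged as abusive."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in ABUSIVE_TERMS)
    return hits / len(tokens)

def is_harassing(text: str, threshold: float = 0.2) -> bool:
    """Binary decision an API consumer might act on (e.g., flag for review)."""
    return harassment_score(text) >= threshold

print(is_harassing("You are such an idiot"))   # True
print(is_harassing("Great talk, thank you!"))  # False
```

The appeal of shipping this as an API, as the initiative proposes, is that many applications can share one detection capability without each building its own model.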

Intel is also a key partner of the National Center for Missing & Exploited Children* (NCMEC), a nonprofit whose mission is to help find missing children, reduce child sexual exploitation and prevent child victimization. Intel is providing AI technology and advising the center with the goal of accelerating the critical work of NCMEC’s analysts to respond to reports of child sexual exploitation.
