Bringing Artificial Intelligence to Life

Artificial Intelligence (AI) may seem like a vision for a distant future, but in truth, AI is all around us: machines are increasingly learning to sense, learn, reason, act and adapt in the real world. This is transforming industries and changing our lives in remarkable new ways by amplifying human capabilities, automating tedious or dangerous tasks, and helping solve some of our most challenging societal problems. In this article, we’ll discuss the path to AI with Intel technologies. Let’s take a closer look at AI’s primary enabler, machine learning, as well as its younger sibling, deep learning.

Machine Learning

While less than 10 percent of servers worldwide were deployed in support of machine learning last year [1], machine learning is the fastest-growing field of AI and a key computational method for expanding it. At its core, machine learning is the use of computer algorithms to make predictions based on data, allowing machines to act or decide without being explicitly programmed to perform specific functions. The machine is trained to recognize insightful patterns and connections in complex data, and then to score or classify new, incoming data to perform tasks. Currently, it can take weeks to train machine learning models, impeding their ability to learn from new data and information in real time. However, with the explosion of data in our smarter and more connected world, along with increased compute power, machine learning models are becoming significantly more accurate and useful. Today, Intel processors power 97 percent of servers deployed to support machine learning workloads [2].
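The train-then-score cycle described above can be sketched in a few lines of Python. This is a hypothetical toy model (a nearest-centroid classifier with made-up "cat"/"dog" data), not any specific Intel library, but it shows the two phases: training learns patterns from labeled data, and scoring classifies new, incoming data.

```python
# Toy illustration of machine learning's two phases: train on labeled
# data, then score (classify) new data. All names and data are invented.

def train(samples):
    """Learn one centroid (mean feature vector) per class label."""
    grouped = {}
    for label, features in samples:
        grouped.setdefault(label, []).append(features)
    return {
        label: tuple(sum(col) / len(vecs) for col in zip(*vecs))
        for label, vecs in grouped.items()
    }

def score(centroids, features):
    """Classify new, incoming data by the closest learned centroid."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Training: (label, feature vector) pairs.
training_data = [
    ("cat", (1.0, 1.2)), ("cat", (0.8, 1.0)),
    ("dog", (4.0, 3.8)), ("dog", (4.2, 4.0)),
]
model = train(training_data)

# Scoring: classify unseen data points.
print(score(model, (0.9, 1.1)))  # -> cat
print(score(model, (4.1, 3.9)))  # -> dog
```

Real machine learning models are of course far larger, which is why training can take weeks and why the compute improvements discussed below matter.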

Deep learning, a branch of machine learning, is a nascent and fast-growing field. Deep learning uses neural networks to comprehend more complex and unstructured data and is delivering breakthroughs in areas like image recognition, speech recognition, natural language processing and other complex tasks. Deep learning emulates neurons and synapses in the brain, learning through iteration and the formation of complex pathways in a neural network. Many of us already benefit from these algorithms, which are used for the facial recognition/tagging feature on social media, voice recognition on our smartphones, semi-autonomous vehicle control, and many more applications.
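The "learning through iteration" that the paragraph above describes can be illustrated with a single artificial neuron. The sketch below (a minimal logistic-regression-style neuron; the learning rate and iteration count are arbitrary choices of ours) repeatedly adjusts its weights until it has learned the logical AND function:

```python
import math
import random

# A single artificial neuron trained by iterative weight updates
# (gradient descent) to learn logical AND. A deep neural network
# stacks many layers of such neurons to form complex pathways.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # synapse weights
b = 0.0                                             # bias term
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(5000):                # learning through iteration
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        delta = out - target         # prediction error drives the update
        w[0] -= 0.5 * delta * x1
        w[1] -= 0.5 * delta * x2
        b    -= 0.5 * delta

for (x1, x2), target in data:
    pred = round(sigmoid(w[0] * x1 + w[1] * x2 + b))
    print((x1, x2), "->", pred)      # matches the AND truth table
```

Image and speech recognition systems work the same way in principle, just with millions of neurons and vastly more training data.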

Intel Technologies for Machine Learning

To learn and act quickly, machine learning requires tremendous computational capability to run complex mathematical algorithms and process huge amounts of data. Reducing the time to train machine learning models, while also improving how quickly they can score data, requires a paradigm shift to distributed computing on a robust, multi-node cluster infrastructure. Intel offers a consistent programming model and common architecture that can be used across high-performance computing, data analytics and machine learning workloads. Here are some Intel technologies specifically designed for such workloads:
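The distributed approach described above is commonly realized as data-parallel training: each node computes a gradient on its own shard of the data, and the gradients are averaged before each model update. The sketch below simulates that pattern in plain Python (the four "nodes" are just list shards, and the linear model y = w·x is an invented example, not Intel's framework):

```python
# Toy data-parallel training: four simulated nodes each compute a
# gradient on their own data shard; gradients are averaged per step.
# In a real cluster the shard_gradient calls run on separate machines.

def shard_gradient(w, shard):
    """Mean-squared-error gradient for the model y = w * x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

# Full dataset follows y = 2x; split it across four simulated nodes.
data = [(float(x), 2.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    grads = [shard_gradient(w, s) for s in shards]  # parallel in practice
    w -= 0.01 * sum(grads) / len(grads)             # average, then update

print(round(w, 3))  # -> 2.0 (the model recovers the true slope)
```

Splitting the data this way is what lets a multi-node cluster cut time-to-model: each step touches the whole dataset, but no single node has to process all of it.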

The Intel® Xeon Phi™ processor family enables data scientists to train complex machine learning algorithms faster and to run a wider variety of workloads than GPUs. The next-generation Intel® Xeon Phi™ processor includes enhancements for high-performance machine learning training: it delivers mixed-precision performance to reduce deep learning training time and offers high memory bandwidth to increase performance on complex neural networks and data sets.

The Intel® Xeon® processor E5 family is the most widely deployed processor for machine learning inference, with the added flexibility to run a wide variety of data center workloads. Combining it with Altera Arria 10 FPGAs delivers excellent performance per watt and the ability to reconfigure the device to manage various workloads.

The Intel® Scalable System Framework offers comprehensive reference architectures and designs that enhance technology interoperability and reduce deployment complexity, offering a path to broad adoption of distributed deep learning algorithms and significant reduction in time to model.

Nervana Systems*, a recognized leader in deep learning, was acquired by Intel to further advance Intel’s AI portfolio and enhance the deep learning performance and total cost of ownership of Intel Xeon and Intel Xeon Phi processors.

Intel actively works with the open source community and also offers a variety of libraries and APIs to accelerate AI progress and broaden access to powerful tools.

Machine Learning Framework Optimizations

For machine learning, Intel worked with the open source community to optimize industry-standard frameworks, including Caffe* and Theano*, so that customers can tap into the full performance of Intel technologies using their existing infrastructure. Customers using the optimized version of Caffe can now realize up to a 30x performance increase compared to the mainstream version running on Intel architecture [3].

Machine Learning-Optimized Libraries

Intel optimized the widely used Intel® Math Kernel Library (Intel® MKL) for common machine learning primitives, allowing deeper access to optimized code through a standard set of APIs at no cost.
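One of the most important of those primitives is general matrix multiplication (GEMM), which Intel MKL exposes through standard BLAS APIs such as `cblas_sgemm` and which dominates the runtime of neural network training and scoring. The naive pure-Python version below shows what the primitive computes; an optimized library does the same math orders of magnitude faster:

```python
# Matrix multiplication (GEMM) - the core machine learning primitive
# that optimized libraries like Intel MKL accelerate behind standard
# BLAS APIs. This naive version only illustrates the computation.

def matmul(a, b):
    """C = A x B for row-major nested lists of floats."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(a, b))  # -> [[19.0, 22.0], [43.0, 50.0]]
```

Because frameworks such as Caffe spend most of their time in exactly this kind of kernel, routing it through an optimized library is where much of the speedup cited above comes from.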
