Delip Rao explores natural language processing with deep learning, walking you through neural network architectures and NLP tasks and teaching you how to apply these architectures for those tasks.
Read more.

TensorFlow is an increasingly popular tool for deep learning. Dylan Bargteil offers an overview of the TensorFlow graph using its Python API. You'll start with simple machine learning algorithms and move on to implementing neural networks. Along the way, Dylan covers several real-world deep learning applications, including machine vision, text processing, and generative networks.
Read more.

Recurrent neural networks have proven to be very effective at analyzing time series or sequential data, so how can you apply these benefits to your use case? Tom Hanlon demonstrates how to use Deeplearning4j to build recurrent neural networks for time series data.
Read more.

BigDL is a powerful tool for leveraging Hadoop and Spark clusters for deep learning. Rich Ott offers an overview of BigDL’s capabilities through its Python interface, exploring BigDL's components and explaining how to use it to implement machine learning algorithms. You'll use your newfound knowledge to build algorithms that make predictions using real-world datasets.
Read more.

Richard Sargeant and Myles Kirby offer a condensed introduction to key AI and machine learning concepts and techniques, showing you what is (and isn't) possible with these exciting new tools and how they can benefit your organization.
Read more.

Onur Yilmaz walks you through the fundamentals of deep learning—training neural networks and using results to improve performance and capabilities. Once you’ve learned the basics, you'll apply deep learning to finance to make predictions and exploit arbitrage.
Read more.

Amy Unruh walks you through training a machine learning system using popular open source library TensorFlow, starting from conceptual overviews and building all the way up to complex classifiers. Along the way, you'll gain insight into deep learning and how it can be applied to complex problems in science and industry.
Read more.

Purpose, a well-defined problem, and trustworthiness are important factors to any system, especially those that employ AI. Chris Butler leads you through exercises that borrow from the principles of design thinking to help you create more impactful solutions and better team alignment.
Read more.

Bruno Gonçalves explores word2vec and its variations, discussing the main concepts and algorithms behind the neural network architecture used in word2vec and the word2vec reference implementation in TensorFlow. Bruno then presents a bird's-eye view of the emerging field of "anything"-2vec methods that use variations of the word2vec neural network architecture.
Read more.
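The skip-gram idea at the heart of word2vec can be sketched in a few lines of NumPy. This is an illustrative toy (a full softmax over a tiny made-up corpus), not the TensorFlow reference implementation covered in the session; all sizes and the learning rate are assumptions chosen for the example.

```python
import numpy as np

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, window = len(vocab), 8, 2

# (center, context) pairs within a +/-2 word window
pairs = [(idx[corpus[i]], idx[corpus[j]])
         for i in range(len(corpus))
         for j in range(max(0, i - window), min(len(corpus), i + window + 1))
         if i != j]

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, D))   # the embeddings you keep after training
W_out = rng.normal(0, 0.1, (V, D))  # "output" vectors used only in the softmax

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def avg_nll():
    # average cross-entropy of predicting each context word from its center
    return -np.mean([np.log(softmax(W_out @ W_in[c])[o]) for c, o in pairs])

before = avg_nll()
lr = 0.05
for _ in range(100):
    for c, o in pairs:
        h = W_in[c].copy()
        grad = softmax(W_out @ W_in[c])
        grad[o] -= 1.0                      # dL/dscores for cross-entropy
        W_in[c] -= lr * (grad @ W_out)
        W_out -= lr * np.outer(grad, h)
after = avg_nll()                           # loss falls as vectors organize
```

Real implementations replace the full softmax with negative sampling or hierarchical softmax so that training scales to large vocabularies.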

Even as AI technologies move into common use, many enterprise decision makers remain baffled about what the different technologies actually do and how they can be integrated into their businesses. Rather than focusing on the technologies alone, Kristian Hammond provides a practical framework for understanding your role in problem solving and decision making.
Read more.

Greg Werner walks you through using MXNet and TensorFlow to train deep learning models and deploy them using the leading serverless compute services in the market: AWS Lambda, Google Cloud Functions, and Azure Functions. You'll also learn how to monitor and iterate upon trained models for continued success using standard development and operations tools.
Read more.

AI is a powerful tool, but often companies get more excited about their technology than about the customer value they’re creating. Radhika Dutt, Geordie Kaytes, and Nidhi Aggarwal share a framework for building customer-centered AI products. You'll learn how to craft a far-reaching vision and strategy centered around customer needs and balance that vision with the day-to-day needs of your company.
Read more.

Computer vision has led the artificial intelligence renaissance, and pushing it further forward is PyTorch, a flexible framework for training models. Mo Patel and Neejole Patel offer an overview of computer vision fundamentals and walk you through PyTorch code explanations for notable object classification and object detection models.
Read more.

Ion Stoica, Robert Nishihara, and Philipp Moritz lead a deep dive into Ray, a new distributed execution framework for reinforcement learning applications, walking you through Ray's API and system architecture and sharing application examples, including several state-of-the-art RL algorithms.
Read more.

Ignite is happening at AI in New York on Monday, April 30. Join us for a fun, high-energy evening of five-minute talks—all aspiring to live up to the Ignite motto: Enlighten us, but make it quick.
Read more.

Ready, set, network! Meet fellow attendees who are looking to connect at the AI Conference. We'll gather before Tuesday and Wednesday keynotes for an informal speed networking event. Be sure to bring your business cards—and remember to have fun.
Read more.

In this fireside chat, Justin Herz and Fiaz Mohamed discuss how artificial intelligence can improve content discovery and monetization. In collaboration with Intel AI technologies, Warner Bros. is just scratching the surface of what’s possible.
Read more.

Autonomy—consisting of extensive data processing, decision making and execution, and learning from experience—creates the need for a new interaction between humans and AI. Manuela Veloso delves into the roles humans can have in such interactions, as well as the underlying challenges to AI, in particular in terms of collaboration and interpretability.
Read more.

Food production needs to double by 2050 to feed the world’s growing population. Jennifer Marsman details a solution that uses sensors in the soil, aerial imagery from drones, machine learning, and networking research in television white spaces and discusses the AI for Earth grant program, which supports similar work in the areas of clean water, agriculture, biodiversity, and climate change.
Read more.

The Intel AI portfolio includes hardware and software solutions that span use cases and edge-to-cloud implementations, rooted in extensive expertise in data science and research. Fiaz Mohamed explains how Intel AI solves today’s business problems and how Intel’s partner ecosystem is accelerating the adoption of solutions built on Intel technology.
Read more.

For more than 20 years, Amazon has invested in experimenting and deploying AI at scale. Dan Mbanga explores how accelerating AI experimentation has influenced innovations such as Amazon Alexa, Prime Air, and Go and how developers and data scientists from startups to large-scale enterprises have benefited from this innovation.
Read more.

AI will fundamentally change (and power) the way the world works together. So what does the future of AI in the enterprise look like? Faizan Buzdar explains how intelligence is being applied to enterprise content in practical ways that will revolutionize the most important business processes for companies of all sizes and across all industries.
Read more.

Scott Zoldi discusses innovations in explainable AI, such as Reason Reporter, which explains the workings of neural network models used to detect fraudulent payment card transactions in real time, and offers a comparative study with local interpretable model-agnostic explanations (LIME) that demonstrates why the former is better at providing explanations.
Read more.

Zoubin Ghahramani explores the foundations of the field of probabilistic, or Bayesian, machine learning and details current areas of research, including Bayesian deep learning, probabilistic programming, and the Automatic Statistician. Zoubin also explains how Uber organizes AI research and where probabilistic machine learning fits in.
Read more.

Intelligent applications learn from data to provide improved functionality to users. William Benton examines the confluence of two development revolutions: almost every exciting new application today is intelligent, and developers are increasingly deploying their work on container application platforms. Join William to learn how these two revolutions benefit one another.
Read more.

Few organizations have mastered integrating AI technology into their business processes and offerings, and many who want to don’t fully understand the work that lies ahead. David Kiron shares surprising insights about businesses’ appetite for and approach to AI, drawn from global collaborative research conducted by MIT Sloan Management Review and The Boston Consulting Group.
Read more.

Drawing on Affectiva's experience building a multimodal emotion AI that can detect human emotions from face and voice, Taniya Mishra outlines various deep learning approaches for building multimodal emotion detection. Along the way, Taniya explains how to mitigate the challenges of data collection and annotation and how to avoid bias in model training.
Read more.

Ben Vigoda offers an overview of idea learning, a new approach to deep learning that has been funded since 2013 as one of DARPA's largest investments in next-generation machine learning. Ben details the process of teaching machines with ideas instead of labeled data and demonstrates use cases with state-of-the-art performance on applications in unstructured enterprise data.
Read more.

Chatbots are having a moment, and banks across the world are utilizing them for everything from basic customer service to assisting internal IT support. But chatbots only skim the AI landscape. Brian Pearce explains how AI helps Wells Fargo use data in a smarter way, from developing custom experiences to uncovering new insights—with customers and employees at the center of it all.
Read more.

Danny Lange offers an overview of deep reinforcement learning—an exciting new chapter in AI’s history that is changing the way we develop and test learning algorithms that can later be used in real life—and explains how the crossroads between machine learning and gaming offers innovations that are applicable in other fields of technology, such as the robotics and automotive industries.
Read more.

Forecasting the long-term values of time series data is crucial for planning. But how do you make use of a recurrent neural network when you want to compute an accurate long-term forecast? How can you capture short- and long-term seasonality or discover small patterns from the data that generate the big picture? Mustafa Kabul shares a scalable technique addressing these questions.
Read more.

Deep learning has fueled the emergence of many practical applications and experiences. Meanwhile, container technologies have been maturing, allowing organizations to simplify the development and deployment of applications in various environments. Join Wee Hyong and Danielle Dean as they walk you through using the Cognitive Toolkit (CNTK) with Kubernetes clusters.
Read more.

Amazon SageMaker is a fully managed machine learning platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models in the cloud, at any scale. Randall Hunt offers an overview of SageMaker and demonstrates an end-to-end machine learning workflow by building an ML-powered Twitter bot that you can interact with in real time.
Read more.

Large enterprises struggle to apply deep learning and other machine learning technologies successfully because they lack the mindset, processes, or culture for an AI-first world. AI requires a radical shift. Kathryn Hume explores common failure models that hinder enterprise success and shares a framework for building an AI-first enterprise culture.
Read more.

Alex Jaimes explains how the cloud can be used effectively to deploy deep learning and the factors that allow you to do so cost effectively. Along the way, Alex shares examples of when and how to deploy deep learning in the cloud as well as the corresponding benefits, challenges, and opportunities.
Read more.

AI will change—and in some ways already is changing—the way we work, live, and play at a scale the world has never experienced. Join Tatiana Mejia to see how marketers, designers, and creative professionals can gain huge benefits in productivity, content scale, and workflow efficiencies while unleashing expanded career opportunities for workers in these industries.
Read more.

Expensify is using AI to streamline and improve customer service, reducing customer wait time from 15 hours to 3 minutes. David Barrett leads a deep dive into the process of building Concierge, a hybrid machine learning-driven chatbot, covering the challenges faced, results to date, and what he sees for the future of AI and customer service.
Read more.

The stock market is well known to be extremely random, making investment decisions difficult, but deep learning can help. Drawing on a concrete financial use case, Aurélien Géron explains how LSTM networks can be used for forecasting.
Read more.
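Whatever framework ends up running the LSTM, the series must first be framed as supervised (input window, next value) pairs. A minimal NumPy sketch of that step, with an illustrative synthetic series and window length standing in for real price data:

```python
import numpy as np

def make_windows(series, lookback, horizon=1):
    """Frame a 1-D series as (samples, lookback) inputs and horizon-step targets."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback:t + lookback + horizon])
    return np.array(X), np.array(y)

prices = np.sin(np.linspace(0, 10, 50))  # stand-in for a price series
X, y = make_windows(prices, lookback=5)
# X has one row per 5-step window, y the value that follows each window;
# add a trailing feature axis, e.g. X[..., None], before feeding an LSTM layer
```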

Kaarthik Sivashanmugam and Wee Hyong Tok share recommendations to address the common challenges in enabling scalable and efficient distributed DNN training and the lessons learned in building and operating a large-scale training infrastructure.
Read more.

Episource is building a scalable NLP engine to help summarize medical charts and extract medical coding opportunities and their dependencies to recommend the best possible ICD-10 codes. Manas Ranjan Kar offers an overview of the wide variety of deep learning algorithms involved and the complex in-house training-data creation exercises that were required to make it work.
Read more.

Data scientists and machine learning professionals face a quandary of choices when trying to figure out how to scale their data science experiments. Arshak Navruzyan details the landscape of available options and explains how to make best use of the free and open source tools available.
Read more.

Daniel Raskin and Jonathan Greenberg explain what the extreme data economy is about and how machine learning advances along with accelerated parallel computing will play a key role in translating data into instant insight to power business in motion.
Read more.

Activity-based intelligence (ABI) is the art and science of understanding normal patterns of life to enhance the ability of a system to detect anomalous behavior (e.g., to identify cases of credit card fraud). Jamie Irza demonstrates how machine learning can be used to implement ABI for detecting threatening behavior from unmanned aerial systems, commonly known as drones.
Read more.

In manufacturing, software development, and aerospace, tech-op teams need to make critical decisions on the spot with very little information. In this session, presented by Intel Saffron, the speakers share actual use cases of cognitive AI-based applications helping technical professionals make more confident decisions to solve the pressing issues in their day-to-day work.
Read more.

Jan Neumann and Jeanine Heck explain how Comcast uses deep learning to build virtual assistants that allow its customers to contact the company with questions or concerns and how it uses contextual information about customers and systems in a reinforcement learning framework to identify the best actions that answer these customers' questions or resolve their concerns.
Read more.

What is the impact of AI and deep learning on clinical workflows? Enhao Gong and Greg Zaharchuk offer an overview of AI and deep learning technologies invented at Stanford and applied in the clinical neuroimaging workflow at Stanford Hospital, where they have provided faster, safer, cheaper, and smarter medical imaging and treatment decision making.
Read more.

Sameer Wadkar and Nabeel Sarwar explain how to seamlessly integrate model development and model deployment processes to enable rapid turnaround times from model development to model operationalization in high-velocity data streaming environments.
Read more.

Indigenous trackers all over the world can look at a single footprint in the dirt and intuitively know which animal species the print belongs to. Mary Beth Ainsworth explains how biologists, zoologists, and machine learning and computer vision experts have come together to develop, automate, and scale a noninvasive approach to monitoring endangered wildlife by analyzing where animals have walked.
Read more.

Tolga Kurtoglu walks you through the advanced technology needed to implement cyberphysical systems, covering the right hardware to sense the right data, explainable AI, and designing security for trustworthy operations. Along the way, Tolga shares case studies and examples of advanced tech deployments.
Read more.

The adversarial nature of security makes applying machine learning complicated. If attackers can evade signatures and heuristics, what is stopping them from evading ML models? Yacin Nadji evaluates, breaks, and fixes a deployed network-based ML detector that uses graph clustering. While the attacks are specific to graph clustering, the lessons learned apply to all ML systems in security.
Read more.

New technologies make Bayesian inference and generative modeling more accessible to business analysts, but this also creates new communications challenges. Richard Tibbetts shares techniques for capturing domain knowledge and making findings actionable for decision makers by utilizing the explanatory power of transparent AI.
Read more.

Drawing on his experience leading two successful AI companies that implemented machine learning and NLP solutions in over a hundred organizations, Robbie Allen details patterns and characteristics of successful machine learning implementations (and those that predict failure) and explains how to build and cultivate ML talent within your organization in an increasingly competitive job market.
Read more.

Pensieve is a natural language processing (NLP) project that classifies reviews for their sentiment, reason for sentiment, high-level content, and low-level content. Megan Yetman offers an overview of Pensieve as well as ways to improve model reporting and enable continuous model learning and improvement.
Read more.

Entrusted with the financial data of 42 million customers, Intuit is in a unique position to take advantage of AI to solve some of its customers’ biggest financial pains. Ashok Srivastava discusses technology’s role in solving economic problems and details how Intuit is using its unrivaled financial dataset to power prosperity around the world.
Read more.

Analytic techniques leveraging artificial intelligence can result in dramatic improvements in crime detection and interdiction across diverse attack modalities. Will Griffith and Ben MacKenzie share AI models and operational techniques they’ve used with major banking clients to substantially strengthen and accelerate their responses to criminal attacks.
Read more.

AI is about more than automating tasks; it's about augmenting and extending human capabilities. James Guszcza discusses principles of human-computer collaboration, organizes them into a framework, and offers several real-life examples in which human-centered design has been crucial to the economic success of an AI project.
Read more.

Drawing on NVIDIA’s system for detecting anomalies on various NVIDIA platforms, Joshua Patterson and Aaron Sant-Miller explain how to bootstrap a deep learning framework to detect risk and threats in operational production systems, using best-of-breed GPU-accelerated open source tools.
Read more.

Drawing on his experience bringing AI to the public sector, Sumeet Vij offers perspectives on public sector AI trends, dispelling myths around barriers to entry and sharing approaches and opportunities as he highlights examples of successful AI adoptions.
Read more.

Advancements in computer vision are creating new opportunities across business verticals, from programs that help the visually impaired to extracting business insights from socially shared pictures, but the benefits of applied AI in computer vision are only beginning to emerge. Ophir Tanz explores the tools and image technology utilizing AI that you can apply to your business today.
Read more.

Historically, the consumer loan industry has restricted itself to using relatively simple machine learning models and techniques to accept or deny loan applicants. However, more powerful (but also more complicated) methods can significantly improve business outcomes. Sean Kamkar shares a framework for evaluating, explaining, and managing these more complex methods.
Read more.

Deep learning is the driving force behind the current AI revolution and will impact every industry on the planet. However, success requires an AI strategy. Chris Benson walks you through creating a strategy for delivering deep learning into production and explores how deep learning is integrated into a modern enterprise architecture.
Read more.

Fortune 500 companies are building conversational AI in-house to create a competitive edge. Alan Nichol shares a case study of a successful customer acquisition chatbot built by a large corporation and demonstrates how to build a useful, engaging conversational AI bot based entirely on machine learning using Rasa NLU and Rasa Core, the leading open source libraries for building conversational AI.
Read more.

Kayvaun Rowshankish and Alexis Trittipo explore the extent to which firms have addressed the EU's General Data Protection Regulation (GDPR) ahead of its imminent compliance deadline and how they might build further sustainability into their capabilities, especially through use of AI and other innovative technologies.
Read more.

Jake Porway explores AI’s true potential to impact the world in a positive way. Drawing on his experience as the head of DataKind, an organization applying AI for social good, Jake shares best practices, discusses the importance of using human-centered design principles, and addresses ethical concerns and challenges you may face in using AI to tackle complex humanitarian issues.
Read more.

Regardless of industry, every executive is concerned with the same thing: their customers. Omar Tawakol details the building blocks of speech technologies, including natural language processing, automatic speech recognition, and neural networks, that are necessary to implement voice-activated artificial intelligence and, more importantly, enable a customer-centric enterprise.
Read more.

Ready, set, network! Meet fellow attendees who are looking to connect at the AI Conference. We'll gather before Tuesday and Wednesday keynotes for an informal speed networking event. Be sure to bring your business cards—and remember to have fun.
Read more.

Recent results show that machine learning has the potential to significantly alter the way basic data structures and algorithms are implemented and the performance they can provide. Tim Kraska explains the basic intuition behind learned data structures and outlines the potential consequences of this technology for industry.
Read more.

We live in a world of constantly changing business environments across various business units, limited end-to-end visibility, and high alert volumes. Abhijit Deshpande details how to use machine learning to identify root causes of problems in minutes instead of hours or days to free up valuable time by automating routine tasks without scripting or preprogramming.
Read more.

The extraordinary progress in AI over the last few years has been enabled, in part, by modern advancements in computing. Dario Gil explores state-of-the-art computing for AI as it exists today as well as an innovation that will lead us into the decades to come: quantum computing for AI.
Read more.

Everyone's Facebook news feed experience is unique and highly personalized. Meihong Wang explains how Facebook solves the personalization problem with machine learning techniques and offers an overview of its large-scale machine learning system that models every user and delivers them the most relevant content in real time.
Read more.

Thomas Reardon offers an overview of brain-machine interface (BMI) technology and shares CTRL-Labs's transformative and noninvasive neural interface approach. Along the way, he discusses the near-term opportunities for practical applications that will soon revolutionize daily life and the industries they touch.
Read more.

The IARPA MICrONS project aims to revolutionize machine learning by reverse-engineering the algorithms of the brain. George Church offers an overview of this work and explains how his team has accelerated in vitro growth of many brain architectures, which might enable us to build new hybrid bio-opto-electronic artificial computational platforms.
Read more.

Ron Bodkin explains how Google is using AI internally to enhance understanding and experiences for its digital customers and enabling external businesses, such as Spotify and Netflix, to do the same. Along the way, Ron shares examples of deep learning use cases that enable improved recommendations, help companies better understand their customers, and drive engagement in the customer lifecycle.
Read more.

We're all familiar with the highly publicized stories of algorithms displaying overtly biased behavior toward certain groups, but what actually happens behind the scenes, and how can these situations be avoided? Lindsey Zuloaga shares experiences and lessons learned in the hiring space to help others prevent unfair modeling and explains how to establish best practices.
Read more.

Superresolution is a process for obtaining one or more high-resolution images from one or more low-resolution observations. Xiaoyong Zhu shares the latest academic progress in superresolution using deep learning and explains how it can be applied in various industries, including healthcare. Along the way, Xiaoyong demonstrates how the training can be done in a distributed fashion in the cloud.
Read more.
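The problem setup (not the deep learning methods from the talk) can be sketched in NumPy: simulate a low-resolution observation by downsampling, form a trivial upsampled baseline, and note that many superresolution networks learn only the residual between that baseline and the true high-resolution image. The array and sizes here are illustrative.

```python
import numpy as np

def upsample_nn(img, scale):
    """Nearest-neighbor upsampling: the naive baseline an SR model must beat."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

hr = np.arange(16.0).reshape(4, 4)   # stand-in for a high-resolution image
lr = hr[::2, ::2]                    # simulated 2x-downsampled observation
baseline = upsample_nn(lr, 2)        # back to (4, 4), but blocky
residual = hr - baseline             # what a residual SR network would learn
```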

Over the last five years, AI has become more capable thanks to the availability of data, algorithms, and models. Companies are exploring ways to leverage these advances, and soon AI technology will touch every industry worldwide. Dario Gil explores the challenges faced by companies building AI solutions for enterprise applications and areas of research required to drive this field forward.
Read more.

AI and its related subtechnologies are being introduced into operational decision making throughout the enterprise. The most promising and risky experiments involve the way people are selected and utilized, but the use of AI in HR raises the specter of software product liability. John Sumser offers an overview of the available use case solutions and the accompanying ethical issues.
Read more.

Determining abnormal conditions depends on maintaining a useful definition of normal. John Hebeler offers an overview of two deep learning methods to determine normal behavior, which, when combined, further improve performance.
Read more.

AI has already begun to demonstrate its value in large enterprises, even outside of Silicon Valley and other West Coast digital giants. Fortune 500 companies in industries like finance, manufacturing, travel, transportation, and pharmaceuticals have begun to leverage its power. Chad Meley shares insights from real-world client engagements using deep learning.
Read more.

In video games, players learn by failing, sometimes “dying” hundreds of times before learning how to succeed. By enabling us to simulate scenarios and predict outcomes, AI has essentially made the world like a game. Scott Weller explores the role of failure in machine learning, explaining how to set realistic expectations and sharing examples of good and bad AI deployments in the wild.
Read more.

Stephanie Kim discusses the basics of facial recognition and the importance of having diverse datasets when building out a model. Along the way, she explores racial bias in datasets using real-world examples and shares a use case for developing an OpenFace model for a celebrity look-alike app.
Read more.

Join in to explore MLPerf, a common benchmark suite for training and inference on systems ranging from workstations to large-scale servers. In addition to ML metrics like quality and accuracy, MLPerf evaluates metrics such as execution time, power, and cost to run the suite.
Read more.

Tim Kraska explains how fundamental data structures can be enhanced using machine learning with wide-reaching implications even beyond indexes, arguing that all existing index structures can be replaced with other types of models, including deep learning models (i.e., learned indexes).
Read more.
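A minimal sketch of the idea, with a toy least-squares line standing in for the learned models discussed in the session: fit key → position over a sorted array, record the worst-case residuals, and correct each prediction with a binary search confined to that error window. Function names and data are illustrative.

```python
import bisect
import math

def build_learned_index(keys):
    """Least-squares line mapping key -> position, plus worst-case residuals."""
    n = len(keys)
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2
    var = sum((k - mean_k) ** 2 for k in keys)
    slope = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys)) / var
    intercept = mean_p - slope * mean_k
    resid = [i - (slope * k + intercept) for i, k in enumerate(keys)]
    return slope, intercept, min(resid), max(resid)

def lookup(keys, model, key):
    slope, intercept, lo, hi = model
    guess = slope * key + intercept
    # the residual bounds guarantee the true position lies in this window
    left = max(0, math.floor(guess + lo))
    right = min(len(keys), math.floor(guess + hi) + 1)
    i = bisect.bisect_left(keys, key, left, right)
    return i if i < len(keys) and keys[i] == key else -1

keys = sorted([3, 8, 15, 16, 23, 42, 57, 91])
model = build_learned_index(keys)
pos = lookup(keys, model, 23)   # index of 23 in the sorted array
```

Because the search is bounded by the recorded residuals rather than by the fit's quality, lookups stay correct even when the model is a poor predictor; a better model only shrinks the window.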

Everyone's Facebook news feed experience is unique and highly personalized. In this extension of the keynote, Meihong Wang explains how Facebook solves the personalization problem with machine learning techniques and offers an overview of its large-scale machine learning system that models every user and delivers them the most relevant content in real time.
Read more.

Do you have constantly changing business environments across many business units and processes with multiple job schedulers and infrastructure platforms and struggle with end-to-end visibility and a lot of alerts? Award-winning ignio can help. Drawing on real-world examples, Jayanti Murty explains how ignio can reduce operational risks and outages, enabling you to more quickly adapt to change.
Read more.

AI scores points for providing better answers to your company's challenges and for requiring you to get your data house in order. Jana Eggers explains why AI's hat trick is how it can transform your company into a learning organization. Jana reviews the benefits of a learning org and details how to build an AI program that can support you in achieving those benefits.
Read more.

Harsh Kumar explains one way the energy industry is using AI and computer vision for security surveillance: a video analytics solution that can be optimized for the functional safety of workers in the loading and unloading zone of an oil and gas offshore rig.
Read more.

What are the latest initiatives and use cases around data and AI within different corporations and industries? How are data and AI reshaping different industries? What are some of the challenges of implementing AI within the enterprise setting? Michael Li moderates a panel of experts in different industries—including Lori Bieda, Saar Golde, Sassoon Kosian, and Len Usvyat—to answer these questions.
Read more.

As machine learning algorithms and artificial intelligence continue to progress, we must take advantage of the best techniques from various disciplines. Funda Gunes demonstrates how combining well-proven methods from classical statistics can enhance modern deep learning methods in terms of both predictive performance and interpretability.
Read more.

Over the last year, Steve Rennie and his colleagues have significantly advanced the state of the art in performance on two flagship challenges in AI: the Switchboard Evaluation Benchmark for Automatic Speech Recognition and the MSCOCO Image Captioning Challenge. Steve shares the innovations in deep learning research that have most advanced performance on these and other benchmark AI tasks.
Read more.

New technologies have the potential to revolutionize the aviation industry. Airports in particular are perfect candidates for AI and machine learning concepts. Carolina Sanchez Hernandez discusses how National Air Traffic Services (NATS) is collaborating with several companies and institutes to change the way that data is captured and processed to transform airport operations.
Read more.

There are many enterprise AI use cases for automation and operational decision making, but when it comes to strategic decision making—especially for new products or when entering new markets—there are very few successful use cases. Anand Rao presents four successful use cases on gamifying strategy and applying agent-based simulation in the auto, payments, medical devices, and airlines industries.
Read more.

With all the buzz around machine learning, it can be difficult to distinguish what is disruptive from what is merely a marginal improvement. Rachel Silver shares a new taxonomy of machine learning approaches that categorizes both models and learning algorithms with respect to technical complexity and explains how to use it to identify approaches that provide compelling competitive advantage.
Read more.

TensorFlow Lite—TensorFlow’s lightweight solution for Android, iOS, and embedded devices—enables on-device machine learning inference with low latency and a small binary size. Kazunori Sato walks you through using TensorFlow Lite, helping you overcome the challenges to bring the latest AI technology to production mobile apps and embedded systems.
Read more.

Is your enterprise striving to build AI applications that produce transformative business value? Stephen Piron shares real-world examples of AI applications that are evolving the way enterprises work from the ground up as well as a framework for enterprise leaders to use to ensure their team’s AI initiatives lay the foundation for genuine business impact.
Read more.

In the last few years, RNNs have achieved significant success in modeling time series and sequence data, in particular within the speech, language, and text domains. Recently, these techniques have begun to be applied to session-based recommendation tasks, with very promising results. Nick Pentreath explores the latest research advances in this domain, as well as practical applications.
Read more.
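The abstract doesn't include Pentreath's code; as a purely illustrative toy (a randomly initialized one-layer Elman RNN scoring the next item in a click session, not a trained, GRU4Rec-style model), the forward pass of a session-based recommender might be sketched like this:

```python
import math, random

random.seed(0)

N_ITEMS, HIDDEN = 5, 4  # toy catalogue of 5 items

# Randomly initialised weights; a real system would learn these from sessions.
W_in  = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(N_ITEMS)]
W_rec = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(HIDDEN)]
W_out = [[random.gauss(0, 0.1) for _ in range(N_ITEMS)] for _ in range(HIDDEN)]

def next_item_scores(session):
    """Run the RNN over the clicked-item ids in `session` and return
    softmax scores over the catalogue for the likely next click."""
    h = [0.0] * HIDDEN
    for item in session:
        h = [math.tanh(W_in[item][k] +
                       sum(W_rec[j][k] * h[j] for j in range(HIDDEN)))
             for k in range(HIDDEN)]
    logits = [sum(W_out[j][k] * h[j] for j in range(HIDDEN))
              for k in range(N_ITEMS)]
    z = max(logits)
    exps = [math.exp(l - z) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

scores = next_item_scores([0, 2, 1])  # a session of three clicks
```

In production, GRU or LSTM cells and ranking-oriented losses are typically used instead of this minimal tanh cell.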

In a world of information overload and manipulation, knowledge acquisition techniques are expected to provide instant, precise, and succinct answers. Question-answering (QnA) systems must serve answers with high accuracy and be backed by strong verification techniques. Mridu Narang offers an overview of the challenges of and approaches taken by large-scale QnA systems.
Read more.

Yulia Tell and Maurice Nsabimana walk you through getting started with BigDL and explain how to write a deep learning application that leverages Spark to train image recognition models at scale. Along the way, Yulia and Maurice detail a collaborative project to design and train large-scale deep learning models using crowdsourced images from around the world.
Read more.

Brian Ray unveils the secrets behind the execution of Deloitte's framework for AI summarized in "Artificial Intelligence for the Real World," recently published in the January–February 2018 issue of Harvard Business Review. Join in to learn how to go from data to delivering real and measurable predictive value.
Read more.

Great AI products are more than technology; they are built on a clear (computationally tractable) model of customer success. Getting that model right can be more challenging than building the AI models themselves, and getting it wrong is very expensive. Shane Lewin outlines common pitfalls in defining AI products and explains how to organize teams to solve them.
Read more.

Chatbots are expected to make machine communication feel human, but high-quality bot experiences are very hard to build. Ofer Ronen explores the challenges in optimizing chatbots and shares ways for developers to address them quickly and efficiently.
Read more.

Sid Reddy shares Conversica's artificial intelligence approach to creating, deploying, and continuously improving an automated sales assistant that engages in a genuinely human conversation at scale with every one of an organization’s sales leads.
Read more.

Although AI technology seems to be everywhere, implementing AI in practice is a real challenge. The technology needs to be scalable, trusted by the humans that use it, and easily accessible for those with limited AI expertise. Nicole Eagan shares the unique insights on building practical and successful AI applications Darktrace has gained from its 4,000+ deployments.
Read more.

Predicting the target label for computer vision machine learning problems is not enough. You must also understand the why, what, and how of the categorization process. Pramit Choudhary offers an overview of ways to faithfully interpret and evaluate deep neural network models, including CNN image models, to understand the impact of salient features in driving categorization.
Read more.
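The abstract doesn't specify Choudhary's techniques; one common, illustrative approach to CNN interpretation is occlusion sensitivity, which masks patches of the input and measures how much the model's score drops. A minimal sketch (with a hypothetical toy "model" standing in for a CNN):

```python
def occlusion_map(image, predict, patch=2, fill=0.0):
    """Occlusion sensitivity: for each patch, mask it out and record
    how much `predict`'s score drops. Large drops mark salient regions."""
    h, w = len(image), len(image[0])
    base = predict(image)
    heat = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = [row[:] for row in image]  # copy, then zero one patch
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    masked[di][dj] = fill
            drop = base - predict(masked)
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    heat[di][dj] = drop
    return heat

# Toy "model": score is the mean of the top-left 2x2 corner, so
# occluding that corner produces the largest drop in the heat map.
img = [[1.0, 1.0, 0.0, 0.0],
       [1.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
score = lambda im: (im[0][0] + im[0][1] + im[1][0] + im[1][1]) / 4.0
heat = occlusion_map(img, score)
```

With a real CNN, `predict` would return the probability of the class of interest, and the heat map highlights the pixels driving that prediction.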

Across the globe, people are voicing their opinion online. However, sentiment analysis is challenging for many of the world's languages, particularly with limited training data. Gerard de Melo demonstrates how to exploit large amounts of surrogate data to learn advanced word representations that are custom-tailored for sentiment and shares a special deep neural architecture to use them.
Read more.

Mike Ranzinger shares his research on composition-aware search and explains how the research led to the launch of AI technology that allows Shutterstock’s users to more precisely find the image they need within the company's collection of more than 150 million images.
Read more.

Expanding his keynote, Thomas Reardon offers an overview of brain-machine interface (BMI) technology and shares CTRL-Labs's transformative and noninvasive neural interface approach. Along the way, he discusses the near-term opportunities for practical applications that will soon revolutionize daily life and the industries they touch.
Read more.

One of the most important aspects of deep learning is the quality and quantity of the data used in the learning process. Moses Guttmann explores the problem and offers approaches to solve it.
Read more.

Recent advances have made machines more autonomous, but much work remains for AI to collaborate with people. Emily Pavlini and Max Kleiman-Weiner share new insights inspired by the way humans accumulate knowledge and naturally work together that enable machines and people to work and learn as a team, discovering new knowledge in unstructured natural language content together.
Read more.

The AIY Projects kits bring Google's machine learning algorithms to developers with limited experience in the field, allowing them to prototype machine learning applications and smart hardware more easily. Alasdair Allan explains how to set up and build the kits and how to use the Python SDK to run machine learning both in the cloud and locally on the Raspberry Pi.
Read more.

The rise of AI has shown the importance of embedding the basic principles of democracy, human rights, and the rule of law into the innovation process and into AI systems by design and by default. Paul Nemitz outlines justice-oriented AI development processes and shares a model for the globally sustainable development and deployment of artificial intelligence in the future.
Read more.

Kathleen Kallot and Augustin Marty explain how Intel Movidius technology is reducing the complexity of developing custom circuit boards and allowing developers and companies to prototype AI applications with the Intel Movidius Neural Compute Stick. They also demonstrate how the newly announced Intel AI: In Production program makes it easier to bring these designs to market.
Read more.

DoorDash is a last-mile delivery platform, and its logistics engine powers fulfillment of every delivery on its three-sided marketplace of consumers, Dashers, and merchants. Raghav Ramesh highlights AI techniques used by DoorDash to enhance efficiency and quality in its marketplace and provides a framework for how AI can augment core operations research problems like the vehicle routing problem.
Read more.
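DoorDash's actual routing engine isn't described here; as a minimal, hypothetical sketch of the vehicle routing problem the abstract mentions, a greedy nearest-neighbor heuristic for a single vehicle repeatedly drives to the closest unvisited stop:

```python
from math import hypot

def nearest_neighbor_route(depot, stops):
    """Greedy nearest-neighbor heuristic for a one-vehicle routing
    problem: starting at `depot`, always visit the closest unvisited stop."""
    route, current, remaining = [], depot, list(stops)
    while remaining:
        nxt = min(remaining,
                  key=lambda p: hypot(p[0] - current[0], p[1] - current[1]))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

route = nearest_neighbor_route((0, 0), [(5, 5), (1, 0), (2, 1)])
# → [(1, 0), (2, 1), (5, 5)]
```

Heuristics like this give a quick baseline; industrial systems layer ML-driven predictions (travel times, prep times) and stronger optimization on top.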

The achievement of human-level accuracy in image classification through the use of modern AI algorithms has renewed interest in its application to automated protein crystallization imaging. Christopher Watkins explores the development of the deep tech pipeline required for the robust operation of an online classification system in CSIRO's GPU cluster and shares lessons learned along the way.
Read more.

The road to real-world AI is long and winding. Much of what we've heard from reputable experts has turned out to be true, including the need for better data, a new UX, and new ways of learning. To help you along the way, Rupert Steffner highlights lessons learned implementing cognitive AI applications to help consumers find the products they love.
Read more.

As medical devices improve, doctors gain access to an enormous amount of unharnessed medical data. Artificial intelligence can process this data to solve problems that are hard or impossible for a doctor alone. Neeyanth explains how he used these AI tools to help the field of diagnostics enter the digital age.
Read more.

Conversation is emerging as the next great human-machine interface. Ian Beaver and Cynthia Freeman outline the challenges the AI industry faces in getting machines to relate to humans the way humans relate to each other, highlight findings from a recent study on the relational strategies humans use in conversation, and explain how virtual assistants must evolve to communicate effectively.
Read more.

Recommender systems suffer from concept drift and scarcity of informative ratings. Jorge Silva explains how SAS uses a Bayesian approach to tackle both problems by making the learning process online and active. Active learning prioritizes the most informative users and items by quantifying uncertainty in a principled, probabilistic framework.
Read more.