TensorFlow is an increasingly popular tool for deep learning. We will introduce the TensorFlow graph, using its Python API, and demonstrate its use. Starting with simple machine-learning algorithms, we will move on to implementing neural networks. Several real-world deep-learning applications will be discussed, including machine vision, text processing, and generative networks.
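The graph abstraction at the heart of TensorFlow can be previewed with a few lines of plain Python. The sketch below is a toy dataflow graph (the class and op names are invented for illustration, not TensorFlow's API): operations become nodes, and evaluation walks the graph lazily.

```python
# Toy dataflow-graph sketch in plain Python (not TensorFlow's API):
# nodes hold an operation plus inputs, and evaluation is deferred until
# eval() walks the graph -- the core idea behind TensorFlow's graph model.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def eval(self):
        if self.op == "const":
            return self.value
        args = [n.eval() for n in self.inputs]   # evaluate dependencies first
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError(f"unknown op {self.op!r}")

# Build the graph y = (a * b) + c, then run it.
a = Node("const", value=2.0)
b = Node("const", value=3.0)
c = Node("const", value=1.0)
y = Node("add", inputs=(Node("mul", inputs=(a, b)), c))
print(y.eval())  # 7.0
```

Building the graph and running it are separate steps here, which is exactly the separation the session's TensorFlow material builds on.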

Delip Rao explores natural language processing with deep learning, walking you through neural network architectures and NLP tasks and teaching you how to apply these architectures for those tasks.

Recurrent neural networks have proven to be very effective at analyzing time series or sequential data, so how can you apply these benefits to your use case? Josh Patterson, Susan Eraly, Dave Kale, and Tom Hanlon demonstrate how to use Deeplearning4j to build recurrent neural networks for time series data.
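The recurrent update that makes such networks effective on sequential data can be sketched framework-agnostically in NumPy (a toy illustration, not Deeplearning4j code): the hidden state carries information forward through time via h_t = tanh(Wx·x_t + Wh·h_{t-1} + b).

```python
import numpy as np

# Framework-agnostic sketch of the recurrent update a library like
# Deeplearning4j implements internally: h_t = tanh(Wx @ x_t + Wh @ h_prev + b).
rng = np.random.default_rng(0)
n_in, n_hidden, seq_len = 3, 4, 5

Wx = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input-to-hidden
Wh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden-to-hidden
b = np.zeros(n_hidden)

xs = rng.normal(size=(seq_len, n_in))  # one toy time series, 5 steps
h = np.zeros(n_hidden)
for x_t in xs:                         # unroll the recurrence over time
    h = np.tanh(Wx @ x_t + Wh @ h + b)

print(h.shape)  # final hidden state summarizing the whole sequence
```

The final `h` is a fixed-size summary of a variable-length sequence, which is what makes recurrent networks a natural fit for time series data.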

BigDL is a powerful tool for leveraging Hadoop and Spark clusters for deep learning. This workshop will introduce participants to BigDL's capabilities through its Python interface. The training will first introduce students to the components of BigDL and then move on to implementing machine learning algorithms, with a focus on neural networks.

In this session, you will learn how to train a machine-learning system using TensorFlow, a popular open source ML library. Starting from conceptual overviews, we will build all the way up to complex classifiers. You’ll gain insight into deep learning and how it can apply to complex problems in science and industry.
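The progression from conceptual overview to a working classifier can be previewed with a minimal example: logistic regression trained by batch gradient descent, sketched here in NumPy rather than TensorFlow, on toy synthetic data.

```python
import numpy as np

# Logistic regression trained by batch gradient descent on toy synthetic
# data: the kind of "simple classifier" such a session builds up from,
# sketched in NumPy rather than TensorFlow.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log loss
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = float(np.mean((p > 0.5) == y))
print(f"training accuracy: {acc:.2f}")
```

A deep learning framework automates exactly the two steps written out by hand here: computing the gradient and applying the update.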

Purpose, a well-defined problem, and trust from people are important factors for any system, especially those that use AI. Borrowing from the principles of design thinking, we have created exercises that lead to more impactful solutions and better team alignment. Attendees get first-hand experience running them during the session so they can take them back to their teams later.

Bruno Gonçalves explores word2vec and its variations, discussing the main concepts and algorithms behind the neural network architecture used in word2vec and the word2vec reference implementation in TensorFlow. Bruno then presents a bird's-eye view of the emerging field of "anything"-2vec methods that use variations of the word2vec neural network architecture.
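The core training step behind word2vec, skip-gram with negative sampling, can be sketched in a few lines of NumPy (a toy illustration with invented sizes, not the TensorFlow reference implementation discussed in the talk):

```python
import numpy as np

# Skip-gram with negative sampling, one update at a time: pull the
# (center, context) pair together, push sampled negatives apart.
rng = np.random.default_rng(0)
vocab, dim, lr = 10, 8, 0.1
W_in = rng.normal(scale=0.1, size=(vocab, dim))    # "input" word vectors
W_out = rng.normal(scale=0.1, size=(vocab, dim))   # "output" word vectors

def sgns_step(center, context, negatives):
    v = W_in[center].copy()
    grad_v = np.zeros(dim)
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[word].copy()
        p = 1.0 / (1.0 + np.exp(-(u @ v)))  # sigmoid co-occurrence score
        g = lr * (label - p)                # gradient of the log-sigmoid loss
        grad_v += g * u                     # accumulate update for v
        W_out[word] += g * v
    W_in[center] += grad_v

before = W_out[3] @ W_in[0]
for _ in range(50):
    sgns_step(center=0, context=3, negatives=[7, 8])
after = W_out[3] @ W_in[0]
print(after > before)  # the trained (center, context) pair now scores higher
```

The "anything"-2vec methods mentioned above keep this training loop and simply redefine what counts as a "word" and a "context".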

This workshop provides a practical framework for understanding their role in problem-solving and decision-making, focusing on how they can be used, the requirements for doing so, and the expectations for their effectiveness.

This workshop will focus on how to use MXNet and TensorFlow to train deep learning models and deploy them using the leading serverless compute services in the market: AWS Lambda, Google Cloud Functions and Azure Functions. The workshop will also touch on how to monitor and iterate upon trained models for continued success using standard development and operations tools.

AI is a powerful tool, but often companies get more excited about their technology than in the customer value they’re creating. In this talk, we introduce a practical framework for building customer-centered AI products. You will learn how to craft and communicate a far-reaching vision and strategy centered around customer needs, and balance that vision with the day-to-day needs of your company.

Computer vision has led the renaissance of artificial intelligence, and pushing it forward is PyTorch, a flexible framework for training models. In this tutorial, we will cover computer vision fundamentals and walk through PyTorch code for notable object classification and object detection models.

Ion Stoica, Robert Nishihara, and Philipp Moritz lead a deep dive into Ray, a new distributed execution framework for reinforcement learning applications, walking you through Ray's API and system architecture and sharing application examples, including several state-of-the-art RL algorithms.

Artificial intelligence is gearing up to be the most important technological advancement since the internet, yet to many people, it still feels like companies are years away from real technological breakthroughs, especially in the enterprise. How are companies applying the technology today, and what does the future of AI in the enterprise look like?

Reason Reporter explains the workings of neural network models used to detect fraudulent payment card transactions in real time. A comparative study with Local Interpretable Model-agnostic Explanations (LIME) shows why the former is better at providing explanations. A novel patent-pending architecture called LENNS will be discussed that exposes more of what’s driving the score.

Intelligent applications learn from data to provide improved functionality to users. This talk will examine the confluence of two development revolutions: (1) Almost every exciting new application today is intelligent, and (2) developers are increasingly deploying their work on container application platforms. Learn how these two revolutions benefit one another!

While expectations for AI are sky-high across industry and geography, few organizations have mastered integrating the technology into their business processes and offerings — and many who want to don’t fully understand the work that lies ahead.

Affectiva is building multi-modal Emotion AI that can detect human emotions from face and voice. In this presentation Dr. Taniya Mishra will talk about how to build multi-modal emotion detection using various deep learning approaches, how to mitigate the challenges of data collection and annotation, and how to avoid bias in model training. She will also cover use cases.

AI delivers value to many facets of the automotive value chain, e.g., in smart manufacturing, supply chain management, and customer engagement. Learn how to assess AI technologies, validate use cases, and foster fast adoption. This talk will cover our experiences and best practices from developing computer vision and natural language understanding applications.

Chatbots are having a moment, with banks across the world utilizing them for everything from basic customer service to assisting internal IT support. But chatbots only skim the AI landscape. AI can help us use data in a smarter way, from developing custom experiences to uncovering new insights, with customers and employees at the center of it all.

Forecasting the long-term values of a time series is crucial for planning. How do you make use of a recurrent neural network when you want to compute an accurate long-term forecast? How can you capture short- and long-term seasonality? Can you learn small patterns from the data that generate the big picture? This session will provide a scalable technique addressing these questions.
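Before reaching for a recurrent network, it helps to have a seasonality-aware baseline in mind. A common one is the seasonal-naive forecast, which simply repeats the last full season (a toy NumPy sketch, not the session's technique):

```python
import numpy as np

# Seasonal-naive baseline: repeat the last observed season forward.
# Any RNN-based long-term forecaster should at least beat this simple
# benchmark, which already captures a fixed seasonal pattern.
period = 7                                     # e.g., weekly seasonality
t = np.arange(6 * period)
series = 10 + np.sin(2 * np.pi * t / period)   # toy perfectly seasonal series

horizon = 14
last_season = series[-period:]
forecast = np.tile(last_season, horizon // period)

future_t = np.arange(len(t), len(t) + horizon)
actual = 10 + np.sin(2 * np.pi * future_t / period)
mae = float(np.mean(np.abs(forecast - actual)))
print(f"MAE of seasonal-naive forecast: {mae:.6f}")
```

On this purely seasonal toy series the baseline is exact; real series add trend, noise, and interacting seasonalities, which is where the learned models earn their keep.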

Deep Learning has fueled the emergence of many practical applications and experiences. Meanwhile, container technologies have been maturing. Containers are allowing organizations to simplify the development and deployment of applications in various environments. Join Wee Hyong and Danielle Dean as they share with you how to use the Cognitive Toolkit (CNTK) with Kubernetes clusters.

While AI research is making breathtaking advances, large enterprises still struggle to apply deep learning and other machine learning technologies successfully because they lack the mindset, processes, or culture for an AI-first world. AI requires a radical shift. This talk surveys common failure modes that hinder enterprise success and provides a framework for building an AI-first enterprise culture.

We will repurpose the NASA FDL Lunar Crater Identification solution as a NIPS showcase of using AI in space resource exploration, highlighting our collaboration with the NASA FDL team. We will also provide do-it-yourself instructions and guides so that people can join in on the excitement and recreate their very own lunar crater detector at home.

There are major challenges when combining cutting-edge AI with real-world, practical applications for traditional industries like insurance, finance or agriculture. This session dives into the lessons learned from building practical and scalable enterprise AI solutions for insurance, finance, and agriculture.

Entrusted with the financial data of 42 million customers, Intuit is in a unique position to take advantage of AI to solve some of their customers’ biggest financial pains. Join this session with Intuit Chief Data Officer Ashok Srivastava to learn more about technology’s role in solving economic problems, and how Intuit is using their financial data set to power prosperity around the world.

In this session we will share our recommendations to address the common challenges in enabling scalable and efficient distributed DNN training and the lessons learned in building and operating a large-scale training infrastructure.

At Episource, we work on building deep learning frameworks and architectures to help summarize a medical chart, extract medical coding opportunities and their dependencies, and recommend the best possible ICD-10 codes. This required not only building a wide variety of deep learning algorithms to account for natural language variations but also fairly complex in-house training data creation exercises.

Data Scientists and Machine Learning professionals today face a quandary of choices when trying to figure out how to scale their data science experiments. This presentation will give attendees the information they need to better understand the landscape of options available to them and how to make best use of the free and open source tools available.

We will describe how we use deep learning to build virtual assistants that allow our customers to contact us with questions or concerns, and how we use contextual information about our customers and systems in a reinforcement learning framework to identify the best actions that answer our customer's questions or resolve their concerns.

Financial Econometric models are usually handcrafted using a combination of statistical methods, stochastic calculus and dynamic programming techniques. In this talk we will discuss how recent advancements in AI can help simplify financial model building by carefully replacing complex mathematics with a data driven incremental learning approach.

We propose processes and systems to seamlessly integrate model development and model deployment processes to enable rapid turnaround times from model development to model operationalization in high velocity data streaming environments.

Tolga will walk through the advanced technology that is needed to bring cyberphysical systems to reality today: the right hardware to sense the right data, explainable AI, and designing security for trustworthy operations. He will walk the audience through a few case studies and advanced tech deployments to illustrate his points.

The adversarial nature of security makes applying machine learning complicated. If attackers can evade signatures and heuristics, what is stopping them from evading ML models? In this talk, we evaluate, break and fix a deployed network-based ML detector that uses graph clustering. While the attacks are specific to graph clustering, the lessons learned apply to all ML systems in security.
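The kind of evasion at issue can be illustrated with a toy detector (a hypothetical sketch, not the deployed system discussed in the talk): if clusters are the connected components of a graph, a single attacker-injected edge can merge a malicious cluster into benign traffic.

```python
# Toy illustration of evading a graph-clustering detector: one
# attacker-controlled edge merges the malicious cluster into benign
# traffic. (Hypothetical sketch; node names are invented.)

def components(nodes, edges):
    """Cluster a graph into connected components via union-find."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    for a, b in edges:
        parent[find(a)] = find(b)
    clusters = {}
    for n in nodes:
        clusters.setdefault(find(n), set()).add(n)
    return list(clusters.values())

nodes = ["bot1", "bot2", "c2", "mail", "web"]
edges = [("bot1", "c2"), ("bot2", "c2"), ("mail", "web")]
print(len(components(nodes, edges)))      # 2 clusters: malicious vs benign
evasion = edges + [("c2", "web")]         # attacker injects one edge
print(len(components(nodes, evasion)))    # 1 cluster: the signal is gone
```

Hardening a real detector means making such merges costly for the attacker, which is the spirit of the "break and fix" exercise the talk describes.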

We cover a general application of Deep Learning to a time series problem for a patient's history.
The goal was to impute a medical condition based on a multi-year history of prescriptions filled by an individual.
We benchmarked Deep Learning against traditional Machine Learning methods and found it more accurate with no feature engineering.
We describe a Keras implementation.
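The Keras model itself is not reproduced here, but the data preparation such an approach needs can be sketched: encoding a multi-year prescription history as a fixed-shape sequence tensor. All names and bin sizes below are hypothetical.

```python
import numpy as np

# Hypothetical data-prep sketch: turn a patient's prescription history
# into a (time_bins, n_drug_classes) multi-hot matrix, the kind of input
# a recurrent model over patient history would consume.
drug_index = {"statin": 0, "insulin": 1, "ace_inhibitor": 2}
n_bins = 6  # six half-year bins covering a 3-year history

history = [  # (months_ago, drug_class) pairs from a toy claims feed
    (2, "statin"), (8, "statin"), (9, "insulin"), (30, "ace_inhibitor"),
]

X = np.zeros((n_bins, len(drug_index)))
for months_ago, drug in history:
    t = min(months_ago // 6, n_bins - 1)   # bucket into half-year bins
    X[t, drug_index[drug]] = 1.0

print(X.shape)        # (6, 3): ready to feed a sequence model
print(int(X.sum()))   # 4 fills recorded
```

Because the sequence model consumes the raw binned history directly, no hand-crafted features are needed, which matches the no-feature-engineering finding above.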

From determining the most convenient rider pickup points to predicting the fastest routes, Uber uses data-driven machine learning to create seamless trip experiences.
This talk is about how Uber tackles data caching in large-scale machine learning. We will talk about Uber's machine learning architecture, how Uber uses big data to power machine learning, and how to use data caching to speed up AI jobs.

Drawing on NVIDIA’s system for detecting anomalies on various NVIDIA platforms, Joshua Patterson and Michael Balint will explain how to bootstrap a deep learning framework to detect risk and threats in operational production systems, using best-of-breed GPU-accelerated open source tools.

Advancements in computer vision are creating new opportunities across business verticals, from programs that help the visually impaired to extracting business insights from socially shared pictures. The benefits of applied AI in computer vision are only beginning to emerge. This session will explain the tools and image technology utilizing AI that you can apply to your business today.

This talk will describe how to evaluate and manage the complexity of machine learning to meet regulatory requirements for consumer loans. This will be an interesting and generally applicable subject to anyone working in a regulated industry or high-risk field, as it relates a seemingly abstract machine learning concept (explainability) directly to a critical business requirement.
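One concrete way explainability meets a business requirement is generating per-applicant reason codes from a scorecard-style linear model. The sketch below uses invented feature names, weights, and values, not any real lending model.

```python
import numpy as np

# Hypothetical sketch: for a linear credit model, per-applicant "reason
# codes" can be read off as each feature's contribution to the score,
# the kind of explanation regulators expect for consumer loans.
features = ["utilization", "delinquencies", "age_of_file"]
weights = np.array([-2.0, -1.5, 0.8])      # toy scorecard weights
applicant = np.array([0.9, 2.0, 0.3])      # toy applicant values

contributions = weights * applicant
score = float(contributions.sum())
reasons = sorted(zip(features, contributions), key=lambda kv: kv[1])

print(f"score = {score:.2f}")
for name, c in reasons[:2]:                # two most adverse drivers
    print(f"adverse factor: {name} ({c:+.2f})")
```

More complex models need more machinery (surrogates, attribution methods) to produce the same kind of output, which is exactly the complexity-versus-explainability trade-off the talk addresses.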

Deep learning is AI that works today - the driving force behind the current AI revolution. Deep learning will impact every industry on the planet, and there will be countless opportunities to take advantage of it. Success requires an AI strategy. We will address strategy for delivering deep learning into production, and explore how deep learning is integrated into a modern enterprise architecture.

In large multi-national companies, you inevitably have to deal with large, complex distributed systems and data. You may have offerings that are marketed in the EU (customer data) as well as work-forces based in European countries (employee data). In complex systems, meeting the GDPR regulations may require the creation of a new distributed architecture to handle GDPR requests.

DataKind Founder Jake Porway will help shed light on AI’s true potential to impact the world in a positive way. As the head of an organization applying AI for social good, Jake will share best practices, discuss the importance of using human-centered design principles, and address ethical concerns and challenges one may face in using AI to tackle complex humanitarian issues.

AI innovation in Deep Learning has moved from labs to large-scale deployments at Google. Hear techniques and lessons from Google, Spotify, Netflix, Evernote, and eBay. Learn how to apply AI for personalized recommendations, intelligent bot support, and enhancing the digital experience. Hear how to overcome common pitfalls in dealing with data, automation, and experimentation.

We are all familiar with the highly-publicized stories of algorithms displaying overtly biased behavior towards certain groups. What is actually happening behind the scenes and how can these situations be avoided? In this session, Lindsey Zuloaga (HireVue) will share experiences and lessons learned in the hiring space to help others avoid unfair modeling and work to establish best practices.

In video games, players learn by failing. They might “die” hundreds of times before learning how to succeed. By enabling us to simulate scenarios and predict outcomes, AI has essentially made the world like a game that we can play with, yet we expect immediate success. Is this realistic?
Technologist Scott Weller explores the role of failure in machine learning using real-world examples.

This talk will cover the basics of facial recognition and the importance of having diverse datasets when building out a model. We’ll explore racial bias in datasets using real-world examples and cover a use case for developing an OpenFace model for a celebrity look-alike app.

In this talk, we will demonstrate the latest academic progress in the super-resolution field using deep learning, especially GANs, and show how it can be applied in various industries, such as the medical area. We will also showcase how training can be done in a distributed fashion in the cloud, offering guidance to researchers and data scientists working in this empirical field.

AI scores points for providing better answers to your company's challenges. It also gets points for requiring you to get your data house in order. I think AI's hat trick is how it can transform your company into a learning organization. I'll review the benefits of a learning org, as well as the key aspects, then show you how to build an AI program that can support you in achieving those benefits.

As machine learning algorithms and artificial intelligence continue to progress, we must take advantage of the best techniques from various disciplines. In this presentation, I will show how combining well-proven methods from classical statistics can enhance modern deep learning methods in terms of both predictive performance and interpretability.
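One common hybrid pattern, offered here as a sketch rather than the talk's specific method, is to fit an interpretable classical model first and let a flexible second stage explain only the residuals. Below, a quadratic term stands in for the flexible model.

```python
import numpy as np

# Two-stage hybrid: an interpretable linear fit first, then a flexible
# model on the residuals only. Here the "flexible" stage is a quadratic
# feature standing in for a neural network.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=300)
y = 1.5 * x + 0.8 * x**2 + rng.normal(scale=0.1, size=300)

# Stage 1: classical linear regression (interpretable coefficient on x).
A1 = np.column_stack([x, np.ones_like(x)])
coef1, *_ = np.linalg.lstsq(A1, y, rcond=None)
resid = y - A1 @ coef1

# Stage 2: model only the structure the linear fit missed.
A2 = np.column_stack([x**2, np.ones_like(x)])
coef2, *_ = np.linalg.lstsq(A2, resid, rcond=None)

print(np.var(resid), np.var(resid - A2 @ coef2))  # residual variance drops
```

The first stage keeps an interpretable story about the main effect while the second stage absorbs the nonlinearity, illustrating the performance-plus-interpretability combination described above.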

The aviation industry is awakening to new technologies that can enhance the way the industry operates and evolves. Airports in particular are perfect test beds for AI and machine learning concepts. The recent partnership between NATS in the UK and Searidge Technologies in Canada aims to revolutionise the way data is captured and processed using machine learning and AI within the airport.

TensorFlow Lite is TensorFlow’s lightweight solution for Android, iOS, and embedded devices. It enables on-device machine learning inference with low latency and a small binary size. In this session, we will discuss how developers can use TensorFlow Lite to overcome the challenges of bringing the latest AI technology to production mobile apps and embedded systems.

Learn from an entrepreneur behind two successful AI companies who implemented machine learning and NLP solutions in over a hundred organizations. Find out the factors common to successful machine learning implementations and which factors predict failure. Finally, you will learn how to build and cultivate ML talent within your organization in an increasingly competitive job market.

In the last few years, RNNs have achieved significant success in modeling time-series and sequence data, in particular within the speech, language and text domains.
Recently, these techniques have begun to be applied to session-based recommendation tasks, with very promising results. In this talk, I explore the newest research advances in this domain, as well as practical applications.

In a world of information overload and information manipulation, users expect knowledge acquisition techniques to provide instant, precise, and succinct answers. Question answering systems face the challenges of serving answers with high accuracy, backed by strong verification techniques. This talk offers an overview of the challenges and approaches of such large-scale QnA systems.

BigDL enables deep learning frameworks natively on Apache Spark. To demonstrate deep learning in Apache Spark, we developed an image recognition app called VegNonVeg, which uses the BigDL framework to classify images of food as vegetarian or non-vegetarian.

Great AI products are more than technology; they are built on a clear (computationally tractable) model of customer success. Getting that model right can be more challenging than building the AI models themselves, and getting it wrong is very expensive.
In this talk, we cover common pitfalls in defining AI products and organizing teams to solve them, and talk through emerging best practices.

Ofer Ronen (Chatbase, via Area 120, an incubator for early-stage products operated by Google)

Chatbots are expected to make machine communication feel human. But high-quality bot experiences are very hard to build. Certain issues in particular make building bots that do not frustrate users difficult. This presentation will delve into such issues and suggest ways, including machine learning, for developers to save time addressing them.

Although AI technology seems to be everywhere, implementing AI in practice is a real challenge. The technology needs to be scalable, trusted by the humans who use it, and easily accessible for those with limited AI expertise. With over 4 years’ experience and 4,000 deployments, Darktrace has unique insights into how to develop and deploy both practical and successful AI applications.

Just predicting the target label for a computer vision machine learning problem is not enough. It is also important to understand the “why”, “what”, and “how” of the categorization process. In this talk, we explore different ways to faithfully interpret and evaluate deep neural network models (CNN image models) to understand the impact of salient features in driving categorization.

Across the globe, people are voicing their opinions online. However, sentiment analysis is challenging for many of the world's languages, particularly with limited training data. This talk shows how one can instead exploit large amounts of surrogate data to learn advanced word representations that are custom-tailored for sentiment, and presents a special deep neural architecture to use them.

Mike Ranzinger of Shutterstock will detail his research on composition aware search. He will demonstrate how the research led to the launch of AI technology allowing users to more precisely find the image they need within Shutterstock’s collection of more than 150 million images. The goal of this research was to improve the future of search using visual data, contextual search functions, and AI.

Recent advances have made machines more autonomous. However, much remains to be done in order for AI to collaborate with people. Drawing inspiration from the way humans accumulate knowledge and naturally work together, we will share new insights that enable machines and people to work and learn as a team, discovering new knowledge in unstructured natural language content together.

The AIY Projects kits bring Google's machine learning algorithms to developers with limited experience in the field, allowing them to prototype machine learning applications and smart hardware more easily. We walk through how to set up and build the kits and how to use the kits' Python SDK to run machine learning both in the cloud and locally on the Raspberry Pi.

The advent of artificial intelligence requires a new innovation model. With learning, judging, and deciding machines becoming ever more pervasive, it is necessary to insert, by design and by default, the basic rules of democracy, human rights, and the rule of law into the innovation process and the programmes of artificial intelligence. The presentation gives examples of how this can be done.

The achievement of human-level accuracy in image classification through modern AI algorithms has renewed interest in its application to automated protein crystallisation imaging. This session discusses the development of the deep tech pipeline required for the robust operation of an online classification system on CSIRO's GPU cluster and the lessons learned along the way.

The way to real-world AI is a long and winding road. Everything we heard from reputable experts turned out to be true: the need for better data, a new UX, new ways of learning, and much more. This session highlights the lessons we have learned while implementing cognitive AI applications that help consumers find the products they love, with evidence of what to expect in case you build AI too lean.

As conversation emerges as the next great human-machine interface, we discuss the challenges faced by the AI industry in relating to humans the way they relate to each other. Highlighting findings from a recent study, we demonstrate relational strategies used by humans in conversation and how virtual assistants must evolve to communicate effectively.

Recommender systems suffer from concept drift and a scarcity of informative ratings. We use a Bayesian approach to tackle both problems by making the learning process online and active. Online learning deals with concept drift, i.e., changing user preferences. Active learning prioritizes the most informative users and items by quantifying uncertainty in a principled, probabilistic framework.
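The active-learning idea can be sketched with a Beta-Bernoulli model (a toy illustration with invented counts, not the talk's system): each item's "like" rate gets a posterior updated online, and the system queries the item whose posterior variance is highest.

```python
import numpy as np

# Toy Bayesian active learning: keep a Beta posterior over each item's
# like-rate and query the most uncertain item (highest posterior variance).
def beta_var(a, b):
    """Variance of a Beta(a, b) distribution."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

# (likes, dislikes) observed so far for three toy items
counts = {"item_a": (40, 10), "item_b": (2, 1), "item_c": (5, 5)}

# Beta(1, 1) prior, updated by the observed counts (online learning step).
variances = {k: beta_var(1 + l, 1 + d) for k, (l, d) in counts.items()}
query = max(variances, key=variances.get)
print(query)  # item_b: the item with the fewest informative ratings
```

Because the posterior is updated incrementally as ratings arrive, the same machinery handles drift (recent evidence reshapes the posterior) and active querying (uncertainty picks what to ask about next).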