Data scientists who have been hearing a lot about Docker must be wondering whether it is, in fact, the best thing since sliced bread. If you too are wondering what the fuss is all about, or how to leverage Docker in your data science work (especially for deep learning projects), you’re in the right place. In this post, I present a short tutorial on how Docker can give your deep learning projects a jump start. Along the way you will learn the basics of how to interact with Docker containers and create custom Docker images for your AI workloads. As a data scientist, I find Docker containers especially helpful as my development environment for deep learning projects, for the reasons outlined below.

If you have tried to install and set up a deep learning framework (e.g., CNTK, TensorFlow) on your machine, you will agree that it is challenging, to say the least. The proverbial stars need to align to make sure the dependencies and requirements are satisfied for all the different frameworks that you want to explore and experiment with. Getting the right Anaconda distribution,
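That dependency juggling is exactly what a container image can pin down once and for all. A minimal sketch of such an image follows; the base image tag and package versions are illustrative assumptions, not recommendations:

```dockerfile
# CUDA-enabled base image so GPU frameworks find their drivers (tag is illustrative).
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04

# System-level Python, installed once and cached in the image.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Pin framework versions so every container starts from the same environment.
RUN pip3 install --no-cache-dir tensorflow-gpu==1.6.0 cntk==2.4

WORKDIR /workspace
CMD ["python3"]
```

You would then build and enter the environment with `docker build -t dl-env .` followed by `docker run --runtime=nvidia -it dl-env` (the `--runtime=nvidia` flag assumes the NVIDIA container runtime is installed on the host).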

We believe deep-learning frameworks are like languages: sure, many people speak English, but each language serves its own purpose. We have created common code for several different network structures and executed it across many different frameworks. Our idea was to create a Rosetta Stone of deep-learning frameworks, so that anyone who knows one framework well can leverage any of the others. Situations may arise where a paper publishes code in another framework, or where the whole pipeline is in another language. Instead of writing a model from scratch in your favourite framework, it may be easier to just use the “foreign” one.
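The spirit of that "define once, build everywhere" idea can be caricatured in a few lines of plain Python: describe the network as framework-neutral data, then hand the description to per-framework builders. The names below are hypothetical illustrations, not the repo's actual API:

```python
# A framework-neutral description of a small CNN, defined once.
MODEL_SPEC = [
    {"op": "conv", "filters": 50, "kernel": 3},
    {"op": "relu"},
    {"op": "max_pool", "size": 2},
    {"op": "dense", "units": 10},
]

def build_model(spec, framework_builders):
    """Translate the common spec into one framework's layer objects.

    `framework_builders` maps op names to constructors; in practice these
    would wrap e.g. cntk.layers or torch.nn calls for each framework.
    """
    return [framework_builders[layer["op"]](layer) for layer in spec]

# A stand-in "framework" that simply records each layer description.
echo_builders = {op: (lambda layer: dict(layer))
                 for op in ("conv", "relu", "max_pool", "dense")}

layers = build_model(MODEL_SPEC, echo_builders)
print(len(layers))  # 4, one layer object per spec entry
```

The point is that only the builder table changes between frameworks; the model description and the training loop around it stay shared.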

We want to extend our gratitude to the CNTK, Pytorch, Chainer, Caffe2 and Knet teams, and everyone else from the open-source community who contributed to the repo over the past few months.

In summary, our goals with this release were to create:

A Rosetta Stone of deep-learning frameworks to allow data scientists to easily transfer their expertise from one framework to another.

13 Mar

This post is authored by Rosane Maffei Vallim, Program Manager, and Wilson Lee, Senior Software Engineer at Microsoft.

Artificial Intelligence (AI), with deep learning and machine learning algorithms, is changing the way we solve a variety of problems, from manufacturing to the biomedical industry. The applications that can benefit from the power of AI are endless.

With the Windows Machine Learning (Windows ML) API, as .NET developers, we can now leverage the ONNX models that have been trained by data scientists and use them to develop intelligent applications that run AI locally. In this blog post, we will give an overview of what Windows ML can do for you; show you how to use ONNX in your UWP application; and introduce you to the Windows Machine Learning Explorer sample application that generically bootstraps ML models to allow users to dynamically select different models within the same application.

Last week Microsoft launched the Geo AI Data Science Virtual Machine (DSVM), an Azure VM type specially tailored to data scientists and analysts that manage geospatial data. To support the Geo AI DSVM launch, we are sharing sample code and methods for our joint land cover mapping project with the Chesapeake Conservancy and ESRI. We have used Microsoft’s Cognitive Toolkit (CNTK) to train a deep neural network-based semantic segmentation model that assigns land cover labels from aerial imagery. By reducing cost and speeding up land cover map construction, such models will enable finer-resolution timecourses to track processes like deforestation and urbanization. This blog post describes the motivation behind our work and the approach we’ve taken to land cover mapping. If you prefer to get started right away, please head straight to our GitHub repository to find our instructions and materials.
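Since a semantic segmentation model assigns a land cover label to every pixel, evaluating it reduces to comparing per-pixel label grids. A toy, standard-library-only sketch (the label codes are made up for illustration and are not the project's actual class scheme):

```python
from collections import Counter

# Toy land-cover codes: 0 = water, 1 = forest, 2 = impervious surface.
predicted = [
    [1, 1, 0],
    [1, 2, 0],
]
ground_truth = [
    [1, 1, 0],
    [2, 2, 0],
]

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted label matches the ground truth."""
    total = correct = 0
    for pred_row, truth_row in zip(pred, truth):
        for p, t in zip(pred_row, truth_row):
            total += 1
            correct += (p == t)
    return correct / total

def class_counts(grid):
    """Pixel count per land-cover class, e.g. as a proxy for area estimates."""
    return Counter(label for row in grid for label in row)

print(pixel_accuracy(predicted, ground_truth))  # 5 of 6 pixels match
print(class_counts(predicted))
```

In the real pipeline the grids are large raster tiles rather than Python lists, but the accuracy and per-class area computations follow the same shape.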

Motivation for the land cover mapping use case

The Chesapeake Conservancy is a non-profit organization charged with monitoring natural resources in the Chesapeake Bay watershed, a >165,000 square kilometer region

Today Microsoft is announcing that the next major update to Windows will include the ability to run Open Neural Network Exchange (ONNX) models natively with hardware acceleration. This brings hundreds of millions of Windows devices, ranging from IoT edge devices to HoloLens to 2-in-1s and desktop PCs, into the ONNX ecosystem. Data scientists and developers creating AI models will be able to deploy their innovations to this large user base. And every developer building apps on Windows 10 will be able to use AI models to deliver more powerful and engaging experiences.

ONNX is an open source model representation for interoperability and innovation in the AI ecosystem. We helped start ONNX last September, added support from many other companies, and launched ONNX 1.0 in December with Facebook and Amazon Web Services. With the ONNX format, developers can choose the right framework for their task, framework authors can focus on innovative enhancements and hardware vendors can streamline optimizations.

Thanks to ONNX-ML, Windows supports both classic machine learning and deep learning, enabling a spectrum of AI models and scenarios. Developers can obtain ONNX models to include in their apps

07 Mar

This blog post is co-authored by Xiaoyong Zhu, George Iordanescu and Ilia Karmanov, Data Scientists at Microsoft, and Mazen Zawaideh, Radiologist Resident at the University of Washington Medical Center.

Introduction

Artificial Intelligence (AI) has emerged as one of the most disruptive forces behind digital transformation that is revolutionizing the way we live and work. This applies to the field of healthcare and medicine too, where AI is accelerating change and empowering physicians to achieve more. At Microsoft, the Health NExT project is looking at innovative approaches to fuse research, AI and industry expertise to enable a new wave of healthcare innovations. The Microsoft AI platform empowers every developer to innovate and accelerate the development of intelligent apps. AI-powered experiences augment human capabilities and transform how we live, work, and play – and have enormous potential in allowing us to lead healthier lives.

Take the task of detecting diseases from chest x-ray images, for instance. This is a challenging task, one that requires consultation with an expert radiologist. However, two-thirds of the world’s population lacks access to trained radiologists, even when imaging equipment is readily available. The lack of image interpretation by experts may lead to delayed diagnosis and could potentially

05 Mar

This post is co-authored by the Microsoft Azure Machine Learning team, in collaboration with Databricks Machine Learning team.

Introduction

Apache Spark is being increasingly used for deep learning applications for image processing and computer vision at scale. Problems such as image classification or object detection are being solved using deep learning frameworks such as Cognitive Toolkit (CNTK), TensorFlow, BigDL and DeepLearning4J, and integrated into Spark through libraries such as MMLSpark or TensorFlowOnSpark. However, until now, there hasn’t been a common interface for importing images, or representing images in Spark DataFrames. Consequently, the different frameworks cannot easily communicate with each other or with core Spark components such as SparkML pipelines or Deep Learning pipelines. To overcome this problem, the Microsoft Azure Machine Learning Team collaborated with Databricks and the Apache Spark community to make images a first-class citizen in core Spark, based on existing industrial standards.
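The shared representation is a single struct column holding the decoded image. As a plain-Python sketch of the fields that struct carries (the field names follow Spark's image schema, but consult the `pyspark.ml.image` documentation for the authoritative definition):

```python
def image_row(origin, height, width, n_channels, mode, data):
    """A plain-Python stand-in for one row of Spark's image struct column.

    In Spark itself this would be a Row under the `image` column, produced
    by the image reader; here the fields are simply mirrored in a dict.
    """
    # Pixel buffer is a flat, row-major byte array: height * width * channels.
    assert len(data) == height * width * n_channels
    return {
        "origin": origin,         # source URI of the image file
        "height": height,         # number of pixel rows
        "width": width,           # number of pixel columns
        "nChannels": n_channels,  # e.g. 3 for BGR
        "mode": mode,             # OpenCV-compatible pixel type code
        "data": data,             # raw pixel bytes
    }

row = image_row("file:/tmp/cat.jpg", 2, 2, 3, 16, bytes(12))
print(row["nChannels"])  # 3
```

Because every framework and library reads and writes this one layout, an image loaded once can flow through SparkML pipelines and deep learning libraries without format conversions at each boundary.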

The success of enterprises in adopting AI to solve real-world problems hinges on bringing a comprehensive set of AI services, tools and infrastructure to every developer, so they can deliver AI-powered apps of the future that offer unique, differentiated and personalized experiences. In this blog post, we are sharing source code and other resources that demonstrate how easy it is for existing developers without deep AI expertise to start building such personalized data-driven app experiences of the future.

To start off, what are the key characteristics of such AI apps? We would expect them to be intelligent and to get even smarter over time. We expect them to deliver new experiences, removing barriers between people and technology. And we want them to help us make sense of the huge amount of data all around us, to deliver those better experiences.

Let us take you through the journey of building such apps.

1. Infusing AI

Artificial Intelligence is anything that makes machines smart. Although machine learning is a technique for achieving that goal, what if, instead of creating custom ML models, you could directly start

I am excited to announce our latest Microsoft Machine Learning Server 9.3 release, which addresses important requests from our users. With the foundational capabilities already in place, starting with this release we have been iterating rapidly with our customers on the finer details that will accelerate the adoption and usage of SQL Server Machine Learning Services and Machine Learning Server. Key areas of investment in 9.3 are: set-up and configuration of Operationalization; platform upgrades; better-together scenarios with Azure ML; support for local Spark; an improved revoscalepy library; Linux R Client support for the SQL Server compute context; and more partnerships and solution templates to quickly get you started and become effective at building and managing intelligent applications.

22 Feb

This post is the second in a three-part series by guest blogger, Adrian Rosebrock. Adrian writes at PyImageSearch.com about computer vision and deep learning using Python. He recently finished authoring a new book on deep learning for computer vision and image recognition.

Introduction

A few weeks ago I wrote a blog post on deep learning and computer vision in the Microsoft Azure cloud that was meant to be a gentle introduction to Microsoft’s Data Science Virtual Machine (DSVM). Today we’re going to get a bit more hands-on and practical, beginning with an email I received from PyImageSearch reader, Kostas:

“Hey Adrian, I’m interested in competing in Kaggle competitions (in particular the computer vision ones). I have some experience in computer vision and machine learning/deep learning but not a lot. Is it even worth my time? Do I even stand a chance against other competitors?”

Great question, Kostas — and I’m sure you’re not alone feeling this way.