
PUBLISHED BY
Microsoft Press
A Division of Microsoft Corporation
One Microsoft Way
Redmond, Washington

Copyright 2014 Microsoft Corporation. All rights reserved. No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

ISBN:

Microsoft Press books are available through booksellers and distributors worldwide. If you need support related to this book, contact Microsoft Press Book Support. Please tell us what you think of this book.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft and the trademarks listed at Trademarks/EN-US.aspx are trademarks of the Microsoft group of companies. All other marks are property of their respective owners. The example companies, organizations, products, domain names, addresses, logos, people, places, and events depicted herein are fictitious. No association with any real company, organization, product, domain name, address, logo, person, place, or event is intended or should be inferred.

This book expresses the authors' views and opinions. The information contained in this book is provided without any express, statutory, or implied warranties. Neither the authors, Microsoft Corporation, nor its resellers or distributors will be held liable for any damages caused or alleged to be caused either directly or indirectly by this book.

Acquisitions, Developmental, and Project Editor: Devon Musgrave
Editorial Production: Flyingspress and Rob Nance
Copyeditor: John Pierce
Cover: Twist Creative Seattle

Table of Contents

Foreword
Introduction
  Who should read this book
  Assumptions
  Who should not read this book
  Organization of this book
  Finding your best starting point in this book
  Book scenario
  Conventions and features in this book
  System requirements
  Sample data and code samples
  Working with sample data
  Using the code samples
  Acknowledgments
  Errata & book support
  We want to hear from you
  Stay in touch
Chapter 1 Big data, quick intro
  A quick (and not so quick) definition of terms
  Use cases, when and why
  Tools and approaches: scale up and scale out
  Hadoop
  HDFS
  MapReduce
  HDInsight
  Microsoft Azure
  Services
  Storage
  HDInsight service
  Interfaces
  Summary

Chapter 2 Getting started with HDInsight
  HDInsight as cloud service
  Microsoft Azure subscription
  Open the Azure Management Portal
  Add storage to your Azure subscription
  Create an HDInsight cluster
  Manage a cluster from the Azure Management Portal
  The cluster dashboard
  Monitor a cluster
  Configure a cluster
  Accessing the HDInsight name node using Remote Desktop
  Hadoop name node status
  Hadoop MapReduce status
  Hadoop command line
  Setting up the HDInsight Emulator
  HDInsight Emulator and Windows PowerShell
  Installing the HDInsight Emulator
  Using the HDInsight Emulator
  Name node status
  MapReduce job status
  Running the WordCount MapReduce job in the HDInsight Emulator
  Summary
Chapter 3 Programming HDInsight
  Getting started
  MapReduce jobs and Windows PowerShell
  Hadoop streaming
  Write a Hadoop streaming mapper and reducer using C#
  Run the HDInsight streaming job
  Using the HDInsight .NET SDK
  Summary
Chapter 4 Working with HDInsight data
  Using Apache Hive with HDInsight
  Upload the data to Azure Storage
  Use PowerShell to create tables in Hive
  Run HiveQL queries against the Hive table
  Using Apache Pig with HDInsight
  Using Microsoft Excel and Power Query to work with HDInsight data

Foreword

One could certainly deliberate about the premise that big data is a limitless source of innovation. For me, the emergence of big data in the last couple of years has changed data management, data processing, and analytics more than at any time in the past 20 years. Whether data will be the new oil of the economy and provide as significant a life-transforming innovation for dealing with data and change as the horse, train, automobile, or plane were to conquering the challenge of distance is yet to be seen. Big data offers ideas, tools, and engineering practices to deal with the challenge of growing data volume, data variety, and data velocity and the acceleration of change. While change is a constant, the use of big data and cloud technology to transform businesses and potentially unite customers and partners could be the source of a competitive advantage that sustains organizations into the future.

The cloud and big data, and in particular Hadoop, have redefined common on-premises data management practices. While the cloud has improved broad access to storage, data processing, and query processing at big data scale and complexity, Hadoop has provided environments for exploration and discovery not found in traditional business intelligence (BI) and data warehousing. The way that an individual, a team, or an organization does analytics has been impacted forever. Since change starts at the level of the individual, this book is written to educate and inspire the aspiring data scientist, data miner, data analyst, programmer, data management professional, or IT pro. HDInsight on Azure improves your access to Hadoop and lowers the friction to getting started with learning and using big data technology, as well as to scaling to the challenges of modern information production.
If you are managing your career to be more future-proof, definitely learn HDInsight (Hadoop), Python, R, and tools such as Power Query and Microsoft Power BI to build your data wrangling, data munging, data integration, and data preparation skills.

Along with terms such as data wrangling, data munging, and data science, the big data movement has introduced new architecture patterns, such as data lake, data refinery, and data exploration. The Hadoop data lake could be defined as a massive, persistent, easily accessible data repository built on (relatively) inexpensive computer hardware for storing big data. The Hadoop data refinery pattern is similar but is more of a transient Hadoop cluster that utilizes constant cloud storage but elastic compute (turned on and off and scaled as needed) and often refines data that lands in another OLTP or analytics system such as a data warehouse, a data mart, or an in-memory analytics database. Data exploration is a sandbox pattern with which end users can work with developers (or use their own

development skills) to discover data in the Hadoop cluster before it is moved into more formal repositories such as data warehouses or data marts. The data exploration sandbox is more likely to be used for advanced analysis (for example, data mining or machine learning), which a persistent data lake can also enable, while the data refinery is mainly used to preprocess data that lands in a traditional data warehouse or data mart.

Whether you plan to be a soloist or part of an ensemble cast, this book and its authors (Avkash, Buck, Michele, Val, and Wee-Hyong) should help you get started on your big data journey. So flip the page and let the expedition begin.

Darwin Schweitzer
Aspiring data scientist and lifelong learner

Introduction

Microsoft Azure HDInsight is Microsoft's 100 percent compliant distribution of Apache Hadoop on Microsoft Azure. This means that standard Hadoop concepts and technologies apply, so learning the Hadoop stack helps you learn the HDInsight service. At the time of this writing, HDInsight (version 3.0) uses Hadoop version 2.2 and Hortonworks Data Platform 2.0.

In Introducing Microsoft Azure HDInsight, we cover what big data really means, how you can use it to your advantage in your company or organization, and one of the services you can use to do that quickly: specifically, Microsoft's HDInsight service. We start with an overview of big data and Hadoop, but we don't emphasize only concepts in this book; we want you to jump in and get your hands dirty working with HDInsight in a practical way. To help you learn and even implement HDInsight right away, we focus on a specific use case that applies to almost any organization and demonstrate a process that you can follow along with.

We also help you learn more. In the last chapter, we look ahead at the future of HDInsight and give you recommendations for self-learning so that you can dive deeper into important concepts and round out your education on working with big data.

Who should read this book

This book is intended to help database and business intelligence (BI) professionals, programmers, Hadoop administrators, researchers, technical architects, operations engineers, data analysts, and data scientists understand the core concepts of HDInsight and related technologies. It is especially useful for those looking to deploy their first data cluster and run MapReduce jobs to discover insights and for those trying to figure out how HDInsight fits into their technology infrastructure.

Assumptions

Many readers will have no prior experience with HDInsight, but even some familiarity with earlier versions of HDInsight and/or with Apache Hadoop and the MapReduce framework will provide a solid base for using this book.
Introducing Microsoft Azure HDInsight assumes you have experience with web technology, programming on Windows machines, and basic

data analysis principles and practices and an understanding of Microsoft Azure cloud technology.

Who should not read this book

Not every book is aimed at every possible audience. This book is not intended for data mining engineers.

Organization of this book

This book consists of one conceptual chapter and four hands-on chapters. Chapter 1, "Big data, quick intro," introduces the topic of big data, with definitions of terms and descriptions of tools and technologies. Chapter 2, "Getting started with HDInsight," takes you through the steps to deploy a cluster and shows you how to use the HDInsight Emulator. After your cluster is deployed, it's time for Chapter 3, "Programming HDInsight." Chapter 3 continues where Chapter 2 left off, showing you how to run MapReduce jobs and turn your data into insights. Chapter 4, "Working with HDInsight data," teaches you how to work more effectively with your data with the help of Apache Hive, Apache Pig, Excel and Power BI, and Sqoop. Finally, Chapter 5, "What next?", covers practical topics such as integrating HDInsight into the rest of your stack and the different options for Hadoop deployment on Windows. Chapter 5 finishes up with a discussion of future plans for HDInsight and provides links to additional learning resources.

Finding your best starting point in this book

The different sections of Introducing Microsoft Azure HDInsight cover a wide range of topics and technologies associated with big data. Depending on your needs and your existing understanding of Hadoop and HDInsight, you may want to focus on specific areas of the book. Use the following table to determine how best to proceed through the book.

If you are new to big data, Hadoop, or HDInsight: Focus on Chapter 1 before reading any of the other chapters.
If you are familiar with earlier releases of HDInsight: Skim Chapter 2 to see what's changed, and dive into Chapters 3-5.
If you are familiar with Apache Hadoop: Skim Chapter 1 for the HDInsight-specific content and dig into Chapter 2 to learn how Hadoop is implemented in Azure.
If you are interested in the HDInsight Emulator: Read the second half of Chapter 2.
If you are interested in integrating your HDInsight cluster into your organization: Read through the first half of Chapter 5.

Book scenario

Swaddled in Sage Inc. (Sage, for short) is a global apparel company that designs, manufactures, and sells casual wear that targets male and female consumers 18 to 30 years old. The company operates approximately 1,000 retail stores in the United States, Canada, Asia, and Europe. In recent years, Sage started an online store to sell to consumers directly. Sage has also started exploring how social media can be used to expand and drive marketing campaigns for upcoming apparel. Sage is the company's founder.

Natalie is Vice President (VP) for Technology for Sage and is responsible for Sage's overall corporate IT strategy. Natalie's team owns operating the online store and leveraging technology to optimize the company's supply chain. In recent years, Natalie's key focus has been how she can use analytics to understand consumers' retail and online buying behaviors, discover megatrends in fashion social media, and use these insights to drive decision making within Sage.

Steve is a senior director who reports to Natalie. Steve and his team are responsible for the company-wide enterprise data warehouse project. As part of the project, Steve and his team have been investing in Microsoft business intelligence (BI) tools for extracting, transforming, and loading data into the enterprise data warehouse.
In addition, Steve's team is responsible for rolling out reports using SQL Server Reporting Services and for building the OLAP cubes that are used by business analysts within the organization to interactively analyze the data by using Microsoft Excel.

In various conversations with CIOs in the fashion industry, Natalie has been hearing the term big data frequently. She has been briefed by various technology vendors on the promise of big data and how big data analytics can help produce data-driven decision making within her organization. Natalie has been trying to figure out whether big data is market hype or technology that she can use to take analytics to the next level within Sage. Most importantly, she wants to figure out how the various big data technologies can complement Sage's existing technology investments. She meets regularly with her management team, composed of Steve (data warehouse), Peter (online store), Cindy (business analyst), Kevin (supply chain), and Oliver (data science). As a group, the v-team has been learning from both technology and business perspectives about how other companies have implemented big data strategies.

To kick-start the effort on using data and analytics to enable a competitive advantage for Sage, Natalie created a small data science team, led by Oliver, who has a deep background in mathematics and statistics. Oliver's team is tasked with turning the data in the organization into "gold": insights that can enable the company to stay competitive and be one step ahead. One of the top-of-mind items for Oliver and Steve is to identify technologies that can work well with the significant Microsoft BI investments that the company has made over the years. In particular, Oliver and Steve are interested in Microsoft big data solutions, as using those solutions would allow their teams to take advantage of familiar tools (Excel, PowerPivot, Power View, and, more recently, Power Query) for analysis. In addition, using these solutions will allow the IT teams to use their existing skills in Microsoft products (instead of having to maintain a Hadoop Linux cluster).
Having attended various big data conferences (Strata, SQL PASS Business Analytics Conference), Oliver and Steve are confident that Microsoft offers a complete data platform and big data solutions that are enterprise-ready. Most importantly, they see clearly how Microsoft big data solutions can fit with their existing BI investment, including SharePoint. Join us in this book as we take you through Natalie, Oliver, and Steve's exciting journey to get acquainted with HDInsight and use Microsoft BI tools to deliver actionable insights to their peers (Peter, Cindy, and Kevin).

Conventions and features in this book

This book presents information using conventions designed to make the information readable and easy to follow.

Step-by-step instructions consist of a series of tasks, presented as numbered steps (1, 2, and so on) listing each action you must take to complete a task. Boxed elements with labels such as "Note" provide additional information. Text that you type (apart from code blocks) appears in bold.

System requirements

You need the following hardware and software to complete the practice exercises in this book:

- A Microsoft Azure subscription (for more information about obtaining a subscription, visit azure.microsoft.com and select Free Trial, My Account, or Pricing)
- A computer running Windows 8, Windows 7, Windows Server 2012, or Windows Server 2008 R2; this computer will be used to submit MapReduce jobs
- Office 2013 Professional Plus, Office 365 Pro Plus, the standalone version of Excel 2013, or Office 2010 Professional Plus
- .NET SDK
- Azure module for Windows PowerShell
- Visual Studio
- Pig, Hive, and Sqoop
- Internet connection to download software and chapter examples

Depending on your Windows configuration, you might need local Administrator rights to install or configure Visual Studio and SQL Server 2008 products.

Sample data and code samples

You'll need some sample data while you're experimenting with the HDInsight service. And if we're going to offer some sample data to you, we thought we'd make it something that is "real world," something that you'd run into every day. And while we're at it, why not pick something you can implement in production immediately?

The data set we chose to work with is web logs. Almost every organization has a web server of one type or another, and those logs get quite large quickly. They also contain a gold mine of information about who visits the site, where they go, what issues they run into (broken links or code), and how well the system is performing.

We also refer to a sentiment file in our code samples. This file contains a large volume of unstructured data that Sage collected from social media sources (such as Twitter), comments posted to their blog, and focus group feedback. A sample sentiment file that you can download, along with more information about sentiment files, is available online.

Working with sample data

Because we're writing about a Microsoft product, we used logs created by the web server in Windows Internet Information Server (IIS). IIS can use various formats, but you'll commonly see the Extended Log Format from W3C (http://www.w3.org/tr/wd-logfile.html) in use. This format is well structured, has good documentation, and is in wide use. Although we focus on this format in this book, you can extrapolate from the processes and practices we demonstrate with it for any web log format.

In Chapter 4, we provide a link to a sample web log used in that chapter's examples. If you have a web server, you can use your own data as long as you process the fields you have included. Of course, don't interfere with anything you have in production, and be sure there is no private or sensitive data in your sample set. You can also set up a test server or virtual machine (VM), install a web server, initialize the logs, and then write a script to hit the server from various locations to generate real but nonsensitive data. That's what the authors of this book did. You can also mock up a web log using just those headers and fields.
In fact, you can take the small sample (from Microsoft's documentation, available here: ) and add in lines with the proper fields by using your favorite text editor or word processor:

    #Software: Internet Information Services 6.0
    #Version: 1.0
    #Date: 2002-05-02 17:42:15
    #Fields: time c-ip cs-method cs-uri-stem sc-status cs-version
    17:42:15 172.16.255.255 GET default.htm 200 HTTP/1.0
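If you want to sanity-check a mocked-up log like this, a few lines of code can turn it into records. The sketch below is our own illustration, not part of the book's code samples, and the entry values (the IP address in particular) are invented; it simply reads the #Fields directive and keys each entry by those field names:

```python
# Minimal W3C extended log parser. The #Fields directive names the
# columns, so the parser adapts to whatever layout the server used.
def parse_w3c_log(lines):
    """Yield one dict per log entry, keyed by the #Fields names."""
    fields = []
    for line in (raw.strip() for raw in lines):
        if line.startswith("#Fields:"):
            fields = line.split()[1:]      # e.g. ['time', 'c-ip', ...]
        elif line and not line.startswith("#") and fields:
            yield dict(zip(fields, line.split()))

sample = """\
#Software: Internet Information Services 6.0
#Version: 1.0
#Fields: time c-ip cs-method cs-uri-stem sc-status cs-version
17:42:15 10.0.0.1 GET default.htm 200 HTTP/1.0
"""

entries = list(parse_w3c_log(sample.splitlines()))
print(entries[0]["cs-uri-stem"], entries[0]["sc-status"])   # default.htm 200
```

Because the parser trusts #Fields rather than hard-coding columns, it keeps working when an administrator changes the logged fields, which, as noted below, is common.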

In general, the W3C format we used is a simple text file that has the following basic structure:

- #Software The name of the software that created the log file. For Windows, you'll see Internet Information Services followed by the IIS version number.
- #Version The W3C version number of the log file format.
- #Date The date and time the log file was created. Note that these are under the control of the web server settings. They can be set to create multiple logs based on time, dates, events, or sizes. Check with your system administrator to determine how they set this value.
- #Fields Tells you the structure of the fields used in the log. This is also something the administrator can change. In this book, we're using the defaults from an older version of Windows Server, which include:
  - Time of entry
  - TCP/IP address of the client
  - HTML method called
  - Object called
  - Return code
  - Version of the web return call method

Using the code samples

Chapter 3 and Chapter 4 include sample Windows PowerShell scripts and C# code that you use to work with HDInsight. The code samples are available for download.

Acknowledgments

We'd like to thank the following people for their help with the book:

Avkash: I would like to dedicate this book to my loving family, friends, and coauthors, who provided immense support to complete this book.

Buck: I would like to thank my fellow authors on this work, who did an amazing amount of heavy lifting to bring it to pass. Special thanks to Devon Musgrave, whose patience is biblical. And, of course, to my wonderful wife, who gives me purpose in everything I do.

Michele: I would like to thank my children, Aaron, Camille, and Cassie-Cassandra, for all the games of run-the-bases, broom hockey, and Wii obstacle course; all the baking of cookies; and all the bedtime stories that I had to miss while working on this book.

Val: I would like to thank my wife, Veronica, and my lovely kids for supporting me through this project. It would not be possible without them, so I deeply appreciate their patience. Special thanks to my amazing coauthors Wee-Hyong Tok, Michele Hart, Buck Woody, and Avkash Chauhan and our editor Devon Musgrave for sharing in this labor of love.

Wee-Hyong: Dedicated to Juliet, Nathaniel, Siak-Eng, and Hwee-Tiang for their love, support, and patience.

Errata & book support

We've made every effort to ensure the accuracy of this book. If you discover an error, please submit it to us. You can also reach the Microsoft Press Book Support team for other support via the same alias. Please note that product support for Microsoft software and hardware is not offered through this address. For help with Microsoft software or hardware, go to the Microsoft Support site.

We want to hear from you

At Microsoft Press, your satisfaction is our top priority, and your feedback our most valuable asset. Please tell us what you think of this book. We know you're busy, so we've kept it short with just a few questions. Your answers go directly to the editors at Microsoft Press. (No personal information will be requested.) Thanks in advance for your input!

Chapter 1 Big data, quick intro

These days you hear a lot about big data. It's the new term du jour for some product you simply must buy, and buy now. So is big data really a thing? Is big data different from the regular-size, medium-size, or jumbo-size data that you deal with now? No, it isn't. It's just bigger. It comes at you faster and from more locations at one time, and you're being asked to keep it longer. Big data is a real thing, and you're hearing so much about it because systems are now capable of collecting huge amounts of data, yet these same systems aren't always designed to handle that data well.

A quick (and not so quick) definition of terms

Socrates is credited with the statement "The beginning of wisdom is a definition of terms," so let's begin by defining some terms. Big data is commonly defined in terms of exploding data volume, increases in data variety, and rapidly increasing data velocity. To state it more succinctly, as Forrester's Brian Hopkins noted in his blog post "Big Data, Brewer, and a Couple of Webinars" (http://bit.ly/qtz69n): "Big data: techniques and technologies that make handling data at extreme scale economical."

Expanding on that concept are the four Vs of extreme scale: volume, velocity, variety, and variability.

- Volume The data exceeds the physical limits of vertical scalability, implying a scale-out solution (vs. scaling up).
- Velocity The decision window is small compared with the data change rate.
- Variety Many different formats make integration difficult and expensive.
- Variability Many options or variable interpretations confound analysis.

Typically, a big data opportunity arises when a solution requires you to address more than one of the Vs. If you have only one of these parameters, you may be able to use current technology to reach your goal.
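One of the single-V responses discussed next is sharding, splitting relational data across several database servers and querying them all at once. To make that concrete, here is a toy sketch of the idea; it is our own illustration, the server names and row layout are invented, and a real RDBMS would route rows on a declared partition key rather than an ad hoc hash:

```python
# Toy sharding: each row is routed to one "server" by hashing its key,
# and a query is sent to every shard and the results are combined
# (scatter-gather).
import hashlib

SHARDS = {"db-server-0": [], "db-server-1": [], "db-server-2": []}
NAMES = sorted(SHARDS)

def shard_for(key: str) -> str:
    """Pick a shard deterministically from the partition key."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NAMES[digest % len(NAMES)]

def insert(row):
    SHARDS[shard_for(row["customer_id"])].append(row)

def query_all(predicate):
    # "Query across all the systems at once," then merge the answers.
    return [r for name in NAMES for r in SHARDS[name] if predicate(r)]

for i in range(100):
    insert({"customer_id": f"cust-{i}", "amount": i})

big = query_all(lambda r: r["amount"] >= 95)
print(len(big))   # 5, no matter which shards hold the rows
```

The key property is that each server holds only a slice of the data, yet a scatter-gather query still sees the whole set, which is exactly the volume-only escape hatch described in the example that follows.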

For example, if you have an extreme volume of relationally structured data, you could separate the data onto multiple relational database management system (RDBMS) servers. You could then query across all the systems at once, a process called sharding. If you have a velocity challenge, you could use a real-time pipeline feature such as Microsoft SQL Server StreamInsight or another complex event processing (CEP) system to process the data as it is transmitted from its origin to its destination. In fact, this solution is often the most optimal if the data needs to be acted on immediately, such as for alerting someone based on a sensor change in a machine that generates data. A data problem involving variety can often be solved by writing custom code to parse the data at the source or destination. Similarly, issues that involve variability can often be addressed by code changes or the application of specific business rules and policies.

These and other techniques address data needs that involve only one or two of the parameters that large sets of data have. When you need to address multiple parameters (variety and volume, for instance), the challenge becomes more complicated and requires a new set of techniques and methods.

Use cases, when and why

So how does an organization know that it needs to consider an approach to big data? It isn't a matter of simply meeting one of the Vs described earlier; it's a matter of needing to deal with several of them at once. And most often, it's a matter of a missed opportunity: the organization realizes the strategic and even tactical advantages it could gain from the data it has or could collect. Let's take a look at a couple of examples of how dealing with big data made a real impact on an organization. Way back in 2010, Kisalay Ranjan published a list of dozens of companies already working with large-scale data and described the most powerful ways they were using that data.
In the years since, even more companies and organizations have started leveraging the data they collect in similar and new ways to enable insights, which is the primary goal for most data exercises. A lot of low-hanging-fruit use cases exist for almost any organization:

- Sentiment analysis
- Website traffic patterns
- Human resources employee data

- Weather correlation effects
- Topographic analysis
- Sales or services analysis
- Equipment monitoring and data gathering

These use cases might not apply to every business, but even smaller companies can use large amounts of data to coordinate sales, hiring, and deliveries and support multiple strategic and tactical activities. We've only just begun to tap into the vast resources of data and the insights they bring. These are just a few of the areas in which an organization might have a use case, but even when an organization does identify opportunities, its current technology might not be up to the task of processing it. So although it isn't a use case for big data, the use case for Hadoop, and by extension HDInsight, is to preprocess larger sets of data so that downstream systems can deal with them. At Microsoft we call this "making big rocks out of little rocks."

Tools and approaches: scale up and scale out

For the case of extreme scale, or data volume, an inflection point occurs where it is more efficient to solve the challenge in a distributed fashion on commodity servers rather than increase the hardware inside one system. Adding more memory, CPU, network capacity, or storage to handle more compute cycles is called scale up. This works well for many applications, but in most of these environments, the system shares a common bus between the subsystems it contains. At high levels of data transfer, the bus can be overwhelmed with coordinating the traffic, and the system begins to block at one or more subsystems until the bus can deal with the stack of data and instructions. It's similar to a checkout register at a grocery store: an efficient employee can move people through a line faster but has to wait for the items to arrive, the scanning to take place, and the customer to pay. As more customers arrive, they have to wait, even with a fast checkout, and the line gets longer. This is similar to what happens inside an RDBMS or other single-processing data system.
Storage contains the data to be processed, which is transferred to memory and computed by the CPU. You can add more CPUs, more memory, and a faster bus, but at some point the data can overwhelm even the most capable system.

Another method of dealing with lots of data is to use more systems to process the data. It seems logical that if one system can become only so large, adding more systems makes the work go faster. Using more systems in a solution is called scale out, and this approach is used in everything from computing to our overcrowded grocery store. In the case of the grocery store, we simply add more cashiers, and the shoppers split themselves evenly (more or less) into the available lanes. Theoretically, the group of shoppers checks out and gets out of the store more quickly.

In computing it's not quite as simple. Sure, you can add more systems to the mix, but unless the software is instructed to send work to each system in an orderly way, the additional systems don't help the overall computing load. And there's another problem: the data. In a computing system, the data is most often stored in a single location, referenced by a single process. The solution is to distribute not only the processing but the data. In other words, move the processing to the data, not just the data to the processing.

In the grocery store, the "data" used to process a shopper is the prices for each object. Every shopper carries the data along with them in a grocery cart (we're talking precomputing-era shopping here). The data is carried along with each shopper, and the cashier knows how to process the data the shoppers carry: they read the labels and enter the summations on the register. At the end of the evening, a manager collects the computed results from the registers and tallies them up. And that's exactly how most scale-out computing systems operate. Using a file system abstraction, the data is placed physically on machines that hold a computing program, and each machine works independently and in parallel with other machines.
When a program completes its part of the work, the result is sent along to another program, which combines the results from all machines into the solution, just like at the grocery store. So in at least one way, not only is the big data problem not new, neither is the solution!

Hadoop

Hadoop, an open-source software framework, is one way of solving a big data problem by using a scale-out "divide and conquer" approach. The grocery store analogy works quite well here because the two problems in big data (moving the processing to the data and then combining it all again) are solved with two components that Hadoop uses: the Hadoop Distributed File System (HDFS) and MapReduce.

It seems that you can't discuss Hadoop without hearing about where it comes from, so we'll spend a moment on that before we explain these two components. After all, we can't let you finish this book without having some geek credibility on Twitter! From the helpful article on Hadoop over at Wikipedia: Hadoop was created by Doug Cutting and Mike Cafarella in 2005. Cutting, who was working at Yahoo! at the time, named it after his son's toy elephant. It was originally developed to support distribution for the Nutch search engine project (http://en.wikipedia.org/wiki/hadoop#history).

Hadoop is a framework, which means that it is composed of multiple components and is constantly evolving. The components in Hadoop can work separately, and often do. Several other projects also use the framework, but in this book we'll stick with those components available in (and to) the Microsoft Azure HDInsight service.

HDFS

The Hadoop Distributed File System (HDFS) is a Java-based layer of software that redirects calls for storage operations to one or more nodes in a network. In practice, you call for a file object by using the HDFS application programming interface (API), and the code locates the node where the data is located and returns the data to you. That's the short version, and, of course, it gets a little more complicated from there. HDFS can replicate the data to multiple nodes, and it uses a name node daemon to track where the data is and how it is (or isn't) replicated. At first, this was a single point of failure, but later releases added a secondary function to ensure continuity.

So HDFS allows data to be split across multiple systems, which solves one problem in a large-scale data environment. But moving the data into various places creates another problem: How do you move the computing function to where the data is?

MapReduce

The Hadoop framework moves the computing load out to the data nodes through the use of a MapReduce paradigm.
MapReduce refers to the two phases of distributed processing: a map phase, in which the system locates the nodes that hold the data and moves the work to those nodes, and a reduce phase, in which the system brings the intermediate results back together and combines them into a final result. Different engines implement these functions in different ways, but this loose definition will work for this chapter, and we'll refine it in later chapters as you implement the code in the various examples.
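To make the two phases concrete, here's a minimal, single-process sketch of the MapReduce idea in Python. This is only an illustration of the pattern, not Hadoop's actual API; in a real cluster, the map calls run on the nodes that hold the data, and the framework performs the shuffle and reduce steps across the network for you.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit an intermediate (key, value) pair for each word."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Group intermediate values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine the values for each key into a final result."""
    return {key: sum(values) for key, values in groups.items()}

# Each "node" maps its own slice of the data...
documents = ["big data is big", "data moves to compute no compute moves to data"]
intermediate = chain.from_iterable(map_phase(d) for d in documents)
# ...and the reduce phase combines the intermediate results.
counts = reduce_phase(shuffle(intermediate))
print(counts["big"])   # 2
print(counts["data"])  # 3
```

The useful intuition is that the map function sees only its local slice of the data, so it can run anywhere a slice lives; only the much smaller intermediate results travel over the network to be reduced.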

Hadoop uses a JobTracker process to locate the data and transfer the compute function, and a TaskTracker to perform the work. All of this work is done inside a Java Virtual Machine (JVM). You can read a great deal more about the technical details of the Apache Hadoop project here:

HDInsight

The HDInsight service is an implementation of Hadoop that runs on the Microsoft Azure platform. Working with Hortonworks, Microsoft properly licensed and sourced the code, and it contributes back to the Hadoop open-source project. HDInsight is 100 percent compatible with Apache Hadoop because it builds on the Hortonworks Data Platform (HDP). You could, of course, simply deploy virtual machines running Windows or one of several distributions of Linux on Azure and then install Hadoop on those. But the fact that Microsoft implements Hadoop as a service has several advantages:

- You can quickly deploy the system from a portal or through Windows PowerShell scripting, without having to create any physical or virtual machines.
- You can implement a small or large number of nodes in a cluster.
- You pay only for what you use. When your job is complete, you can deprovision the cluster and, of course, stop paying for it.
- You can use Microsoft Azure Storage so that even when the cluster is deprovisioned, you can retain the data.
- The HDInsight service works with input-output technologies from Microsoft or other vendors.

As mentioned, the HDInsight service runs on Microsoft Azure, and that requires a little explaining before we proceed further.

Microsoft Azure

Microsoft Azure isn't a single product; it's a series of products that form a complete cloud platform, as shown in Figure 1-1. At the very top of the stack in this platform are the data centers

where Azure runs. These are modern data centers, owned and operated by Microsoft through the Global Foundation Services (GFS) team that runs Microsoft properties such as Microsoft.com, Live.com, Office365.com, and others. The data centers are located around the world in three main regions: the Americas, Asia, and Europe. The GFS team is also responsible for physical and access security and for working with the operating team to ensure the overall security of Azure. Learn more about security here:

The many products and features within the Microsoft Azure platform work together to allow you to do three things, using various models of computing:

- Write software: Develop software on site using .NET and open-source languages, and deploy it to run on the Azure platform at automatic scale.
- Run software: Install software that is already written, such as SQL Server, Oracle, and SharePoint, in the Azure data centers.
- Use software: Access services such as media processing (and, of course, Hadoop) without having to set up anything else.

FIGURE 1-1 Overview of the Microsoft Azure platform.

Services

Microsoft Azure started with a Platform as a Service (PaaS) model. In this model of distributed computing (sometimes called the cloud), you write software on your local system

using a software development kit (SDK), and when you're done, you upload the software to Azure and it runs there. You can write software using any of the .NET languages or open-source languages such as JavaScript and others. The SDK runs on your local system, emulating the Azure environment for testing (only from the local system), and you have features such as caching, auto-scale-out patterns, storage (more on that in a moment), a full service bus, and much more. You're billed for the amount of services you use, by time, and for the traffic out of the data center. Read more about billing here:

The PaaS function of Azure allows you to write code locally and run it at small or massive scale, or even scale from small to massive. Your responsibility is the code you run and your data; Microsoft handles the data centers, the hardware, and the operating system and patching, along with whatever automatic scaling you've requested in your code.

You can also use Azure to run software that is already written. You can deploy (from a portal, code, PowerShell, System Center, or even Visual Studio) virtual machines (VMs) running the Windows operating system and/or one of several distributions of Linux. You can run software such as Microsoft SQL Server, SharePoint, Oracle, or almost anything that will run inside a VM environment. This is often called Infrastructure as a Service, or IaaS. Of course, in the IaaS model, you can also write and deploy software as in the PaaS model; the difference is the distribution of responsibility. In the IaaS function, Microsoft handles the data centers and the hardware. You're responsible for maintaining the operating system and the patching, and you have to figure out the best way to scale your application, although there is load-balancing support at the TCP/IP level. In the IaaS model, you're billed by the number of VMs, the size of each, traffic out of the machine, and the time you keep them running.
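To make the IaaS billing model concrete, here's a purely illustrative calculation. Every rate below is an invented placeholder, not actual Azure pricing, which varies by VM size, region, and over time; the point is only which factors drive the bill.

```python
def estimate_iaas_cost(vm_count, hourly_rate_per_vm, hours_running, gb_egress, rate_per_gb):
    """Illustrative IaaS bill: VMs are billed per hour while running,
    plus outbound traffic. All rates here are hypothetical placeholders,
    not actual Azure prices."""
    compute = vm_count * hourly_rate_per_vm * hours_running
    egress = gb_egress * rate_per_gb
    return compute + egress

# Four hypothetical nodes kept running for a 6-hour job,
# plus 10 GB of results downloaded out of the data center.
total = estimate_iaas_cost(vm_count=4, hourly_rate_per_vm=0.50, hours_running=6,
                           gb_egress=10, rate_per_gb=0.10)
print(round(total, 2))  # 13.0
```

Note that `hours_running` is wall-clock time the VMs exist, not time spent doing useful work, which is exactly why deprovisioning idle clusters matters.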
The third option you have on the Azure platform is to run software that Microsoft has already written. This is sometimes called Software as a Service, or SaaS. The term is most often applied to services such as Microsoft Office 365 or Live.com. In this book we use SaaS to refer to a service that a technical person uses to further process data; it isn't something that the general public would log on to and use. The HDInsight service is an example of SaaS: it's a simple-to-deploy cluster of Hadoop instances that you can use to run computing jobs, and when your computations are complete, you can leave the cluster running, turn it off, or delete it. The cost is incurred only while your cluster is deployed. We'll explore all this more fully in a moment. It's important to keep in mind that although all of the services that the Azure platform provides have unique capabilities and features, they all run in the same data center and can call each other seamlessly. That means you could have a PaaS application talking to a smartphone that uses storage that an internal system loads data into, process that data

using a VM in IaaS, and then process that data further in the HDInsight service and send along the results to yet another system using the service bus. You could just as easily use only the storage feature to retain backups from an onsite system for redundancy. Everything in Azure works together or as a standalone feature.

Storage

Although you can use any of the components in Azure to perform almost any computing task, the place to start for using the HDInsight service is with storage. You'll need a place to load your data and, optionally, a place to store the results. Let's explore the basics of Azure Storage and then cover the specifics of how HDInsight accesses it. The HDInsight service can actually access two types of storage: HDFS (as in standard Hadoop) and the Azure Storage system. When you store your data using HDFS, it's contained within the nodes of the cluster and must be called through the HDFS API. When the cluster is decommissioned, the data is lost as well. The option of using Azure Storage provides several advantages: you can load the data using standard tools, you can retain the data when you decommission the cluster, the cost is lower, and other processes in Azure (or even from other cloud providers) can access the data.

Azure Storage comes in three basic types: blobs (binary storage), tables (key/value pair storage, similar to NoSQL architectures), and queues. Interestingly, a queue is a type of table storage, so there are technically two types of storage in Azure. All storage is kept within a container, which you create in a particular data center. Inside the container you can create a blob or a table. By default, Azure replicates all data three times within a data center for internal redundancy and can optionally replicate all copies to a separate geographical location. Blobs are further divided into two types: block and page.
In general, you won't have to care about this distinction because, by default, HDInsight makes the decision about which type to use when it creates the data. The primary difference between the two is that block blobs are optimized for working with large files over a network, and page blobs are optimized for more random reads and writes. For the HDInsight service, you store data as blobs. In Chapter 2, "Getting started with HDInsight," you'll learn more about setting up your Azure account, creating your storage account, and loading data for processing. You can create the storage ahead of time or let the HDInsight setup process do that for you. If you create the storage account first, you should create it in the same data center where you intend to create your HDInsight cluster.
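When a cluster reads blob data, it addresses it through a URI scheme (wasb://, for Windows Azure Storage Blob) that identifies the storage account, the container, and the path within it. As a rough sketch of how such a URI breaks down (the account name, container name, and path below are made up for illustration):

```python
from urllib.parse import urlparse

def parse_wasb_uri(uri):
    """Split an HDInsight-style wasb:// URI into its storage coordinates.
    Layout: wasb://<container>@<account>.blob.core.windows.net/<path>"""
    parts = urlparse(uri)
    container, _, host = parts.netloc.partition("@")
    account = host.split(".")[0]
    return {"container": container, "account": account, "path": parts.path.lstrip("/")}

# A hypothetical storage account ("mystorage") and container ("sensordata"):
info = parse_wasb_uri("wasb://sensordata@mystorage.blob.core.windows.net/raw/2014/logs.csv")
print(info["account"])    # mystorage
print(info["container"])  # sensordata
print(info["path"])       # raw/2014/logs.csv
```

Because the account and container live outside the cluster, the same URI keeps working after the cluster that wrote the data has been deprovisioned.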

HDInsight service

The Azure HDInsight service is a fully integrated implementation of the Apache Foundation's Hadoop software project. This means that standard Hadoop concepts and technologies apply, so learning the Hadoop stack helps you learn the HDInsight service. To use the HDInsight service to run a job, you create your cluster, select its size, and access the Azure blob data you've loaded to your account. You can deploy the cluster from the Microsoft Azure portal (Figure 1-2) or use PowerShell scripts to set it all up.

FIGURE 1-2 Use the Azure portal to create an HDInsight cluster.

After the cluster is set up, you can use the Azure portal or PowerShell to submit and control jobs that run your code (more on that in the following chapters) from your desktop. When the jobs are complete, you can remove the cluster (and stop paying for it) via PowerShell.

Note: Not familiar with PowerShell? You can find a quick introduction to it here:

Of course, there's a bit more to HDInsight than just deploying it and running code. The real work is in submitting the jobs that HDInsight can run, and to do that you can work with one or more interfaces to the system.

Interfaces

Once you've sourced your data, loaded it into Azure Storage, and deployed a cluster on the HDInsight service, you need to set up the work you want HDInsight to do. You have a few options, from running jobs directly on the cluster to using other programs that connect to the cluster to perform the calculations. In this book, our use case includes using Pig, Hive, .NET, Java, and Excel.

Pig

One program you have available is a set of software called Pig. Pig is another framework (developers love creating frameworks). It implements the Pig Latin programming language, but many times you'll see both the platform and the language simply referred to as Pig. Using the Pig Latin language, you can create MapReduce jobs more easily than by having to code the Java yourself; the language has simple statements for loading data, storing it in intermediate steps, and computing the data, and it's MapReduce aware. The official site for the Pig language is https://pig.apache.org/. You'll see an example of using Pig in Chapter 4, "Working with HDInsight data."

Hive

When you want to work with the data on your cluster in a relational-friendly format, you can use a software package called Hive. Hive allows you to create a data warehouse on top of HDFS or other file systems and uses a language called HiveQL, which has a lot in common with the Structured Query Language (SQL). Hive uses a metadata database, and we'll describe the process to work with it in more detail in Chapter 4. You can learn more about Hive at https://hive.apache.org/.

Other interfaces and tools

In addition to Pig and Hive, numerous other programs, interfaces, and tools are available for working with your HDInsight cluster. In this book, we use .NET and Java in Chapter 3, "Programming HDInsight."
Sqoop (not implemented as another framework) can load data to and from an RDBMS such as SQL Server or the Azure SQL Database. Sqoop is detailed at https://sqoop.apache.org/, and we cover it later in the book as well.
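The interfaces above, Pig especially, express work as a dataflow: load records, group them, and generate aggregates. As a rough analogy of that shape in plain Python (this is not Pig Latin, it does not run on a cluster, and the sample data and field names are invented for illustration):

```python
# A Pig Latin script commonly follows the shape LOAD -> GROUP -> aggregate -> STORE.
# The comments name the roughly corresponding Pig Latin statements.
from collections import defaultdict

raw_lines = [
    "2014-01-01,device1,20",
    "2014-01-01,device2,35",
    "2014-01-02,device1,25",
]

# LOAD ... USING PigStorage(',') AS (date, device, reading)
records = [line.split(",") for line in raw_lines]

# GROUP records BY device
grouped = defaultdict(list)
for date, device, reading in records:
    grouped[device].append(int(reading))

# FOREACH grouped GENERATE device, AVG(reading)
averages = {device: sum(r) / len(r) for device, r in grouped.items()}
print(averages["device1"])  # 22.5
```

On a real cluster, each of those steps is translated into MapReduce work for you, which is exactly the convenience these higher-level interfaces provide.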

Although you can look at the data using multiple products, we can't leave out one of the world's most widely distributed data-analysis tools: Excel. That's right, Microsoft has engineered the HDInsight service to accept commands from Excel and return data so that you and your users can visualize, augment, combine, and calculate the results with other data in a familiar environment. We'll cover that process in more depth in Chapter 4.

HDInsight Emulator

But you don't have to deploy your jobs or even stand up a cluster on Azure to develop and test your code. Microsoft provides an HDInsight Emulator that runs locally on your development system. Using this emulator, you can simulate the HDInsight environment in Azure locally and test and run your jobs before you deploy them. Of course, there are some differences and considerations for working in this environment, which we cover in more depth in Chapter 2.

Summary

When you add up all of the ways you can load data, process it with HDInsight interfaces, and then visualize it with any number of tools (from Microsoft and other vendors as well), you end up with a complete stack of technologies for working with big data, from its source all the way to viewing the data in a spreadsheet, without having to leave the Microsoft ecosystem. Of course, you're not locked in, either. You can use lots of technologies to work with the data at any stage of the ingress, processing, and viewing stages. Now let's take that sample data set referenced in the intro and get started!

Chapter 2 Getting started with HDInsight

In this chapter, we explain how to get started, from setting up your Microsoft Azure account through loading some data into your account and setting up your first cluster. We also show you how to deploy and manage an HDInsight cluster, even using a free or low-cost account for testing and learning, and cover the on-premises testing emulator.

HDInsight as a cloud service

Microsoft Azure is a cloud service provided by Microsoft, and one of the services offered is HDInsight, an Apache Hadoop-based distribution running in the cloud on Azure. Another service offered by Azure is blob storage. Azure Storage functions as the default file system and stores data in the cloud, not on-premises or in the cluster's nodes. This way, when you are done running an HDInsight job, the cluster can be decommissioned (to save you money) while your data remains intact. The combination of Azure Storage and HDInsight provides a powerful framework for running MapReduce jobs. MapReduce is the Hadoop data-processing framework used for parallel processing of data using multiple nodes. Creating an HDInsight cluster is quick and easy: log in to Azure, select the number of nodes, name the cluster, and set permissions. The cluster is available on demand, and once a job is completed, the cluster can be deleted, but the data remains in Azure Storage. Having the data securely stored in the cloud before, during, and after processing gives HDInsight an edge compared with other types of Hadoop deployments. Storing the data this way is particularly useful in cases where the Hadoop cluster does not need to stay up for long periods of time. It is worth noting that some other usage patterns, such as the data exploration pattern (also known as the data lake pattern), require the Hadoop cluster and the data to be persisted at all times.
In these usage patterns, users analyze the data directly on Hadoop, and for these cases, other Hadoop solutions, such as the Microsoft Analytics Platform System or the Hortonworks Data Platform for Windows, are more suitable.
