<h1><a href="http://sqlblog.com/blogs/buck_woody/archive/2012/05/29/windows-azure-end-to-end-examples.aspx">Windows Azure End to End Examples</a></h1> <p><em>Buck Woody | Tue, 29 May 2012</em></p> <p>I’m fascinated by the way people learn. I’m told there are several methods people use to understand new information, from reading to watching, from experiencing to exploring. </p> <p>Personally, I use multiple methods of learning when I encounter a new topic, usually starting with reading a bit about the concepts. I quickly want to put those into practice, however, especially in the technical realm. I immediately look for examples where I can start trying out the concepts. But I often want a “real” example – not just something that represents the concept, but something that is real-world, showing some feature I could actually use. </p> <p>And it’s no different with the Windows Azure platform – I like finding things I can do now, and actually use. So when I started learning Windows Azure, <a href="http://www.microsoft.com/en-us/download/details.aspx?id=8396" target="_blank">I of course began with the Windows Azure Training Kit</a> – which has lots of examples, labs, presentations and so on. But from there, I wanted more examples I could learn from, and eventually teach others with. I was asked if I would write a few of those up, so here are the ones I use. </p> <h2>CodePlex</h2> <p><a href="http://www.codeplex.com/" target="_blank">CodePlex is Microsoft’s version of an “Open Source” repository</a>. Anyone can start a project, add code, documentation and more to it, and make it available to the world, free of charge, under whatever license they wish. Microsoft also uses this location for most of the examples we publish, and for sample databases for SQL Server. </p> <p>If you search in CodePlex for “Azure”, you’ll come back with a list of projects that folks have posted, including those of us at Microsoft. The source code and documentation are there, so you can learn using actual examples of code that will do what you need. There’s everything from a simple table query to <a href="http://blobshare.codeplex.com/" target="_blank">a full project that is sort of a “Corporate Dropbox” that uses Windows Azure Storage</a>. </p> <p>The advantage is that this code is immediately usable. It’s searchable, and you can often find a complete solution to meet your needs. The disadvantage is that the code is pretty specific – it may not cover the kind of large project you’re looking for. Also, depending on the author(s), you might not find the documentation level you want. </p> <p><strong><em>Link: <a href="http://azureexamples.codeplex.com/site/search?query=Azure&amp;ac=8">http://azureexamples.codeplex.com/site/search?query=Azure&amp;ac=8</a></em></strong></p> <h2>Tailspin</h2> <p><a href="http://msdn.microsoft.com/en-us/practices/default" target="_blank">Microsoft Patterns and Practices</a> is a group at Microsoft that does an amazing job of sharing standard ways of doing IT – from operations to coding. If you’re not familiar with this resource, make sure you read up on it. Long before I joined Microsoft I used their work in my daily job – it saved a ton of time. It has resources not only for Windows Azure but for other Microsoft software as well. 
</p> <p>The Patterns and Practices group also publishes full books – you can buy these, but many are also online for free. There’s an end-to-end example for Windows Azure using a company called “Tailspin”, and the work covers not only the code but the design of the full solution. If you really want to understand the thought that goes into a Platform-as-a-Service solution, this is an excellent resource. </p> <p>The advantages are that this is a book, it’s complete, and it includes a discussion of design decisions. The disadvantage is that it’s a little over a year old – and in “Cloud” years that’s a lot. So much has changed, improved and been added that you should treat this as a resource, but not the only one. Still, highly recommended. </p> <p><strong><em>Link: <a href="http://msdn.microsoft.com/en-us/library/ff728592.aspx">http://msdn.microsoft.com/en-us/library/ff728592.aspx</a></em></strong></p> <h2>Azure Stock Trader</h2> <p>Sometimes you need a mix of a CodePlex-style application and a little more detail on how it was put together. And it would be great if you could actually play with the completed application, to see how it really functions on the actual platform.</p> <p>That’s the Azure Stock Trader application. There’s a place where you can read about the application, and it has been published to Windows Azure – the production platform – so you can use it, explore it, and see how it performs. </p> <p>I use this application all the time to demonstrate Windows Azure, or a particular part of Windows Azure.</p> <p>The advantage is that this is an end-to-end application, and online as well. The disadvantage is that it takes a bit of self-learning to work through. </p> <p><strong><em>Links: Learn it: <a href="http://msdn.microsoft.com/en-us/netframework/bb499684">http://msdn.microsoft.com/en-us/netframework/bb499684</a> Use it: <a href="https://azurestocktrader.cloudapp.net/">https://azurestocktrader.cloudapp.net/</a></em></strong></p> <h1><a href="http://sqlblog.com/blogs/buck_woody/archive/2012/02/20/big-data-a-microsoft-tools-approach.aspx">Big Data - A Microsoft Tools Approach</a></h1> <p><em>Buck Woody | Mon, 20 Feb 2012</em></p> <p><em><span style="color:#c0504d;">(As with all of these types of posts, check the date of the latest update I&rsquo;ve made here. Anything older than 6 months is probably out of date, given the speed with which we release new features into Windows and SQL Azure.)</span></em></p>
<p>I don&rsquo;t normally like to discuss things in terms of tools. I find that whenever you start with a given tool (or even a tool stack) it&rsquo;s too easy to fit the problem to the tool(s), rather than the other way around as it should be.</p>
<p>That being said, it&rsquo;s often useful to have an example to work through to better understand a concept. But like many ideas in Computer Science, &ldquo;Big Data&rdquo; is too broad a term in use to show a single example that brings out the multiple processes, use-cases and patterns you can use it for.</p>
<p>So we turn to a description of the tools you can use to analyze large data sets. &ldquo;Big Data&rdquo; is a term used lately to describe data sets that exhibit the &ldquo;<a href="http://radar.oreilly.com/2012/01/what-is-big-data.html" target="_blank">Four V&rsquo;s</a>&rdquo; as a characteristic, but I have a simpler definition I like to use:</p>
<p align="center"><em><span style="color:#0000ff;font-size:small;">Big Data involves a data set too large to process in a reasonable period of time</span></em></p>
<p>I realize that&rsquo;s a bit broad, but in my mind it answers the question and is fairly future-proof. The general idea is that you want to analyze some data, and with whatever methods, storage and compute you currently have at hand, you can&rsquo;t finish processing it in a time period you are comfortable with. I&rsquo;ll explain some new tools you can use for this processing.</p>
<p>Yes, this post is Microsoft-centric. There are probably posts from other vendors and open-source that cover this process in the way they best see fit. And of course you can always &ldquo;mix and match&rdquo;, meaning using Microsoft for one or more parts of the process and other vendors or open-source for another. I never advise that you use any one vendor blindly - educate yourself, examine the facts, perform some tests and choose whatever mix of technologies best solves your problem.</p>
<p>At the risk of being vendor-specific, and probably incomplete, I use the following short list of tools Microsoft has for working with &ldquo;Big Data&rdquo;. There is no single package that performs all phases of analysis. These tools are what I use; they should not be taken as Microsoft&rsquo;s authoritative, final toolset for a given problem space. In fact, that&rsquo;s the key: find the problem and then fit the tools to it.</p>
<h2>Process Types</h2>
<p>I break up the analysis of the data into two process types. The first is examining and processing the data <em>in-line</em>, meaning as the data passes through some process. The second is a <em>store-analyze-present</em> process.</p>
<h2>Processing Data In-Line</h2>
<p>Processing data in-line means that the data doesn&rsquo;t have a destination - it remains in the source system. But as it moves from an input or is routed to storage within the source system, various methods are available to examine the data as it passes, and either trigger some action or create some analysis.</p>
<p>You might not think of this as &ldquo;Big Data&rdquo;, but in fact it can be. Organizations have huge amounts of data stored in multiple systems. Many times the data from these systems does not end up in a database for evaluation. There are options, however, to evaluate that data in real time and either act on it, or copy or stream it to another process for evaluation.</p>
<p>The advantage of an in-stream data analysis is that you don&rsquo;t necessarily have to store the data again to work with it. That&rsquo;s also a disadvantage - depending on how you architect the solution, you might not retain a historical record. One method of dealing with this requirement is to trigger a rollup collection or a more detailed collection based on the event.</p>
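<p>To make that concrete, here is a minimal, purely illustrative sketch of the in-line pattern in Python. The event shape, threshold and alert handling are my own assumptions, not tied to any of the products below: events are examined as they pass through, a trigger fires on a condition, and only a rollup is retained rather than the full stream.</p> <pre>
# Illustrative sketch: examine events in-flight, trigger on a condition,
# and keep only a rollup instead of storing every event.
def handle_alert(event):
    print("Threshold exceeded:", event)    # act on it, or route it elsewhere

def in_line_monitor(event_stream, threshold=100.0):
    rollup = {"count": 0, "total": 0.0, "alerts": 0}
    for event in event_stream:             # data passes through; no destination store
        rollup["count"] += 1
        rollup["total"] += event["value"]
        if event["value"] > threshold:     # the "trigger" on the passing data
            rollup["alerts"] += 1
            handle_alert(event)
    return rollup                          # only the summary is persisted

print(in_line_monitor([{"value": 50.0}, {"value": 150.0}]))
</pre>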
<p><strong>StreamInsight </strong>- StreamInsight is Microsoft&rsquo;s &ldquo;Complex Event Processing&rdquo; or CEP engine. This product, hooked into SQL Server 2008R2, has multiple ways of interacting with a data flow. You can create adapters to talk with systems, and then examine the data mid-stream and create triggers to do something with it. You can read more about StreamInsight here: <a title="http://msdn.microsoft.com/en-us/library/ee391416(v=sql.110).aspx" href="http://msdn.microsoft.com/en-us/library/ee391416(v=sql.110).aspx">http://msdn.microsoft.com/en-us/library/ee391416(v=sql.110).aspx</a>&nbsp;</p>
<p><strong>BizTalk </strong>- When there is more latency available between the initiation of the data and its processing, you can use Microsoft BizTalk. This is a message-passing and Service Bus oriented tool, and it can also be used to join data from systems that normally do not have a direct link - for instance, a Mainframe system and SQL Server. You can learn more about BizTalk here: <a href="http://www.microsoft.com/biztalk/en/us/overview.aspx">http://www.microsoft.com/biztalk/en/us/overview.aspx</a>&nbsp;</p>
<p><strong>.NET and the Windows Azure Service Bus </strong>- Along the same lines as BizTalk but with a more programming-oriented design are the Windows and Windows Azure Service Bus tools. The Service Bus allows you to pass messages as well, and opens up web interactions and even inter-company routing. BizTalk can do this as well, but the Service Bus tools use an API approach for designing the flow and interfaces you want. The Service Bus offerings are also intended as near real-time, not as a streaming interface. You can learn more about the Windows Azure Service Bus here: <a href="http://www.windowsazure.com/en-us/home/tour/service-bus/">http://www.windowsazure.com/en-us/home/tour/service-bus/</a> and more about the Event Processing side here: <a href="http://msdn.microsoft.com/en-us/magazine/dd569756.aspx">http://msdn.microsoft.com/en-us/magazine/dd569756.aspx</a>&nbsp;</p>
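<p>As a rough illustration of that API-style approach, here is a hedged Python sketch of posting a message to a Service Bus queue over its REST interface. The namespace, queue name, message body and token are placeholders - in practice you obtain an Access Control (WRAP) token first, and the SDK libraries wrap all of this for you.</p> <pre>
import requests

# Placeholders - substitute your own namespace, queue, and a valid token.
namespace = "mynamespace"
queue = "orders"
token = 'WRAP access_token="..."'   # acquired separately from Access Control

url = "https://%s.servicebus.windows.net/%s/messages" % (namespace, queue)
resp = requests.post(url,
                     data="work item 42",   # the message body
                     headers={"Authorization": token,
                              "Content-Type": "text/plain"})
resp.raise_for_status()   # expect 201 Created on success
</pre>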
<h2>Store-Analyze-Present</h2>
<p>A more traditional approach to an organization&rsquo;s data is to store the data and analyze it out-of-band. This began with simply running code over a data store, but as locking and blocking became an issue on a file system, Relational Database Management Systems (RDBMSs) were created. Over time a distinction was made between data used in an online processing system, meant to be highly available for writing data (OLTP), and systems designed for analytical and reporting purposes (OLAP).</p>
<p>Later, data grew larger than these systems were designed for - primarily because of their consistency requirements. In analysis, however, consistency isn&rsquo;t always a requirement, and so file-based systems for that analysis were re-introduced from Mainframe concepts, with new technology layered on for speed and size.</p>
<p>I normally break up the process of analyzing large data sets into four phases (a skeletal sketch follows the list):</p>
<ol>
<li><em>Source and Transfer </em>- Obtaining the data at its source and transferring or loading it into the storage; optionally transforming it along the way</li>
<li><em>Store and Process</em> - Data is stored on some sort of persistence, and in some cases an engine handles the acquisition and placement on persistent storage, as well as retrieval through an interface.</li>
<li><em>Analysis </em>- A new layer introduced with &ldquo;Big Data&rdquo; is a separate analysis step. This is dependent on the engine or storage methodology, is often programming-language or script based, and sometimes feeds the analysis results back into the data. Some engines and processes combine this function with the previous phase.</li>
<li><em>Presentation</em> - In most cases, the data needs a graphical representation to be comprehensible, especially in a series or trend analysis. In other cases a simple symbolic representation suffices, similar to the &ldquo;dashboard&rdquo; elements in a Business Intelligence suite. Presentation tools may also have an analysis or refinement capability to allow end-users to work with the data sets. As in the Analysis phase, some methodologies bundle the Analysis and Presentation phases into one toolset.</li>
</ol>
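<p>Purely as an illustration of how the four phases hang together, here is a skeletal Python sketch; every function body is a stand-in for whichever tool from the sections below you choose, and the sample data is invented for the example.</p> <pre>
# Skeleton of the four phases; each step is a placeholder for a real tool
# (SSIS, Hadoop, Analysis Services, Excel, and so on - see the sections below).
def source_and_transfer():
    # Phase 1: obtain data at its source, optionally transforming en route.
    return [{"region": "West", "sales": 120.0}, {"region": "East", "sales": 80.0}]

def store_and_process(rows):
    # Phase 2: persist the data through some engine; a dict stands in here.
    return {"rows": rows}

def analyze(store):
    # Phase 3: language- or script-based analysis over the stored data.
    return sum(r["sales"] for r in store["rows"])

def present(result):
    # Phase 4: graphical or symbolic presentation; print() stands in here.
    print("Total sales:", result)

present(analyze(store_and_process(source_and_transfer())))
</pre>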
<h3>Source and Transfer</h3>
<p>You&rsquo;ll notice that in this area, as in those that follow, Microsoft is adopting not only its own technologies but also those from open source. This is a positive sign, and means that you will have a best-of-breed, supported set of tools to move the data from one location to another. Traditional file copy, File Transfer Protocol and the like are certainly options, but they do not normally deal well with moving datasets.</p>
<p>I&rsquo;ve already mentioned the ability of a streaming tool to push data into a store-analyze-present model, so I&rsquo;ll follow up that discussion with the tools that can extract data from one source and place it in another.</p>
<p><strong><span style="color:#800000;">SQL Server Integration Services (SSIS)/SQL Server Bulk Copy Program (BCP)</span> </strong>- SSIS is a SQL Server tool used to move data from one location to another, and optionally perform transform or other processes as it does so. You are not limited to working with SQL Server data - in fact, almost any modern source of data from text to various database platforms is available to move to various systems. It is also extremely fast and has a rich development environment. You can learn more about SSIS here: <a href="http://msdn.microsoft.com/en-us/library/ms141026.aspx">http://msdn.microsoft.com/en-us/library/ms141026.aspx</a> BCP is a tool that has been used with SQL Server data since the first releases; it has multiple sources and destinations as well. It is a command-line utility,and has some limited transform capabilities. You can learn more about BCP here: <a href="http://msdn.microsoft.com/en-us/library/ms162802.aspx">http://msdn.microsoft.com/en-us/library/ms162802.aspx</a>&nbsp;</p>
<p><strong><span style="color:#0000ff;"><span style="color:#800000;">Sqoop</span> </span></strong>- Tied to Microsoft&rsquo;s latest announcements with Hadoop on Windows and Windows Azure, Sqoop is a tool that is used to move data between SQL Server 2008R2 (and higher)&nbsp;and Hadoop, quickly and efficiently. You can read more about that in the Readme file here: <a href="http://www.microsoft.com/download/en/details.aspx?id=27584">http://www.microsoft.com/download/en/details.aspx?id=27584</a>&nbsp;</p>
<p><span style="color:#800000;"><strong>Application Programming Interfaces</strong></span> - API&rsquo;s exist in most every major language that can connect to one data source, access data, optionally transforming it and storing it in another system. Most every dialect of&nbsp; the .NET-based languages contain methods to perform this task.</p>
<h3>Store and Process</h3>
<p>Data at rest is normally used for historical analysis. In some cases this analysis is performed near real-time, and in others historical data is analyzed periodically. Systems that handle data at rest range from simple storage to active management engines.</p>
<p><strong><span style="color:#800000;">SQL Server</span></strong> - Microsoft&rsquo;s flagship RDBMS can indeed store massive amounts of complex data. I am familiar with a two systems in excess of 300 Terabytes of federated data, and the <a href="http://pan-starrs.ifa.hawaii.edu/public/" target="_blank">Pan-Starrs</a> project is designed to handle 1+ Petabyte of data. The theoretical limit of SQL Server DataCenter edition is 540 Petabytes. SQL Server is an engine, so the data access and storage is handled in an abstract layer that also handles concurrency for ACID properties. You can learn more about SQL Server here: <a href="http://www.microsoft.com/sqlserver/en/us/product-info/compare.aspx">http://www.microsoft.com/sqlserver/en/us/product-info/compare.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">SQL Azure Federations</span></strong> - SQL Azure is a database service from Microsoft associated with the Windows Azure platform. Database Servers are multi-tenant, but are shared across a &ldquo;fabric&rdquo; that moves active databases for redundancy and performance. Copies of all databases are kept triple-redundant with a consistent commitment model. Databases are (at this writing - check <a href="http://WindowsAzure.com">http://WindowsAzure.com</a> for the latest) capped at a 150 GB size limit per database. However, Microsoft released a &ldquo;Federation&rdquo; technology, allowing you to query a head node and have the data federated out to multiple databases. This improves both size and performance. You can read more about SQL Azure Federations here: <a href="http://social.technet.microsoft.com/wiki/contents/articles/2281.federations-building-scalable-elastic-and-multi-tenant-database-solutions-with-sql-azure.aspx">http://social.technet.microsoft.com/wiki/contents/articles/2281.federations-building-scalable-elastic-and-multi-tenant-database-solutions-with-sql-azure.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">Analysis Services</span></strong> - The Business Intelligence engine within SQL Server, called Analysis Services, can also handle extremely large data systems. In addition to traditional BI data store layouts (ROLAP, MOLAP and HOLAP), the latest version of SQL Server introduces the Vertipaq column-storage technology allowing more direct access to data and a different level of compression. You can read more about Analysis Services here: <a href="http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/analysis-services.aspx">http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/analysis-services.aspx</a> and more about Vertipaq here: <a href="http://msdn.microsoft.com/en-us/library/hh212945(v=SQL.110).aspx">http://msdn.microsoft.com/en-us/library/hh212945(v=SQL.110).aspx</a></p>
<p><span style="color:#800000;"><strong>Parallel Data Warehouse </strong></span>- The Parallel Data Warehouse (PDW) offering from Microsoft is largely described by the title. Accessed in multiple ways including using Transact-SQL (the Microsoft dialect of the Structured Query Language), <a href="http://sqlpdw.com/2010/07/what-mpp-means-to-sql-server-parallel-data-warehouse/" target="_blank">This is an MPP appliance</a>&nbsp;scaling in parallel to extremely large datasets. It is a hardware and software offering - you can learn more about it here: <a href="http://www.microsoft.com/sqlserver/en/us/solutions-technologies/data-warehousing/pdw.aspx">http://www.microsoft.com/sqlserver/en/us/solutions-technologies/data-warehousing/pdw.aspx</a></p>
<p><strong><span style="color:#800000;">HPC Server</span></strong> - Microsoft&rsquo;s High-Performance Computing version of Windows Server deals not only with large data sets, but with extremely complicated computing requirements. A scale-out architecture and inter-operation with Linux systems, as well as dozens of applications pre-written to work with this server make this a capable &ldquo;Big Data&rdquo; system. It is a mature offering, with a long track record of success in scientific, financial and other areas of data processing. It is available both on premises and in Windows Azure, and also in a hybrid of both models, allowing you to &ldquo;rent&rdquo; a super-computer when needed. You can read more about it here: <a href="http://www.microsoft.com/hpc/en/us/product/cluster-computing.aspx">http://www.microsoft.com/hpc/en/us/product/cluster-computing.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">Hadoop</span></strong> - Pairing up with Hortonworks, Microsoft has released the Hadoop Open-Source system -&nbsp; including HDFS and a Map/Reduce standardized software, Hive and Pig - on Windows and the Windows Azure platform. This is not a customized version; off-the-shelf concepts and queries work well here. You can read more about Hadoop here: <a href="http://hadoop.apache.org/common/docs/current/">http://hadoop.apache.org/common/docs/current/</a> and you can read more about Microsoft&rsquo;s offerings here: <a href="http://hortonworks.com/partners/microsoft/">http://hortonworks.com/partners/microsoft/</a>&nbsp;and here: <a href="http://social.technet.microsoft.com/wiki/contents/articles/6204.hadoop-based-services-for-windows.aspx">http://social.technet.microsoft.com/wiki/contents/articles/6204.hadoop-based-services-for-windows.aspx</a></p>
<p><strong><span style="color:#800000;">Windows and Azure Storage</span></strong> - Although not an engine - other than a triple-redundant, immediately consistent commit - Windows Azure can hold terabytes of information and make it available to everything from the R programming language to the Hadoop offering. Binary storage (Blobs) and Table storage (Key-Value Pair) data can be queried across a distributed environment. You can learn more about Windows Azure storage here: <a href="http://msdn.microsoft.com/en-us/library/windowsazure/gg433040.aspx">http://msdn.microsoft.com/en-us/library/windowsazure/gg433040.aspx</a>&nbsp;</p>
<h3>Analysis</h3>
<p>In a &ldquo;Big Data&rdquo; environment, it&rsquo;s not unusual to have a specialized set of tasks for analyzing and even interpreting the data. This is a new field called &ldquo;Data Science&rdquo;, requiring not only computing skills but also a heavy emphasis on math.</p>
<p><span style="color:#800000;"><strong>Transact-SQL </strong></span>- T-SQL is the dialect of the Structured Query Language used by Microsoft. It includes not only robust selection, updating and manipulating of data, but also analytical and domain-level interrogation as well. It can be used on SQL Server, PDW and ODBC data sources. You can read more about T-SQL here: <a href="http://msdn.microsoft.com/en-us/library/bb510741.aspx">http://msdn.microsoft.com/en-us/library/bb510741.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">Multidimensional Expressions and Data Analysis Expressions</span></strong> - The MDX and DAX languages allow you to query multidimensional data models that do not fit well with typical two-plane query languages. Pivots, aggregations and more are available within these constructs to query and work with data in Analysis Services. You can read more about MDX here: <a href="http://msdn.microsoft.com/en-us/library/ms145506(v=sql.110).aspx">http://msdn.microsoft.com/en-us/library/ms145506(v=sql.110).aspx</a> and more about DAX here: <a href="http://www.microsoft.com/download/en/details.aspx?id=28572">http://www.microsoft.com/download/en/details.aspx?id=28572</a>&nbsp;</p>
<p><strong><span style="color:#800000;">HPC Jobs and Tasks </span></strong>- Work submitted to the Windows HPC Server has a particular job - essentially a reservation request for resources. Within a job you can submit tasks, such as parametric sweeps and more. You can learn more about Jobs and Tasks here: <a href="http://technet.microsoft.com/en-us/library/cc719020(v=ws.10).aspx">http://technet.microsoft.com/en-us/library/cc719020(v=ws.10).aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">HiveQL </span></strong>- HiveQL is the language used to query a Hive object running on Hadoop. You can see a tutorial on that process here: <a href="http://social.technet.microsoft.com/wiki/contents/articles/6628.aspx">http://social.technet.microsoft.com/wiki/contents/articles/6628.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">Piglatin </span></strong>- Piglatin is the submission language for the Pig implementation on Hadoop. An example of that process is here: <a href="http://sqlblog.com/b/avkashchauhan/archive/2012/01/10/running-apache-pig-pig-latin-at-apache-hadoop-on-windows-azure.aspx">http://blogs.msdn.com/b/avkashchauhan/archive/2012/01/10/running-apache-pig-pig-latin-at-apache-hadoop-on-windows-azure.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">Application Programming Interfaces </span></strong>- Almost all of the analysis offerings have associated API&rsquo;s - of special note is Microsoft Research&rsquo;s Infer.NET, a new language construct for framework for running Bayesian inference in graphical models, as well as probabilistic programming. You can read more about Infer.NET here: <a href="http://research.microsoft.com/en-us/um/cambridge/projects/infernet/">http://research.microsoft.com/en-us/um/cambridge/projects/infernet/</a>&nbsp;</p>
<h3>Presentation</h3>
<p>Lots of tools can present the data once you have done the primary analysis. In fact, there&rsquo;s a great video comparing various tools here: <a href="http://msbiacademy.com/Lesson.aspx?id=73">http://msbiacademy.com/Lesson.aspx?id=73</a>, primarily focused on Business Intelligence. That term itself is no longer as crisply defined, but the tools I show below can be used in multiple ways - not just traditional Business Intelligence scenarios. Application Programming Interfaces (APIs) can also be used for presentation, but I&rsquo;ll focus here on &ldquo;out of the box&rdquo; tools.</p>
<p><strong><span style="color:#800000;">Excel</span></strong> - Microsoft&rsquo;s Excel can be used not only for single-desk analysis of data sets, but with larger datasets as well. It has interfaces into SQL Server, Analysis Services, can be connected to the PDW, and is a first-class job submission system for the Windows HPC Server. You can watch a video about Excel and big data here: <a href="http://www.microsoft.com/en-us/showcase/details.aspx?uuid=e20b7482-11c9-4965-b8f0-7fb6ac7a769f">http://www.microsoft.com/en-us/showcase/details.aspx?uuid=e20b7482-11c9-4965-b8f0-7fb6ac7a769f</a>&nbsp;and you can also connect Excel to Hadoop: <a href="http://social.technet.microsoft.com/wiki/contents/articles/how-to-connect-excel-to-hadoop-on-azure-via-hiveodbc.aspx">http://social.technet.microsoft.com/wiki/contents/articles/how-to-connect-excel-to-hadoop-on-azure-via-hiveodbc.aspx</a></p>
<p><strong><span style="color:#800000;">Reporting Services</span></strong> - Reporting Services is a SQL Server tool that can query and show data from multiple sources, all at once. It can also be used with Analysis Services. You can read more about Reporting Services here: <a href="http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/reporting-services.aspx">http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/reporting-services.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">Power View</span></strong> - Power View is a &ldquo;Self-Service&rdquo; Business Intelligence reporting tool, which can work with on-premises data in addition to SQL Azure and other data. You can read more about it and see videos of Power View in action here: <a href="http://www.microsoft.com/sqlserver/en/us/future-editions/business-intelligence/SQL-Server-2012-reporting-services.aspx">http://www.microsoft.com/sqlserver/en/us/future-editions/business-intelligence/SQL-Server-2012-reporting-services.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">SharePoint Services -</span></strong> Microsoft has rolled several capable tools in SharePoint as &ldquo;Services&rdquo;. This has the advantage of being able to integrate into the working environment of many companies. You can read more about&nbsp; lots of these reporting and analytic presentation tools here: <a href="http://technet.microsoft.com/en-us/sharepoint/ee692578">http://technet.microsoft.com/en-us/sharepoint/ee692578</a>&nbsp;</p>
<p>This is by no means an exhaustive list - more capabilities are added all the time to Microsoft&rsquo;s products, and things will surely shift and merge as time goes on. Expect today&rsquo;s &ldquo;Big Data&rdquo; to be tomorrow&rsquo;s &ldquo;Laptop Environment&rdquo;.</p> <h1><a href="http://sqlblog.com/blogs/buck_woody/archive/2012/02/07/application-lifecycle-management-overview-for-windows-azure.aspx">Application Lifecycle Management Overview for Windows Azure</a></h1> <p><em>Buck Woody | Tue, 07 Feb 2012</em></p> <p>Developing in Windows Azure is at once not that much different from what you’re familiar with in on-premises systems, and different in significant ways. Because of these differences, developers often ask about the specific process to develop and deploy a Windows Azure application - more formally called Application Lifecycle Management, or ALM. </p> <p>There are specific resources you can use to learn more about various parts of ALM - I’ve referenced those at the end of this post. But ALM has multiple definitions, from the governance of code injection, domain upgrade, testing, process flow and more. Many developers are interested in the finer-grained information: how do I develop and deploy an application? What tools do I need, and how do I get the code running somewhere that I can test? </p> <p>I’ll cover the very high-level process here, and refer you to specifics at the end of each section, so that you can take it all in at one viewing, and then bookmark for more detail when you need more information. I won’t be covering processes like Continuous Integration or Agile and other methodologies in this post - I’ll blog about those later. </p> <h2>Initial Development</h2> <p>You start with writing code, and you have three ways to do this: Visual Studio (even the Express Edition works), Eclipse, or <a href="https://www.ibm.com/developerworks/webservices/library/ws-restful/" target="_blank">leveraging the REST API format</a>. You can do this in a standalone (non-connected) environment like your laptop. </p> <p align="left">Using Visual Studio is one of the simplest methods to create an Azure application, allowing you to combine the Azure components you want to leverage (Storage, Compute, SQL Azure, the Service Bus, etc.) along with the on-premises code you have now or are creating. Once you’ve installed and patched Visual Studio, just download and install the Windows Azure Software Development Kit (SDK) and you’ll have not only all the APIs you need to talk to Azure, but a fully functioning local environment to run and test your code before you deploy it. You’ll also get a robust set of samples. You can download what you need for all of that (free) here: <a href="http://www.windowsazure.com/en-us/develop/downloads/">http://www.windowsazure.com/en-us/develop/downloads/</a> . There’s a step-by-step process here: <a href="http://msdn.microsoft.com/en-us/magazine/ee336122.aspx"><u><font color="#0066cc">http://msdn.microsoft.com/en-us/magazine/ee336122.aspx</font></u></a> </p> <p>You can also use Eclipse to develop for Windows Azure. You won’t get the full runtime environment in just that kit alone, but you can use this successfully on a Linux system. I have several folks using this method. The downloads and documentation for that are here: <a href="http://www.windowsazure4e.org/"><u><font color="#0066cc">http://www.windowsazure4e.org/</font></u></a> </p> <p>Finally, you can use the REST APIs directly to access and control Azure assets. It’s not my preferred method, but it’s possible. 
There are REST API’s for various sections of Azure. You can find the main reference for that here: <a href="http://msdn.microsoft.com/en-us/library/windowsazure/ff800682.aspx">http://msdn.microsoft.com/en-us/library/windowsazure/ff800682.aspx</a>&#160;</p> <p><font color="#9bbb59"><font color="#c0504d"><strong><em>Note: </em></strong>We recently demonstrated using a Cloud-based Integrated Development Environment (IDE) for Node.js deployment to Windows Azure. More on that here:</font> </font><a href="http://www.readwriteweb.com/cloud/2012/01/cloud9-ide-to-enable-nodejs-ap.php"><u><font color="#0066cc">http://www.readwriteweb.com/cloud/2012/01/cloud9-ide-to-enable-nodejs-ap.php</font></u></a> </p> <h2>Deploying to a Test Instance</h2> <p>After you write the code, you’ll need to test it somewhere. The Azure Emulator on your development laptop is for a single user on that laptop, and it also has some subtle differences from the production fabric as you might imagine. Normally you’ll set up a small subscription to run and test the application, just like you would have a set of test servers. Each subscription has its own management keys and certificates, so this assists in keeping the testing environment separate for billing and control. </p> <p>More on that general information here: <a href="http://msdn.microsoft.com/en-us/library/ff803362.aspx">http://msdn.microsoft.com/en-us/library/ff803362.aspx</a>&#160;</p> <h2>Deploying to Production</h2> <p>Once you have developed the code and tested it, you need to move it to a location where users can access it. In reality, there is no physical difference in the type of machines, fabric or any other component in “Production” Windows Azure accounts and the “Test” accounts, but you’ll most often pick smaller systems to deploy on in testing, and you’ll probably keep the URL in the plain format.</p> <p>In the Production Windows Azure account, the team normally limits the access to the account for deployment to a separate set of developers. This ensures code flow and control. A DNS name is normally mapped to the longer, Microsoft-generated URL so that your users access the application or data the way you want them to. </p> <p>More on setting up an account here: <a href="http://techinch.com/2010/06/14/setup-your-windows-azure-account/">http://techinch.com/2010/06/14/setup-your-windows-azure-account/</a>&#160;</p> <h2>Managing Code Change</h2> <p>With the application deployed, there are two broad tasks you need to consider. One is managing changes through the application, and the other involves management, monitoring and performance tuning for an application.</p> <p>To make a code change, the standard ALM process is followed, just as above. You can use command-line tools to automate the process as you would with an on-premises system. A vide on that shows you how: <a href="http://www.microsoftpdc.com/2009/SVC25">http://www.microsoftpdc.com/2009/SVC25</a>. Normally this is used with an “In-Place” upgrade into Production Account, since your testing is completed in a separate account. More on that process here: <a href="http://msdn.microsoft.com/en-us/library/windowsazure/ee517255.aspx">http://msdn.microsoft.com/en-us/library/windowsazure/ee517255.aspx</a></p> <p>One difference is the “VIP Swap” process you can use for the final push to Production. In essence, this allows you to have two copies of the application running on the Production account, with a quick way to cut over and back when you’re ready. 
The process for that is detailed here: <a href="http://msdn.microsoft.com/en-us/library/windowsazure/ee517253.aspx">http://msdn.microsoft.com/en-us/library/windowsazure/ee517253.aspx</a></p> <p>For monitoring, you have several options. You should enable Windows Azure Diagnostics in your code - more on that here: <a href="http://archive.msdn.microsoft.com/WADiagnostics">http://archive.msdn.microsoft.com/WADiagnostics</a>. </p> <p>You can observe uptime and other information on the Windows Azure Service Dashboard, where you can also consume the uptime as an RSS feed: <a href="http://www.windowsazure.com/en-us/support/service-dashboard/">http://www.windowsazure.com/en-us/support/service-dashboard/</a></p> <p>From there, you can also use System Center to monitor not only Windows Azure deployments but internal applications as well. The Management Pack and documentation for that are here: <a href="http://www.microsoft.com/download/en/details.aspx?id=11324">http://www.microsoft.com/download/en/details.aspx?id=11324</a>. </p> <p>There are also 3rd-party tools to manage Windows Azure. More on that here: <a href="http://www.bing.com/search?q=monitor+Windows+Azure&amp;form=OSDSRC">http://www.bing.com/search?q=monitor+Windows+Azure&amp;form=OSDSRC</a></p> <h3>Other References: </h3> <p>There is a lot more detail in this official reference: <a href="https://www.windowsazure.com/en-us/develop/net/fundamentals/deploying-applications/">https://www.windowsazure.com/en-us/develop/net/fundamentals/deploying-applications/</a></p> <p>Bryan Group explains the ramifications of the Secure Development Lifecycle (SDL) with lots of collateral you can review: <a href="http://blogs.msdn.com/b/bryang/archive/2011/04/26/applying-the-sdl-to-windows-azure.aspx">http://blogs.msdn.com/b/bryang/archive/2011/04/26/applying-the-sdl-to-windows-azure.aspx</a></p> <h1><a href="http://sqlblog.com/blogs/buck_woody/archive/2012/01/24/team-foundation-server-tfs-in-the-cloud-my-experience-so-far.aspx">Team Foundation Server (TFS) in the Cloud - My Experience So Far</a></h1> <p><em>Buck Woody | Tue, 24 Jan 2012</em></p> <p>I recently joined a software development project that involves not only myself and other internal Microsoft employees, but a partner and a customer as well. We are building a hybrid solution that uses assets on premises as well as Windows Azure for processing. When we put the team together we picked a methodology (Agile) for the project (we use multiple methodologies at Microsoft - whatever the project needs) and then we started talking about Source Control. </p> <p>We’re all comfortable with various tools for check-in/check-out, branching, and so on. We have all used GIT, SVN, and TFS. Some of us have even used Source Safe in the past, but that’s another post. Each company has a full set of Source Control systems in place. But using each other’s systems requires logins, firewalls and the like - so we decided to use the <a href="http://tfspreview.com/" target="_blank">TFS Service Preview</a> to run the entire project from “the cloud”. Here are my experiences with that. </p> <p>The process was really simple. 
In fact, we talked about using the cloud TFS in the first SCRUM, and the team was working from the Work Items list that afternoon. The original account login provides a web interface to allow people to join the team. Each of us happened to have a Live.Com address, so we just invited those addresses to join and they got a link, like this: </p> <p><em>projectname.tfspreview.com</em></p> <p>I’m using Visual Studio; the TFS Preview requires SP1 plus this patch: <a href="http://go.microsoft.com/fwlink/?LinkID=212065" target="_blank"><u><font color="#0000ff">KB2581206</font></u></a></p> <p>From there, I opened Visual Studio and navigated from the main menu to Team and then Connect to Team Foundation Server. I’m given this menu: </p> <p><a href="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-79-79-metablogapi/5001.tfs_2D00_2.jpg_5F00_2.png"><img style="background-image:none;border-bottom:0px;border-left:0px;padding-left:0px;padding-right:0px;display:inline;border-top:0px;border-right:0px;padding-top:0px;" title="tfs-2.jpg" border="0" alt="tfs-2.jpg" src="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-79-79-metablogapi/1778.tfs_2D00_2.jpg_5F00_thumb.png" width="244" height="157" /></a></p> <p>Selecting port 443 and HTTPS (for security) and then ensuring the lower link has the “tfs” appended as the location, I opened the project. </p> <p><a href="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-79-79-metablogapi/7167.tfs_2D00_3_5F00_2.jpg"><img style="background-image:none;border-bottom:0px;border-left:0px;padding-left:0px;padding-right:0px;display:inline;border-top:0px;border-right:0px;padding-top:0px;" title="tfs-3" border="0" alt="tfs-3" src="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-79-79-metablogapi/5584.tfs_2D00_3_5F00_thumb.jpg" width="244" height="167" /></a></p> <p><em>(This VSTS screenshot is of a project I did in the University of Washington class I teach - I never show client code or names in a blog post)</em></p> <p>From there it’s a normal set of operations. Right now the preview doesn’t have some things I’d really like, such as an automated build or some of the testing tools, but <a href="http://blogs.msdn.com/b/bharry/archive/2011/09/14/team-foundation-server-on-windows-azure.aspx" target="_blank">you can read this blog entry to learn more about the entire sign-up process, and what the team has planned</a>.</p> <p>Each day I log in to the project, and I’m given this new sign-in option: </p> <p><a href="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-79-79-metablogapi/7635.tfs_2D00_1_5F00_2.jpg"><img style="background-image:none;border-bottom:0px;border-left:0px;padding-left:0px;padding-right:0px;display:inline;border-top:0px;border-right:0px;padding-top:0px;" title="tfs-1" border="0" alt="tfs-1" src="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-79-79-metablogapi/3438.tfs_2D00_1_5F00_thumb.jpg" width="244" height="169" /></a></p> <p>I click the option, and I open the environment, hit My Work Items query, and get to work. 
All in all, a seamless - although basic - experience. The speed at which we could set up and work on a project was really sweet. It’s remarkable how un-remarkable this is - I just do my work each day, and everything is running and backed up in the cloud. I think that’s the point. </p> <h1><a href="http://sqlblog.com/blogs/buck_woody/archive/2011/12/06/how-microsoft-helps-you-not-break-your-windows-azure-application-storage-services-versioning.aspx">How Microsoft helps you NOT break your Windows Azure Application: Storage Services Versioning</a></h1> <p><em>Buck Woody | Tue, 06 Dec 2011</em></p> <p><font size="2">One of the advantages of using Windows Azure to run your code is that you don’t have to constantly manage upgrades on your platform. While that’s a big advantage indeed, it immediately brings up the question - how do the upgrades happen? Microsoft upgrades the Azure platform in periodic increments, and the components that are affected are documented. </font></p> <p><font size="2">This brings up another question - upgrades mean change, and change can sometimes alter the way you might implement a feature. What if you have taken a dependency in your code on some feature that has been altered by an upgrade? Windows Azure does have an Application Lifecycle Management (ALM) process, which I’ll reference at the end of this post. But beyond that, there are some features we’ve put into place that will help you manage many of these changes. One of those is being able to set the version of the storage features you would like your code to use. </font></p> <p><font size="2">Windows Azure is made up of three main component areas: Computing, Storage and a group of features called the Application Fabric. You can use these components together or separately, depending on what you would like your application to do. In this post I’ll deal with version control in the storage subsystem - in other posts I’ll explain how to track and in some cases control the versions of the other components you work with.</font></p> <p><font size="2">When you send a request to a Windows Azure resource, you’re actually using a <a href="http://en.wikipedia.org/wiki/REST" target="_blank">REST</a> call. That’s a three-part call to the system that has a request (called a URI), a header, and a body of code you want to send. 
So a typical call might look like this example, which sets the storage service properties: </font></p> <p><font size="2"><strong>URI</strong>:</font></p> <pre>PUT http://myaccount.table.core.windows.net/?restype=service&amp;comp=properties HTTP/1.1</pre> <p><font size="2"><strong>Header</strong> (note the <strong>x-ms-version</strong> line):</font></p> <pre>x-ms-version: 2011-08-18
x-ms-date: Tue, 30 Aug 2011 04:28:19 GMT
Authorization: SharedKey myaccount:Z1lTLDwtq5o1UYQluucdsXk6/iB7YxEu0m6VofAEkUE=
Host: myaccount.table.core.windows.net</pre> <p><font size="2"><strong>Body</strong>:</font></p> <pre>&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;
&lt;StorageServiceProperties&gt;
    &lt;Logging&gt;
        &lt;Version&gt;1.0&lt;/Version&gt;
        &lt;Delete&gt;true&lt;/Delete&gt;
        &lt;Read&gt;false&lt;/Read&gt;
        &lt;Write&gt;true&lt;/Write&gt;
        &lt;RetentionPolicy&gt;
            &lt;Enabled&gt;true&lt;/Enabled&gt;
            &lt;Days&gt;7&lt;/Days&gt;
        &lt;/RetentionPolicy&gt;
    &lt;/Logging&gt;
    &lt;Metrics&gt;
        &lt;Version&gt;1.0&lt;/Version&gt;
        &lt;Enabled&gt;true&lt;/Enabled&gt;
        &lt;IncludeAPIs&gt;false&lt;/IncludeAPIs&gt;
        &lt;RetentionPolicy&gt;
            &lt;Enabled&gt;true&lt;/Enabled&gt;
            &lt;Days&gt;7&lt;/Days&gt;
        &lt;/RetentionPolicy&gt;
    &lt;/Metrics&gt;
&lt;/StorageServiceProperties&gt;</pre> <p><font size="2"><em>(</em><a href="http://msdn.microsoft.com/en-us/library/windowsazure/hh452240.aspx" target="_blank"><em>Source</em></a><em> of this code)</em></font></p> <p><font size="2">The x-ms-version element in the header block is where you set the version of the Storage Services you would like your code to use. You can find a list of the <a href="http://msdn.microsoft.com/en-us/library/windowsazure/dd894041.aspx" target="_blank">features introduced in each version here</a>. Adding that element to the header isn’t required, but it’s a best practice to do so. </font></p> <p><font size="2">You don’t have to use REST calls directly, however. It’s more common to use the API in the Software Development Kit to just change the property in your IDE environment - the setting you’re looking for there is the <a href="http://msdn.microsoft.com/en-us/library/windowsazure/hh343266.aspx">Set Storage Service Properties</a> call. </font></p>
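<p><font size="2">If you do assemble the call yourself, pinning the version is just one more header. Here is a minimal Python sketch of the request above - the account, date and signature are placeholders, and the signature must be computed from your storage key (the SDKs do this for you):</font></p> <pre>import requests

url = "http://myaccount.table.core.windows.net/?restype=service&amp;comp=properties"
headers = {
    "x-ms-version": "2011-08-18",   # the storage services version you code against
    "x-ms-date": "Tue, 30 Aug 2011 04:28:19 GMT",   # must be the current UTC time
    "Authorization": "SharedKey myaccount:&lt;signature&gt;",
}
body = open("service_properties.xml", "rb").read()   # the XML shown above
print(requests.put(url, data=body, headers=headers).status_code)</pre>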
<p><font size="2">Interestingly, rather than a breaking change, you might run into unexpected behavior if you are not aware of these parameters. In some code I recently reviewed, a newer feature of the storage system failed when it was called. On inspection I found that the developer had used an older code block from a previous version of the storage system - he was not aware that you can set the version of storage in the call. We changed the header to the latest version, and everything worked as expected. </font></p> <p><font size="2"><strong>References:</strong></font></p> <p><font size="2">The Storage Services Versioning reference and the changes for each version: <a href="http://msdn.microsoft.com/en-us/library/windowsazure/dd894041.aspx">http://msdn.microsoft.com/en-us/library/windowsazure/dd894041.aspx</a></font></p> <p><font size="2">Windows Azure Application Lifecycle Management: </font></p> <p><font size="2"><a href="http://msdn.microsoft.com/en-us/library/ff803362.aspx">http://msdn.microsoft.com/en-us/library/ff803362.aspx</a></font></p> <p><font size="2"><a href="http://channel9.msdn.com/posts/Windows-Azure-Jump-Start-03-Windows-Azure-Lifecycle-Part-1">http://channel9.msdn.com/posts/Windows-Azure-Jump-Start-03-Windows-Azure-Lifecycle-Part-1</a></font></p> <p><font size="2"><a href="http://channel9.msdn.com/Events/TechEd/Australia/Tech-Ed-Australia-2011/COS201">http://channel9.msdn.com/Events/TechEd/Australia/Tech-Ed-Australia-2011/COS201</a></font></p> <h1><a href="http://sqlblog.com/blogs/buck_woody/archive/2011/11/15/the-data-scientist.aspx">The Data Scientist</a></h1> <p><em>Buck Woody | Tue, 15 Nov 2011</em></p> <p>A new term - well, perhaps not that new - has come up and I’m actually very excited about it. The term is Data Scientist, and since it’s new, it’s fairly undefined. I’ll explain what I <em>think</em> it means, and why I’m excited about it.</p> <p>In general, I’ve found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. 
There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. </p> <p>The term Data Scientist (as far as I can make out this early in its use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods, as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly. </p> <p>I’ve divided the knowledge base for someone that would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas. </p> <p><strong>Persistence</strong></p> <p>The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it’s just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? </p> <p>This is an important decision - it’s a foundational choice that involves not only the considerable expense of purchasing systems or using Cloud Computing (PaaS, SaaS or IaaS) to source them, but also the skillsets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization.</p> <p>Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to. </p> <p><strong>Processing</strong></p> <p>Once the decision is made to store the data, the next set of decisions are based around how to process the data. An RDBMS scales well to a certain level, provides a high degree of ACID compliance, and offers a well-known set-based language to work with the data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage. In fact, in many cases - most of the ones I’m dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. 
</p> <p>A Data Scientist is familiar not only with the various processing methods, but with how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one. </p> <p><strong>Presentation</strong></p> <p>This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it’s the data behind the graphics that is the most important component of truly effective displays. </p> <p>This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you’re on the right track - that are only solvable when you understand why the statistical formula works the way it does. </p> <p>And there are lots of ways of presenting data. Sometimes all you need is a “yes” or “no” answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist. </p> <p><strong>Why I’m excited</strong></p> <p>I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I’ve been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. </p> <p>Interestingly, the size of the company really isn’t germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. </p> <p>So watch this space. No, I’m not leaving Azure or distributed computing or Microsoft. In fact, I think I’m perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, that let me explore. And I’m happy to share what I learn along the way. </p> <h1><a href="http://sqlblog.com/blogs/buck_woody/archive/2011/10/18/big-data-and-the-cloud-more-hype-or-a-real-workload.aspx">Big Data and the Cloud - More Hype or a Real Workload?</a></h1> <p><em>Buck Woody | Tue, 18 Oct 2011</em></p> <p>Last week Microsoft announced several new offerings for “Big Data” - and since I’m a stickler for definitions, I wanted to make sure I understood what that really means. What is “Big Data”? What size hard drive is that? After all, my laptop has 1TB of storage - is my laptop “Big Data”?</p> <p>There are actually a few definitions for this term, most notably those involving the <a href="http://nosql.mypopescu.com/post/9621746531/a-definition-of-big-data" target="_blank">“Four V’s”: Volume, Velocity, Variety and Variability</a>. Others <a href="http://nosql.mypopescu.com/post/10120087314/big-data-and-the-4-vs-volume-velocity-variety" target="_blank">disagree with this</a> definition. 
I tend to try and get things into their simplest form, so I’m using this definition for myself:</p> <p align="center"><font color="#c0504d" size="3">Big data is defined as a <em>large set </em>of <em>computationally expensive </em>data that is <em>worked on simultaneously</em>.</font> </p> <p>Let me flesh that out a little. To be sure, “Big Data” is larger than, say, a few megabytes. The reason this is important is that it takes special hardware to be able to move large sets of data around, store them, process them and so on. (<font color="#c0504d">large set</font>)</p> <p>If you store a LOT of data, but only use a small portion of it at a time, that really isn’t super-hard to do. It’s mainly a storage issue at that point. But if you do need to work with a large portion of the data at one time, then the memory, CPU and transfer components of the system have to adapt to be responsive - new ways to work with that data (game theory, knot-algorithms, map-reduce, etc.) need to be brought into play. (<font color="#c0504d">computationally expensive</font>)</p> <p>Once that data is loaded into the processing area (memory or whatever other mechanism is used) it must be worked on in parallel to come back in a reasonable time. You have two options here - you can scale the system up with more internal hardware (CPUs, memory and so on) or you can scale it out to have multiple systems work on it at the same time using paradigms such as map/reduce (sketched in code below). Actually, when you lay this out in an architecture diagram, scale up or out doesn’t change the logical structure of the process - in scale out, the network becomes the bus, and the nodes supply the additional RAM and computing power. Of course, there are changes in code for how you stitch the workload back together. (<font color="#c0504d">worked on simultaneously</font>)</p> <p>So back to the original question. Is Big Data, as I have defined it here, a workload for Windows and SQL Azure? Absolutely! In fact, it’s probably one of the main workloads, and I believe it represents the latest, and perhaps also the earliest, frontier of computing. <a href="http://research.microsoft.com/en-us/um/people/gray/" target="_blank">Jim Gray</a>, a former researcher here at Microsoft and a hero of mine, was working on this very topic. I believe as he did - all computing is simply an interface over data. </p> <p>Microsoft has multiple offerings on the topic of Big Data. In posts that follow from myself and my co-workers, we’ll explore when and where you use each one. Whether you are a data professional or a developer, this is the new frontier - <a href="http://www.straightpathsql.com/archives/2011/10/microsoft-loves-your-big-data/" target="_blank">don’t wait to educate yourself</a> on how to leverage Big Data for your organization.</p>
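<p>Before the list of offerings, here is the map/reduce idea from above in miniature. This is my own illustration in plain C# and LINQ - not any particular product’s API - separating the “map” step (emit a pair for each word) from the “reduce” step (total per key), which is the part a Hadoop-style system spreads across many nodes at once:</p> <pre>
using System;
using System.Linq;

class MapReduceSketch
{
    static void Main()
    {
        string[] documents =
        {
            "big data is big",
            "data needs processing"
        };

        // "Map": emit a (word, count-of-one) pair for every word in every document.
        var mapped = documents
            .SelectMany(doc => doc.Split(' '))
            .Select(word => new { Key = word, Count = 1 });

        // "Reduce": group by key and total each group. Because every key is
        // independent, this step can run in parallel on separate nodes.
        var reduced = mapped
            .GroupBy(pair => pair.Key)
            .Select(g => new { Word = g.Key, Total = g.Sum(pair => pair.Count) });

        foreach (var entry in reduced)
            Console.WriteLine("{0}: {1}", entry.Word, entry.Total);
    }
}
</pre> <p>The “stitch the workload back together” part I mentioned is the reduce step - in a scaled-out system, that grouping happens across the network rather than in one process.</p>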
<p><strong>Hadoop on Windows Azure and SQL Server</strong> - Microsoft’s <a href="http://www.hortonworks.com/the-whys-behind-the-microsoft-and-hortonworks-partnership/" target="_blank">partnership to include Hadoop workloads on Windows Azure</a> and <a href="http://www.microsoft.com/download/en/details.aspx?id=27584" target="_blank">SQL Server/Parallel Data Warehouse (PDW)</a></p> <p><strong>LINQ to HPC</strong> - Microsoft’s High-Performance Computing SKU; <a href="http://blogs.technet.com/b/windowshpc/archive/2011/05/20/dryad-becomes-linq-to-hpc.aspx" target="_blank">HPC is now in Azure</a></p> <p><strong>Windows Azure Table Storage</strong> - A <a href="http://msdn.microsoft.com/en-us/library/windowsazure/hh508997.aspx" target="_blank">key/value pair type of storage with full partitioning</a> that is immediately consistent, able to handle huge loads of data, and works with any REST-compatible language</p> <p><strong>Other offerings</strong> - Including the new <a href="http://www.microsoft.com/en-us/sqlazurelabs/default.aspx" target="_blank">Data Explorer</a>, <a href="http://research.microsoft.com/en-us/news/headlines/daytona-071811.aspx" target="_blank">Project Daytona (with a Big Data Toolkit for scientists and researchers)</a>, <a href="http://www.microsoft.com/sqlserver/en/us/future-editions/SQL-Server-2012-breakthrough-insight.aspx" target="_blank">Power View</a> and more. </p> <p>The era of Big Data is here. And you can use Windows and SQL Azure to bring it to your organization. </p>Creating a Distributed Computing System Using a Windows Azure Queuehttp://sqlblog.com/blogs/buck_woody/archive/2011/10/11/creating-a-distributed-computing-system-using-a-windows-azure-queue.aspxTue, 11 Oct 2011 13:12:42 GMT21093a07-8b3d-42db-8cbf-3350fcbf5496:38990BuckWoody<p style="margin:0in 0in 0pt;" class="MsoNormal"><font size="3"><font color="#000000"><font face="Calibri">The Windows Azure Queue component, like all Windows Azure components (Roles, Storage, App Fabric, SQL Azure), can be used by itself or with other Windows Azure components. That’s why I refer to Windows Azure as “Distributed Computing” rather than “cloud”. </font></font></font></p> <p style="margin:0in 0in 0pt;" class="MsoNormal"><font size="3"><font color="#000000"><font face="Calibri">A distributed, off-premises queue has a lot of use-cases. An interesting one is a company that wanted to harness the power of all of the PCs and laptops in the company during the hours they sat unused. A developer wrote a screen-saver program that connected to an Azure Queue, pulling work off of the queue and placing an entry when it was done. In essence he had a partially connected, distributed work relay system, and since he used a Windows Azure Queue, the system worked from anywhere in the world. </font></font></font></p> <p>&#160;</p> <p style="margin:0in 0in 0pt;" class="MsoNormal"><font size="3"><font color="#000000"><font face="Calibri">He uses an on-site central server (actually only a workstation-level system) that holds the computations in a scatter/gather paradigm. The computations are broken into less-than-8K chunks, so that each fits within a single queue message. The server connects to a Windows Azure Queue and places each message, marked for computation, on the queue.</font></font></font></p>
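<p>Here is roughly what that server-side “place a message” step looks like with the Windows Azure storage client library - a minimal sketch, assuming the 1.x-era SDK, a queue named “work”, and placeholder credentials in place of a real connection string:</p> <pre>
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class WorkProducer
{
    static void Main()
    {
        // Placeholder credentials - substitute a real storage connection string.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=YOURACCOUNT;AccountKey=YOURKEY");

        CloudQueueClient client = account.CreateCloudQueueClient();
        CloudQueue workQueue = client.GetQueueReference("work");
        workQueue.CreateIfNotExist(); // named CreateIfNotExists() in later SDKs

        // Each chunk must fit in one message (8K at the time this was written).
        // The step number rides along in the body so the mapping function can
        // reassemble results later - order is not guaranteed.
        string chunk = "step:1|data:...";
        workQueue.AddMessage(new CloudQueueMessage(chunk));
    }
}
</pre>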
<p style="margin:0in 0in 0pt;" class="MsoNormal"><font size="3"><font color="#000000"><font face="Calibri">The server also scrubs the Queue for completed work, and as part of the process puts that kind of message into a mapping function (queues do not guarantee message order). </font></font></font></p> <p>&#160;</p> <p style="margin:0in 0in 0pt;" class="MsoNormal"><font size="3"><font color="#000000"><font face="Calibri">Workstations that have sat unused long enough for the screen saver to kick in (even systems at remote workers and travelers) connect to the same Windows Azure Queue. Each takes one message from the queue, computes the information, and sets a new message with the answer for the server to pick up. The workstation then deletes the original message. </font></font></font></p> <p>&#160;</p> <p style="margin:0in 0in 0pt;" class="MsoNormal"><font size="3"><font color="#000000"><font face="Calibri">The server picks up the completed work, processes it and then deletes that queue message. He also added logic so that the server processes computation messages itself when its work-adding function is not required. </font></font></font></p> <p>&#160;</p> <p style="margin:0in 0in 0pt;" class="MsoNormal"><font color="#000000" size="3" face="Calibri"><a href="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-79-79-metablogapi/2438.AzureQueueDistributedSystem_5F00_2.png"><img style="background-image:none;border-bottom:0px;border-left:0px;padding-left:0px;padding-right:0px;display:inline;border-top:0px;border-right:0px;padding-top:0px;" title="AzureQueueDistributedSystem" border="0" alt="AzureQueueDistributedSystem" src="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-79-79-metablogapi/3603.AzureQueueDistributedSystem_5F00_thumb.png" width="708" height="919" /></a></font></p> <p>&#160;</p> <p style="margin:0in 0in 0pt;" class="MsoNormal"><font size="3"><font color="#000000"><font face="Calibri">There are a few caveats here. This works because of the mapping function on the head server. Order is not guaranteed, so he includes the function-step number as part of the message body, which cuts into the available message size a bit. Also, he’s careful to watch the encoding, since Azure will hand binary back in Base64 format. </font></font></font></p> <p>&#160;</p> <p style="margin:0in 0in 0pt;" class="MsoNormal"><font size="3"><font color="#000000"><font face="Calibri">He’s found that there are enough systems to ensure that the messages are cleared every few days – important, since Windows Azure Queue messages age out after seven days. Also, he’s careful to use the CloudQueue.PeekMessage function when he wants to monitor the system – that function reads a message without dequeuing it, so the message’s visibility doesn’t change when he checks in.</font></font></font></p>
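<p>The worker (screen-saver) side is just as small. Another hedged sketch against the 1.x storage client - GetMessage hides a message from other workers for a visibility timeout, while PeekMessage reads without dequeuing, which is why it’s safe for monitoring. Compute() and the queue names are stand-ins of my own, not part of the system described above:</p> <pre>
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class WorkConsumer
{
    // Stand-in for whatever the real screen-saver computes.
    static string Compute(string input) { return input.ToUpperInvariant(); }

    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=YOURACCOUNT;AccountKey=YOURKEY");
        CloudQueueClient client = account.CreateCloudQueueClient();
        CloudQueue workQueue = client.GetQueueReference("work");
        CloudQueue resultQueue = client.GetQueueReference("results");

        // GetMessage hides the message from other workers for a timeout.
        CloudQueueMessage msg = workQueue.GetMessage();
        if (msg != null)
        {
            string answer = Compute(msg.AsString);
            resultQueue.AddMessage(new CloudQueueMessage(answer));
            workQueue.DeleteMessage(msg); // delete only after the answer is safely queued
        }

        // Monitoring: PeekMessage reads the next message without
        // dequeuing it, so its visibility is left alone.
        CloudQueueMessage next = workQueue.PeekMessage();
        Console.WriteLine(next == null ? "queue drained" : "work remaining");
    }
}
</pre>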
<p>&#160;</p> <p style="margin:0in 0in 0pt;" class="MsoNormal"><font size="3"><font color="#000000"><font face="Calibri">This is a great example of using the “cloud” for what it is intended to be – a distributed architecture you can use as needed to solve a business problem. It’s not an “all or nothing” proposition; it is simply another set of components to use where you need them. </font></font></font></p>Rip and Replace or Extend and Embrace?http://sqlblog.com/blogs/buck_woody/archive/2011/09/13/rip-and-replace-or-extend-and-embrace.aspxTue, 13 Sep 2011 11:20:05 GMT21093a07-8b3d-42db-8cbf-3350fcbf5496:38437BuckWoody<p>As most of you know, I don&rsquo;t like the term &ldquo;cloud&rdquo; very much. It isn&rsquo;t defined, which means it can be anything. I prefer &ldquo;distributed computing&rdquo;, which is more technically accurate and describes what you&rsquo;re doing in more concrete terms.</p>
<p>So when you think about Windows and SQL Azure, you don&rsquo;t have to think about an entire product &ndash; you can use parts of the system together or independently to accomplish what you need to do. You can use the computing functions, the storage and so on &ndash; and more and more, I see folks leverage the Service Bus to let current applications expose things to the web.</p>
<p>And that brings up the point of this post. Once you decide that a distributed architecture works to solve a problem, you&rsquo;re faced with a decision: should you completely re-write your architecture to take advantage of the current systems, or should you just fold in new code that makes the data or function available to the web?</p>
<p>Of course, the answer is always &ldquo;it depends&rdquo; on the situation &ndash; and it does. But unless you&rsquo;re fixing a problem with current code, I usually advocate a migration approach. That means at the very least retaining the business logic (again, unless it&rsquo;s not currently working) and as much of the code as you can. In fact, if you follow this paradigm, you&rsquo;re on your way to making a Service Bus out of the functions you currently have. You can expose the results of a system rather than opening the system up. Let&rsquo;s take an example.</p>
<p>Assume for a moment that you have an order-taking system on-premise. That system performs many functions, one of which might be creating a Purchase Order. Your system might be enclosed, meaning that it has an application that talks to a middle tier, and from there to a database system. A query is generated from a screen and passed along to eventually compute, store and return a Purchase Order Number, along with other information. Imagine now that you wire up the code not only to return the PO number to the client, but to make that number available on an endpoint &ndash; really not that hard to do.</p>
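<p>As a sketch of what that endpoint might look like &ndash; my own minimal illustration using WCF&rsquo;s web programming model, where the OrderSystem class is a hypothetical stand-in for your existing business logic:</p> <pre>
using System.ServiceModel;
using System.ServiceModel.Web;

// Hypothetical stand-in for the real on-premise business logic.
public static class OrderSystem
{
    public static string LookupPurchaseOrder(string orderId)
    {
        return "PO-" + orderId; // the real middle tier would compute and store this
    }
}

[ServiceContract]
public interface IPurchaseOrderService
{
    // GET .../po/12345 returns just the PO number - the system itself stays closed.
    [OperationContract]
    [WebGet(UriTemplate = "po/{orderId}")]
    string GetPurchaseOrderNumber(string orderId);
}

public class PurchaseOrderService : IPurchaseOrderService
{
    public string GetPurchaseOrderNumber(string orderId)
    {
        // Expose the result of the system, not the system.
        return OrderSystem.LookupPurchaseOrder(orderId);
    }
}
</pre> <p>Host that in a WebServiceHost &ndash; or expose it through the Windows Azure Service Bus relay &ndash; and the PO number becomes reachable by whatever audience you allow, while the business logic stays exactly where it was.</p>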
<p>Now you can make that PO number available to the web using Azure. You could restrict who can make that call to the system, or open it up to a broader audience. Or instead of the PO number, you could make a product list available. And you can go further than that &ndash; EBay, for instance, uses the OData protocol (which is very cool in and of itself), which you can query from the web. You could compare your company&rsquo;s product catalog to what is on EBay, and list your items there if there are no competitors in that space. And on and on it goes.</p>
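<p>An OData source is queryable with nothing more than an HTTP GET and the standard query options. A quick sketch &ndash; the host name and the CurrentPrice property below are placeholders of mine, not a real feed, so substitute the service you actually want to query:</p> <pre>
using System;
using System.Net;

class ODataProbe
{
    static void Main()
    {
        // $filter is a standard OData query option; $top, $orderby and
        // $format compose onto the URL the same way. Host and property
        // names here are placeholders.
        string url = "http://example-feed.cloudapp.net/Items?$filter=CurrentPrice lt 50";

        using (WebClient web = new WebClient())
        {
            // EscapeUriString handles the spaces the $filter grammar allows.
            Console.WriteLine(web.DownloadString(Uri.EscapeUriString(url)));
        }
    }
}
</pre>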
<p>So the point is this &ndash; where you can, retain what works. Fold in systems like Azure where they make sense. Extend and Embrace.</p>Plan for Diagnostics in Cloud Computing From the Git-Gohttp://sqlblog.com/blogs/buck_woody/archive/2011/09/06/plan-for-diagnostics-in-cloud-computing-from-the-git-go.aspxTue, 06 Sep 2011 13:11:22 GMT21093a07-8b3d-42db-8cbf-3350fcbf5496:38295BuckWoody<p>“Git-Go” is something we say in the South that means “right at the start”. I’ve seen several applications for on-premise systems that don’t have much in the way of diagnostics - the developers rely on a debugger, the event logs on the server and client workstation, and most of all, the ability to watch the system from end-to-end. </p> <p>This approach is a mistake for an on-premise system, and it’s definitely a problem for a distributed architecture. You simply do not own all of the components from end to end in a cloud environment, nor are you always able to attach a debugger or other remote monitoring tools to the various areas within the code path. So you need to make sure, from the very outset of your design, that you build in diagnostics. My personal preference is to build the system so that a control file can turn deeper information gathering on, or dial it back down to a minimal level.</p> <p>When I do that, I set a high level of logging, a medium level, and a low level. I normally use the deepest level of information during the testing and acceptance phase of the deployment, then switch to the medium and finally the lowest level of information gathering. Also in my design I often set an error condition to begin gathering the deeper information along with the exception, where possible (a small sketch of wiring up these levels appears at the end of this post).</p> <p>There are decisions you need to make as to where to store the diagnostics (many operations in the cloud cost money), how often you collect them, and so on. You can get a quick overview of using the diagnostics that come with Windows Azure here: <a href="http://www.azuresupport.com/2010/03/getting-started-with-windows-azure-diagnostics-and-monitoring/">http://www.azuresupport.com/2010/03/getting-started-with-windows-azure-diagnostics-and-monitoring/</a> This is where you should start. More detail on that: <a href="http://msdn.microsoft.com/en-us/library/gg433048.aspx">http://msdn.microsoft.com/en-us/library/gg433048.aspx</a></p> <p>My friend David Pallmann has a great tool he’s released for free: <a href="http://davidpallmann.blogspot.com/2009/03/azure-application-monitor-now-on.html">http://davidpallmann.blogspot.com/2009/03/azure-application-monitor-now-on.html</a></p> <p>If the issue is in storage apps: <a href="http://social.msdn.microsoft.com/Forums/en-US/windowsazuredata/thread/d84ba34b-b0e0-4961-a167-bbe7618beb83">http://social.msdn.microsoft.com/Forums/en-US/windowsazuredata/thread/d84ba34b-b0e0-4961-a167-bbe7618beb83</a></p> <p>If you have System Center, this is the quickest and easiest way to implement the monitoring – really handy: <a href="http://pinpoint.microsoft.com/en-us/applications/windows-azure-application-monitoring-management-pack-release-candidate-12884907699">http://pinpoint.microsoft.com/en-us/applications/windows-azure-application-monitoring-management-pack-release-candidate-12884907699</a></p>
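<p>And here is the sketch I promised &ndash; a minimal example of starting the built-in diagnostic monitor from a role&rsquo;s OnStart, based on the 1.x-era SDK. The setting name and values are the conventional ones from that SDK, so treat them as assumptions to verify against your version; the log-level filter and transfer period are exactly the knobs a control file or service configuration setting could adjust without redeploying:</p> <pre>
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Begin with the SDK defaults, then set the two knobs that matter most.
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // "High" while testing; a control file could drop this to Warning
        // or Error once the system settles down.
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

        // Transfers to storage cost money, so don't ship constantly.
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5.0);

        // Conventional setting name from the 1.x-era SDK - verify for your version.
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}
</pre> <p>Application code then just writes through System.Diagnostics.Trace, and the monitor ships that output to your storage account on the schedule you set &ndash; and since each transfer is a storage operation that costs money, the period is worth choosing deliberately.</p>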