<h1>Buck Woody : Data Professional, SQL Azure</h1>
<h2>In the Cloud, Everything Costs Money</h2>
<p><em>Posted Tue, 10 Jul 2012</em></p>
<p>I’ve been teaching my daughter about budgeting. I’ve explained that most of the time the money coming in is from only one or two sources – and you can only change that from time to time. The money going out, however, is to many locations, and it changes all the time. She’s made a simple debits and credits spreadsheet, and I’m having her research each part of the budget. Her eyes grow wide when she finds out everything has a cost – the house, gas for the lawnmower, dishes, water for showers, food, electricity to run the fridge, a new fridge when that one breaks, everything has a cost. She asked me “how do you pay for all this?” It’s a sentiment many adults have looking at their own budgets – and one reason that some folks don’t even make a budget. It’s hard to face up to the realities of how much it costs to do what we want to do. </p> <p>When we design a computing solution, it’s interesting to set up a similar budget, because we don’t always consider all of the costs associated with it. I’ve seen design sessions where the new software or servers are considered, but the “sunk” costs of personnel, networking, maintenance, increased storage, new sizes for backups and offsite storage and so on are not added in. They are already on premises, so they are assumed to be paid for already.</p> <p>When you move to a distributed architecture, you'll see more costs directly reflected. Store something, pay for that storage. 
If the system is deployed and no one is using it, you’re still paying for it. As you watch those costs rise, you might be tempted to think that a distributed architecture costs more than an on-premises one. </p> <p>And you might be right – for some solutions. I’ve worked with a few clients where moving to a distributed architecture doesn’t make financial sense – so we didn’t implement it. I still designed the system in a distributed fashion, however, so that when it does make sense there isn’t much re-architecting to do. </p> <p>In other cases, however, if you consider all of the on-premises costs and compare those accurately to operating a system in the cloud, the distributed system is much cheaper. Again, I never recommend that you take a “here-or-there-only” mentality – I think a hybrid distributed system is usually best – but each solution is different. There simply is no “one size fits all” to architecting a solution.</p> <p>As you design your solution, cost out each element. You might find that using a hybrid approach saves you money in one design and not in another. It’s a brave new world indeed. </p> <p>So yes, in the cloud, everything costs money. But an on-premises solution also costs money – it’s just that “dad” (the company) is paying for it and we don’t always see it. When we go out on our own in the cloud, we need to ensure that we consider all of the costs. 
</p>
<p><em>Tags: Azure, Business Enablement, Cloud, Cloud Computing, Concepts, Data Professional, Design, Planning, Process, SQL Azure, Tips, Web</em></p>
<h2>Big Data - A Microsoft Tools Approach</h2>
<p><em>Posted Mon, 20 Feb 2012</em></p>
<p><em><span style="color:#c0504d;">(As with all of these types of posts, check the date of the latest update I&rsquo;ve made here. Anything older than 6 months is probably out of date, given the speed with which we release new features into Windows and SQL Azure)</span></em></p>
<p>I don&rsquo;t normally like to discuss things in terms of tools. I find that whenever you start with a given tool (or even a tool stack) it&rsquo;s too easy to fit the problem to the tool(s), rather than the other way around as it should be.</p>
<p>That being said, it&rsquo;s often useful to have an example to work through to better understand a concept. But like many ideas in Computer Science, &ldquo;Big Data&rdquo; is too broad a term in use to show a single example that brings out the multiple processes, use-cases and patterns you can use it for.</p>
<p>So we turn to a description of the tools you can use to analyze large data sets. &ldquo;Big Data&rdquo; is a term used lately to describe data sets that have the &ldquo;<a href="http://radar.oreilly.com/2012/01/what-is-big-data.html" target="_blank">Four V&rsquo;s</a>&rdquo;&nbsp; as a characteristic, but I have a simpler definition I like to use:</p>
<p align="center"><em><span style="color:#0000ff;font-size:small;">Big Data involves a data set too large to process in a reasonable period of time</span></em></p>
<p>I realize that&rsquo;s a bit broad, but in my mind it answers the question and is fairly future-proof. The general idea is that you want to analyze some data, and using whatever current methods, storage, compute and so on that you have at hand it doesn&rsquo;t allow you to finish processing it in a time period that you are comfortable with. I&rsquo;ll explain some new tools you can use for this processing.</p>
<p>Yes, this post is Microsoft-centric. There are probably posts from other vendors and open-source that cover this process in the way they best see fit. And of course you can always &ldquo;mix and match&rdquo;, meaning using Microsoft for one or more parts of the process and other vendors or open-source for another. I never advise that you use any one vendor blindly - educate yourself, examine the facts, perform some tests and choose whatever mix of technologies best solves your problem.</p>
<p>At the risk of being vendor-specific, and probably incomplete, I use the following short list of tools Microsoft has for working with &ldquo;Big Data&rdquo;. There is no single package that performs all phases of analysis. These tools are what I use; they should not be taken as Microsoft&rsquo;s authoritative, final toolset for a given problem space. In fact, that&rsquo;s the key: find the problem and then fit the tools to that.</p>
<h2>Process Types</h2>
<p>I break up the analysis of the data into two process types. The first is examining and processing the data <em>in-line</em>, meaning as the data passes through some process. The second is a <em>store-analyze-present</em> process.</p>
<h2>Processing Data In-Line</h2>
<p>Processing data in-line means that the data doesn&rsquo;t have a destination - it remains in the source system. But as it moves from an input or is routed to storage within the source system, various methods are available to examine the data as it passes, and either trigger some action or create some analysis.</p>
<p>You might not think of this as &ldquo;Big Data&rdquo;, but in fact it can be. Organizations have huge amounts of data stored in multiple systems. Many times the data from these systems does not end up in a database for evaluation. There are options, however, to evaluate that data in real time and either act on the data or perhaps copy or stream it to another process for evaluation.</p>
<p>The advantage of an in-stream data analysis is that you don&rsquo;t necessarily have to store the data again to work with it. That&rsquo;s also a disadvantage - depending on how you architect the solution, you might not retain a historical record. One method of dealing with this requirement is to trigger a rollup collection or a more detailed collection based on the event.</p>
<p><strong>StreamInsight </strong>- StreamInsight is Microsoft&rsquo;s &ldquo;Complex Event Processing&rdquo; or CEP engine. This product, hooked into SQL Server 2008R2, has multiple ways of interacting with a data flow. You can create adapters to talk with systems, and then examine the data mid-stream and create triggers to do something with it. You can read more about StreamInsight here: <a title="http://msdn.microsoft.com/en-us/library/ee391416(v=sql.110).aspx" href="http://msdn.microsoft.com/en-us/library/ee391416(v=sql.110).aspx">http://msdn.microsoft.com/en-us/library/ee391416(v=sql.110).aspx</a>&nbsp;</p>
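<p>StreamInsight itself is programmed against .NET with LINQ-style queries; purely as a language-neutral sketch of the CEP idea it implements, the following Python fragment (all names here are illustrative, not StreamInsight's API) watches a stream through a sliding window and fires an alert when the windowed average crosses a threshold:</p>

```python
from collections import deque

def windowed_alerts(events, window_size, threshold):
    """Scan an event stream with a sliding window; yield an alert
    whenever the window's average value exceeds the threshold."""
    window = deque(maxlen=window_size)
    for ts, value in events:
        window.append(value)
        if len(window) == window_size:
            avg = sum(window) / window_size
            if avg > threshold:
                yield (ts, avg)

# Simulated sensor readings: (timestamp, value)
stream = [(1, 10), (2, 12), (3, 50), (4, 55), (5, 60), (6, 8)]
alerts = list(windowed_alerts(stream, window_size=3, threshold=40))
```

<p>Note that nothing is persisted: the events flow through, and only the alerts survive - which is exactly the in-line trade-off described above.</p>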
<p><strong>BizTalk </strong>- When there is more latency available between the initiation of the data and its processing, you can use Microsoft BizTalk. This is a message-passing and Service Bus oriented tool, and it can also be used to join together data from systems that normally do not have a direct link - for instance, a Mainframe system and SQL Server. You can learn more about BizTalk here: <a href="http://www.microsoft.com/biztalk/en/us/overview.aspx">http://www.microsoft.com/biztalk/en/us/overview.aspx</a>&nbsp;</p>
<p><strong>.NET and the Windows Azure Service Bus </strong>- Along the same lines as BizTalk but with a more programming-oriented design are the Windows and Windows Azure Service Bus tools. The Service Bus allows you to pass messages as well, and opens up web interactions and even inter-company routing. BizTalk can do this as well, but the Service Bus tools use an API approach for designing the flow and interfaces you want. The Service Bus offerings are also intended as near real-time, not as a streaming interface. You can learn more about the Windows Azure Service Bus here: <a href="http://www.windowsazure.com/en-us/home/tour/service-bus/">http://www.windowsazure.com/en-us/home/tour/service-bus/</a> and more about the Event Processing side here: <a href="http://msdn.microsoft.com/en-us/magazine/dd569756.aspx">http://msdn.microsoft.com/en-us/magazine/dd569756.aspx</a>&nbsp;</p>
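<p>The Service Bus itself is a .NET/REST offering; to illustrate only the brokered message-passing pattern it provides, here is a minimal Python sketch using an in-process queue as a stand-in for the broker (the names are illustrative, not the actual SDK):</p>

```python
import queue
import threading

# A local stand-in for a brokered queue; the real Service Bus API differs.
broker = queue.Queue()

def producer(messages):
    for m in messages:
        broker.put(m)      # send
    broker.put(None)       # sentinel: no more messages

def consumer(received):
    while True:
        m = broker.get()   # receive (blocks until a message arrives)
        if m is None:
            break
        received.append(m.upper())  # "process" the message

received = []
t = threading.Thread(target=consumer, args=(received,))
t.start()
producer(["order placed", "order shipped"])
t.join()
```

<p>The key property being illustrated is decoupling: producer and consumer never call each other directly, so either side can live in a different process, machine or company.</p>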
<h2>Store-Analyze-Present</h2>
<p>A more traditional approach with an organization&rsquo;s data is to store the data and analyze it out-of-band. This began with simply running code over a data store, but as locking and blocking became an issue on a file system, Relational Database Management Systems (RDBMSs) were created. Over time a distinction was made between systems designed for online transaction processing, meant to be highly available for writing data (OLTP), and systems designed for analytical and reporting purposes (OLAP).</p>
<p>Later, data grew larger than these systems were designed for, primarily because of the overhead of their consistency guarantees. In analysis, however, strict consistency isn&rsquo;t always a requirement, and so file-based systems for that analysis were re-introduced from Mainframe concepts, with new technology layered in for speed and size.</p>
<p>I normally break up the process of analyzing large data sets into four phases:</p>
<ol>
<li><em>Source and Transfer </em>- Obtaining the data at its source and transferring or loading it into the storage; optionally transforming it along the way</li>
<li><em>Store and Process</em> - Data is stored on some sort of persistence, and in some cases an engine handles the acquisition and placement on persistent storage, as well as retrieval through an interface.</li>
<li>&nbsp;<em>Analysis </em>- A new layer introduced with &ldquo;Big Data&rdquo; is a separate analysis step. This is dependent on the engine or storage methodology, is often programming language or script based, and sometimes re-introduces the analysis back into the data. Some engines and processes combine this function into the previous phase.</li>
<li><em>Presentation</em> - In most cases the data needs a graphical representation to be comprehensible, especially in a series or trend analysis. In other cases a simple symbolic representation suffices, similar to the &ldquo;dashboard&rdquo; elements in a Business Intelligence suite. Presentation tools may also have an analysis or refinement capability to allow end-users to work with the data sets. As in the Analysis phase, some methodologies bundle the Analysis and Presentation phases into one toolset.</li>
</ol>
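<p>The four phases above can be sketched, in miniature, as a single pipeline - a toy Python illustration of the shape of the process, not any particular product:</p>

```python
import statistics

# 1. Source and Transfer: pull raw records from a (simulated) source.
def source():
    return ["12", "15", "bad", "20", "9"]

# 2. Store and Process: clean the records and persist them in a store.
def store(raw):
    return [int(r) for r in raw if r.isdigit()]

# 3. Analysis: compute summary figures over the stored data.
def analyze(data):
    return {"count": len(data), "mean": statistics.mean(data)}

# 4. Presentation: render the analysis for a reader.
def present(result):
    return f"{result['count']} readings, mean {result['mean']:.1f}"

report = present(analyze(store(source())))
```

<p>In a real &ldquo;Big Data&rdquo; landscape each of these functions is a separate tool or cluster; the point here is only the hand-off between phases.</p>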
<h3>Source and Transfer</h3>
<p>You&rsquo;ll notice in this area, along with those that follow, Microsoft is adopting not only its own technologies but those within open-source. This is a positive sign, and means that you will have a best-of-breed, supported set of tools to move the data from one location to another. Traditional file-copy, File Transfer Protocol and more are certainly options, but do not normally deal with moving datasets.</p>
<p>I&rsquo;ve already mentioned the ability of a streaming tool to push data into a store-analyze-present model, so I&rsquo;ll follow up that discussion with the tools that can extract data from one source and place it in another.</p>
<p><strong><span style="color:#800000;">SQL Server Integration Services (SSIS)/SQL Server Bulk Copy Program (BCP)</span> </strong>- SSIS is a SQL Server tool used to move data from one location to another, and optionally perform transforms or other processes as it does so. You are not limited to working with SQL Server data - in fact, almost any modern source of data, from text to various database platforms, is available to move to various systems. It is also extremely fast and has a rich development environment. You can learn more about SSIS here: <a href="http://msdn.microsoft.com/en-us/library/ms141026.aspx">http://msdn.microsoft.com/en-us/library/ms141026.aspx</a> BCP is a tool that has been used with SQL Server data since the first releases; it has multiple sources and destinations as well. It is a command-line utility, and has some limited transform capabilities. You can learn more about BCP here: <a href="http://msdn.microsoft.com/en-us/library/ms162802.aspx">http://msdn.microsoft.com/en-us/library/ms162802.aspx</a>&nbsp;</p>
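<p>Neither SSIS packages nor BCP are written in Python, but the extract-transform-load pattern they implement can be sketched in a few lines - here using an in-memory CSV as the source and SQLite as a stand-in destination:</p>

```python
import csv
import io
import sqlite3

# Extract: parse CSV from a source (an in-memory file here).
raw = io.StringIO("name,amount\nwidget, 12 \ngadget,30\n")
rows = list(csv.DictReader(raw))

# Transform: trim whitespace and convert types while the data is in flight.
cleaned = [(r["name"], int(r["amount"].strip())) for r in rows]

# Load: bulk-insert into the destination table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (name TEXT, amount INTEGER)")
db.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
total = db.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

<p>SSIS adds a visual designer, parallelism and many more adapters around this same extract-transform-load shape; BCP specializes in the load step at very high speed.</p>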
<p><strong><span style="color:#0000ff;"><span style="color:#800000;">Sqoop</span> </span></strong>- Tied to Microsoft&rsquo;s latest announcements with Hadoop on Windows and Windows Azure, Sqoop is a tool that is used to move data between SQL Server 2008R2 (and higher)&nbsp;and Hadoop, quickly and efficiently. You can read more about that in the Readme file here: <a href="http://www.microsoft.com/download/en/details.aspx?id=27584">http://www.microsoft.com/download/en/details.aspx?id=27584</a>&nbsp;</p>
<p><span style="color:#800000;"><strong>Application Programming Interfaces</strong></span> - APIs exist in almost every major language to connect to a data source, access the data, optionally transform it and store it in another system. Most every .NET-based language contains methods to perform this task.</p>
<h3>Store and Process</h3>
<p>Data at rest is normally used for historical analysis. In some cases this analysis is performed near real-time, and in others historical data is analyzed periodically. Systems that handle data at rest range from simple storage to active management engines.</p>
<p><strong><span style="color:#800000;">SQL Server</span></strong> - Microsoft&rsquo;s flagship RDBMS can indeed store massive amounts of complex data. I am familiar with two systems in excess of 300 Terabytes of federated data, and the <a href="http://pan-starrs.ifa.hawaii.edu/public/" target="_blank">Pan-Starrs</a> project is designed to handle 1+ Petabyte of data. The theoretical limit of SQL Server DataCenter edition is 540 Petabytes. SQL Server is an engine, so data access and storage are handled in an abstract layer that also handles concurrency for ACID properties. You can learn more about SQL Server here: <a href="http://www.microsoft.com/sqlserver/en/us/product-info/compare.aspx">http://www.microsoft.com/sqlserver/en/us/product-info/compare.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">SQL Azure Federations</span></strong> - SQL Azure is a database service from Microsoft associated with the Windows Azure platform. Database Servers are multi-tenant, but are shared across a &ldquo;fabric&rdquo; that moves active databases for redundancy and performance. Copies of all databases are kept triple-redundant with a consistent commitment model. Databases are (at this writing - check <a href="http://WindowsAzure.com">http://WindowsAzure.com</a> for the latest) capped at a 150 GB size limit per database. However, Microsoft released a &ldquo;Federation&rdquo; technology, allowing you to query a head node and have the data federated out to multiple databases. This improves both size and performance. You can read more about SQL Azure Federations here: <a href="http://social.technet.microsoft.com/wiki/contents/articles/2281.federations-building-scalable-elastic-and-multi-tenant-database-solutions-with-sql-azure.aspx">http://social.technet.microsoft.com/wiki/contents/articles/2281.federations-building-scalable-elastic-and-multi-tenant-database-solutions-with-sql-azure.aspx</a>&nbsp;</p>
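<p>Federations are actually driven with T-SQL (CREATE FEDERATION, USE FEDERATION); the routing idea behind them - resolve a federation key to the member database whose range contains it - can be sketched like this (the boundary values and database names are made up for illustration):</p>

```python
import bisect

# Each federation member covers a range of the federation key (tenant id).
# Boundaries hold the low end of each member's range, in sorted order.
boundaries = [0, 1000, 5000]   # member 0: [0,1000), member 1: [1000,5000), ...
members = ["member_db_0", "member_db_1", "member_db_2"]

def route(tenant_id):
    """Pick the member database whose range contains tenant_id -
    conceptually what resolving the federation key does."""
    i = bisect.bisect_right(boundaries, tenant_id) - 1
    return members[i]
```

<p>Queries for different tenants land on different physical databases, which is where both the size and the performance gains come from.</p>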
<p><strong><span style="color:#800000;">Analysis Services</span></strong> - The Business Intelligence engine within SQL Server, called Analysis Services, can also handle extremely large data systems. In addition to traditional BI data store layouts (ROLAP, MOLAP and HOLAP), the latest version of SQL Server introduces the Vertipaq column-storage technology allowing more direct access to data and a different level of compression. You can read more about Analysis Services here: <a href="http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/analysis-services.aspx">http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/analysis-services.aspx</a> and more about Vertipaq here: <a href="http://msdn.microsoft.com/en-us/library/hh212945(v=SQL.110).aspx">http://msdn.microsoft.com/en-us/library/hh212945(v=SQL.110).aspx</a></p>
<p><span style="color:#800000;"><strong>Parallel Data Warehouse </strong></span>- The Parallel Data Warehouse (PDW) offering from Microsoft is largely described by its title. Accessed in multiple ways, including Transact-SQL (the Microsoft dialect of the Structured Query Language), <a href="http://sqlpdw.com/2010/07/what-mpp-means-to-sql-server-parallel-data-warehouse/" target="_blank">this is an MPP appliance</a> scaling in parallel to extremely large datasets. It is a hardware and software offering - you can learn more about it here: <a href="http://www.microsoft.com/sqlserver/en/us/solutions-technologies/data-warehousing/pdw.aspx">http://www.microsoft.com/sqlserver/en/us/solutions-technologies/data-warehousing/pdw.aspx</a></p>
<p><strong><span style="color:#800000;">HPC Server</span></strong> - Microsoft&rsquo;s High-Performance Computing version of Windows Server deals not only with large data sets, but with extremely complicated computing requirements. A scale-out architecture and inter-operation with Linux systems, as well as dozens of applications pre-written to work with this server make this a capable &ldquo;Big Data&rdquo; system. It is a mature offering, with a long track record of success in scientific, financial and other areas of data processing. It is available both on premises and in Windows Azure, and also in a hybrid of both models, allowing you to &ldquo;rent&rdquo; a super-computer when needed. You can read more about it here: <a href="http://www.microsoft.com/hpc/en/us/product/cluster-computing.aspx">http://www.microsoft.com/hpc/en/us/product/cluster-computing.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">Hadoop</span></strong> - Pairing up with Hortonworks, Microsoft has released the open-source Hadoop system - including HDFS, MapReduce, Hive and Pig - on Windows and the Windows Azure platform. This is not a customized version; off-the-shelf concepts and queries work well here. You can read more about Hadoop here: <a href="http://hadoop.apache.org/common/docs/current/">http://hadoop.apache.org/common/docs/current/</a> and you can read more about Microsoft&rsquo;s offerings here: <a href="http://hortonworks.com/partners/microsoft/">http://hortonworks.com/partners/microsoft/</a>&nbsp;and here: <a href="http://social.technet.microsoft.com/wiki/contents/articles/6204.hadoop-based-services-for-windows.aspx">http://social.technet.microsoft.com/wiki/contents/articles/6204.hadoop-based-services-for-windows.aspx</a></p>
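<p>The canonical Hadoop example is a MapReduce word count. Here is the same shape of computation in plain Python - single-process, so it only illustrates the mapper and reducer roles, not the distribution across nodes that Hadoop provides:</p>

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Map: emit (word, 1) for every word in the input split.
    return [(word.lower(), 1) for word in line.split()]

def reducer(pairs):
    # Shuffle/Reduce: group the pairs by key, then sum each group's counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big tools", "data at rest"]
word_counts = reducer(chain.from_iterable(mapper(l) for l in lines))
```

<p>On a real cluster, many mappers run in parallel over HDFS blocks and the framework shuffles each key to a reducer; the mapper/reducer contract is what you actually write.</p>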
<p><strong><span style="color:#800000;">Windows and Azure Storage</span></strong> - Although not an engine, Windows Azure storage provides a triple-redundant, immediately consistent commit, can hold terabytes of information, and makes that data available to everything from the R programming language to the Hadoop offering. Binary storage (Blobs) and Table storage (Key-Value Pair) data can be queried across a distributed environment. You can learn more about Windows Azure storage here: <a href="http://msdn.microsoft.com/en-us/library/windowsazure/gg433040.aspx">http://msdn.microsoft.com/en-us/library/windowsazure/gg433040.aspx</a>&nbsp;</p>
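<p>Table storage addresses each entity by a PartitionKey and a RowKey. A toy in-memory stand-in (not the real REST API or SDK) shows why point lookups and single-partition scans are the cheap operations:</p>

```python
# A local stand-in for Table storage: entities addressed by
# (PartitionKey, RowKey). The real Azure API and SDK differ.
table = {}

def insert_entity(partition_key, row_key, entity):
    table[(partition_key, row_key)] = entity

def get_entity(partition_key, row_key):
    # A point lookup by both keys is the cheapest query the store offers.
    return table[(partition_key, row_key)]

def query_partition(partition_key):
    # Scanning a single partition stays on one storage node.
    return [e for (pk, _), e in table.items() if pk == partition_key]

insert_entity("sensors-2012", "reading-001", {"temp": 21})
insert_entity("sensors-2012", "reading-002", {"temp": 23})
insert_entity("sensors-2011", "reading-001", {"temp": 19})
```

<p>Choosing the partition key is the design decision: it determines which queries stay on one node and how the data spreads for scale.</p>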
<h3>Analysis</h3>
<p>In a &ldquo;Big Data&rdquo; environment, it&rsquo;s not unusual to have a specialized set of tasks for analyzing and even interpreting the data. This is a new field called &ldquo;Data Science&rdquo;, with a requirement not only for computing skills, but also a heavy emphasis on math.</p>
<p><span style="color:#800000;"><strong>Transact-SQL </strong></span>- T-SQL is the dialect of the Structured Query Language used by Microsoft. It includes not only robust selection, updating and manipulating of data, but also analytical and domain-level interrogation as well. It can be used on SQL Server, PDW and ODBC data sources. You can read more about T-SQL here: <a href="http://msdn.microsoft.com/en-us/library/bb510741.aspx">http://msdn.microsoft.com/en-us/library/bb510741.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">Multidimensional Expressions and Data Analysis Expressions</span></strong> - The MDX and DAX languages allow you to query multidimensional data models that do not fit well with typical two-plane query languages. Pivots, aggregations and more are available within these constructs to query and work with data in Analysis Services. You can read more about MDX here: <a href="http://msdn.microsoft.com/en-us/library/ms145506(v=sql.110).aspx">http://msdn.microsoft.com/en-us/library/ms145506(v=sql.110).aspx</a> and more about DAX here: <a href="http://www.microsoft.com/download/en/details.aspx?id=28572">http://www.microsoft.com/download/en/details.aspx?id=28572</a>&nbsp;</p>
<p><strong><span style="color:#800000;">HPC Jobs and Tasks </span></strong>- Work submitted to the Windows HPC Server is organized into jobs - essentially reservation requests for resources. Within a job you can submit tasks, such as parametric sweeps and more. You can learn more about Jobs and Tasks here: <a href="http://technet.microsoft.com/en-us/library/cc719020(v=ws.10).aspx">http://technet.microsoft.com/en-us/library/cc719020(v=ws.10).aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">HiveQL </span></strong>- HiveQL is the language used to query a Hive object running on Hadoop. You can see a tutorial on that process here: <a href="http://social.technet.microsoft.com/wiki/contents/articles/6628.aspx">http://social.technet.microsoft.com/wiki/contents/articles/6628.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">Pig Latin </span></strong>- Pig Latin is the submission language for the Pig implementation on Hadoop. An example of that process is here: <a href="http://blogs.msdn.com/b/avkashchauhan/archive/2012/01/10/running-apache-pig-pig-latin-at-apache-hadoop-on-windows-azure.aspx">http://blogs.msdn.com/b/avkashchauhan/archive/2012/01/10/running-apache-pig-pig-latin-at-apache-hadoop-on-windows-azure.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">Application Programming Interfaces </span></strong>- Almost all of the analysis offerings have associated APIs - of special note is Microsoft Research&rsquo;s Infer.NET, a framework for running Bayesian inference in graphical models, as well as probabilistic programming. You can read more about Infer.NET here: <a href="http://research.microsoft.com/en-us/um/cambridge/projects/infernet/">http://research.microsoft.com/en-us/um/cambridge/projects/infernet/</a>&nbsp;</p>
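<p>Infer.NET handles rich graphical models; the underlying idea of Bayesian updating can be shown with the simplest possible case - a conjugate Beta-Binomial update of a coin&rsquo;s bias. This is a hand-rolled Python sketch of the concept, not Infer.NET code:</p>

```python
# Bayesian inference in miniature: a Beta(1, 1) uniform prior over a
# coin's bias, updated with observed flips via the conjugate
# Beta-Binomial rule: posterior is Beta(a + heads, b + tails).
def update(prior_a, prior_b, heads, tails):
    return prior_a + heads, prior_b + tails

def posterior_mean(a, b):
    # Mean of a Beta(a, b) distribution.
    return a / (a + b)

a, b = update(1, 1, heads=7, tails=3)   # observe 7 heads, 3 tails
mean = posterior_mean(a, b)             # posterior estimate of the bias
```

<p>The same update-as-you-observe pattern, generalized to large graphical models and approximate inference, is what frameworks like Infer.NET automate.</p>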
<h3>Presentation</h3>
<p>Lots of tools work in presenting the data once you have done the primary analysis. In fact, there&rsquo;s a great video comparing various tools, primarily focused on Business Intelligence, here: <a href="http://msbiacademy.com/Lesson.aspx?id=73">http://msbiacademy.com/Lesson.aspx?id=73</a> That term itself is no longer so precisely defined, but the tools I&rsquo;ll show below can be used in multiple ways - not just traditional Business Intelligence scenarios. Application Programming Interfaces (API&rsquo;s) can also be used for presentation; but I&rsquo;ll focus here on &ldquo;out of the box&rdquo; tools.</p>
<p><strong><span style="color:#800000;">Excel</span></strong> - Microsoft&rsquo;s Excel can be used not only for single-desk analysis of data sets, but with larger datasets as well. It has interfaces into SQL Server, Analysis Services, can be connected to the PDW, and is a first-class job submission system for the Windows HPC Server. You can watch a video about Excel and big data here: <a href="http://www.microsoft.com/en-us/showcase/details.aspx?uuid=e20b7482-11c9-4965-b8f0-7fb6ac7a769f">http://www.microsoft.com/en-us/showcase/details.aspx?uuid=e20b7482-11c9-4965-b8f0-7fb6ac7a769f</a>&nbsp;and you can also connect Excel to Hadoop: <a href="http://social.technet.microsoft.com/wiki/contents/articles/how-to-connect-excel-to-hadoop-on-azure-via-hiveodbc.aspx">http://social.technet.microsoft.com/wiki/contents/articles/how-to-connect-excel-to-hadoop-on-azure-via-hiveodbc.aspx</a></p>
<p><strong><span style="color:#800000;">Reporting Services</span></strong> - Reporting Services is a SQL Server tool that can query and show data from multiple sources, all at once. It can also be used with Analysis Services. You can read more about Reporting Services here: <a href="http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/reporting-services.aspx">http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/reporting-services.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">Power View</span></strong> - Power View is a &ldquo;Self-Service&rdquo; Business Intelligence reporting tool, which can work with on-premises data in addition to SQL Azure and other data. You can read more about it and see videos of Power View in action here: <a href="http://www.microsoft.com/sqlserver/en/us/future-editions/business-intelligence/SQL-Server-2012-reporting-services.aspx">http://www.microsoft.com/sqlserver/en/us/future-editions/business-intelligence/SQL-Server-2012-reporting-services.aspx</a>&nbsp;</p>
<p><strong><span style="color:#800000;">SharePoint Services -</span></strong> Microsoft has rolled several capable tools into SharePoint as &ldquo;Services&rdquo;. This has the advantage of integrating into the working environment of many companies. You can read more about many of these reporting and analytic presentation tools here: <a href="http://technet.microsoft.com/en-us/sharepoint/ee692578">http://technet.microsoft.com/en-us/sharepoint/ee692578</a>&nbsp;</p>
<p>This is by no means an exhaustive list - more capabilities are added all the time to Microsoft&rsquo;s products, and things will surely shift and merge as time goes on. Expect today&rsquo;s &ldquo;Big Data&rdquo; to be tomorrow&rsquo;s &ldquo;Laptop Environment&rdquo;.</p>
<p><em>Tags: Azure, Business Intelligence, Cloud, Cloud Computing, Concepts, Data, Data Professional, Design, Developer, Microsoft, SQL Azure, Storage, Windows 2008, Windows Azure</em></p>
<h2>The Data Scientist</h2>
<p><em>Posted Tue, 15 Nov 2011</em></p>
<p>A new term - well, perhaps not that new - has come up and I’m actually very excited about it. The term is Data Scientist, and since it’s new, it’s fairly undefined. I’ll explain what I <em>think</em> it means, and why I’m excited about it.</p> <p>In general, I’ve found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. </p> <p>The term Data Scientist (as far as I can make out this early in its use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly. 
</p> <p>I’ve divided the knowledge base for someone who would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas. </p> <p><strong>Persistence</strong></p> <p>The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it’s just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? </p> <p>This is an important decision - it’s a foundation choice that deals not only with the expense of purchasing systems or using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also with the skillsets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization.</p> <p>Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to. 
</p> <p><strong>Processing</strong></p> <p>Once the decision is made to store the data, the next set of decisions is based around how to process the data. An RDBMS scales well to a certain level, and provides a high degree of ACID compliance as well as offering a well-known set-based language to work with this data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage. In fact, in many cases - most of the ones I’m dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. </p> <p>A Data Scientist is familiar not only with the various processing methods, but with how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one. </p> <p><strong>Presentation</strong></p> <p>This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it’s the data behind the graphics that is the most important component of truly effective displays. </p> <p>This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you’re on the right track - that are only solvable when you understand why the statistical formula works the way it does. 
</p> <p>And there are lots of ways of presenting data. Sometimes all you need is a “yes” or “no” answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In other cases, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist. </p> <p><strong>Why I’m excited</strong></p> <p>I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I’ve been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. </p> <p>Interestingly, the size of the company really isn’t germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. </p> <p>So watch this space. No, I’m not leaving Azure or distributed computing or Microsoft. In fact, I think I’m perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, that allow me to explore. And I’m happy to share what I learn along the way. 
</p> <p><em>Tags: Azure, Business Intelligence, Career, Concepts, Data, Data Professional, DBA, Developer, SQL Azure, SQL Server, Windows Azure</em></p> <p><strong>Big Data and the Cloud - More Hype or a Real Workload?</strong> - BuckWoody, Tue, 18 Oct 2011 13:57:36 GMT - <a href="http://sqlblog.com/blogs/buck_woody/archive/2011/10/18/big-data-and-the-cloud-more-hype-or-a-real-workload.aspx">http://sqlblog.com/blogs/buck_woody/archive/2011/10/18/big-data-and-the-cloud-more-hype-or-a-real-workload.aspx</a></p><p>Last week Microsoft announced several new offerings for “Big Data” - and since I’m a stickler for definitions, I wanted to make sure I understood what that really means. What is “Big Data”? What size hard drive is that? After all, my laptop has 1TB of storage - is my laptop “Big Data”?</p> <p>There are actually a few definitions for this term, most notably those involving the <a href="http://nosql.mypopescu.com/post/9621746531/a-definition-of-big-data" target="_blank">“Four V’s”: Volume, Velocity, Variety and Variability</a>. Others <a href="http://nosql.mypopescu.com/post/10120087314/big-data-and-the-4-vs-volume-velocity-variety" target="_blank">disagree with this</a> definition. I tend to try to get things into their simplest form, so I’m using this definition for myself:</p> <p align="center"><font color="#c0504d" size="3">Big data is defined as a <em>large set </em>of <em>computationally expensive </em>data that is <em>worked on simultaneously</em>.</font> </p> <p>Let me flesh that out a little. To be sure, “Big Data” has a larger size than, say, a few megabytes. The reason this is important is that it takes special hardware to be able to move large sets of data around, store it, process it and so on. (<font color="#c0504d">large set</font>)</p> <p>If you store a LOT of data, but only use a small portion of it at a time, that really isn’t super-hard to do. It’s mainly a storage issue at that point. 
But, if you do need to work with a large portion of the data at one time, then the memory, CPU and transfer components of the system have to adapt to be responsive - new ways to work with that data (game theory, knot-algorithms, map-reduce, etc.) need to be brought into play. (<font color="#c0504d">computationally expensive</font>)</p> <p>Once that data is loaded into the processing area (memory or whatever other mechanism is used) it must be worked on in parallel to come back in a reasonable time. You have two options here - you can scale the system up with more internal hardware (CPUs, memory and so on) or you can scale it out to have multiple systems work on it at the same time using paradigms such as map/reduce and so on. Actually, when you lay this out in an architecture diagram, scaling up or out doesn’t change the logical structure of the process - in scale-out, the network becomes the bus, and the nodes become more RAM and computing power. Of course, there are changes in code for how you stitch the workload back together. (<font color="#c0504d">worked on simultaneously</font>)</p> <p>So back to the original question. Is Big Data, as I have defined it here, a workload for Windows and SQL Azure? Absolutely! In fact, it’s probably one of the main workloads, and I believe it represents the latest, and perhaps also the earliest, frontier of computing. <a href="http://research.microsoft.com/en-us/um/people/gray/" target="_blank">Jim Gray</a>, a former researcher here at Microsoft and a hero of mine, was working on this very topic. I believe as he did - all computing is simply an interface over data. </p> <p>Microsoft has multiple offerings on the topic of Big Data. In posts that follow from my co-workers and me, we’ll explore when and where you use each one. 
Whether you are a data professional or a developer, this is the new frontier - <a href="http://www.straightpathsql.com/archives/2011/10/microsoft-loves-your-big-data/" target="_blank">don’t wait to educate yourself</a> on how to leverage Big Data for your organization. </p> <p><strong>Hadoop on Windows Azure and SQL Server</strong> - Microsoft’s <a href="http://www.hortonworks.com/the-whys-behind-the-microsoft-and-hortonworks-partnership/" target="_blank">partnership to include Hadoop workloads on Windows Azure</a> and <a href="http://www.microsoft.com/download/en/details.aspx?id=27584" target="_blank">SQL Server/Parallel Data Warehouse (PDW)</a></p> <p><strong>LINQ to HPC </strong>- Microsoft’s High-Performance Computing (HPC) SKU, <a href="http://blogs.technet.com/b/windowshpc/archive/2011/05/20/dryad-becomes-linq-to-hpc.aspx" target="_blank">now in Azure</a></p> <p><strong>Windows Azure Table Storage </strong>- A <a href="http://msdn.microsoft.com/en-us/library/windowsazure/hh508997.aspx" target="_blank">key/value pair type storage with full partitioning</a> that is immediately consistent, handles huge loads of data, and works with any REST-compatible language</p> <p><strong>Other offerings </strong>- Including the new <a href="http://www.microsoft.com/en-us/sqlazurelabs/default.aspx" target="_blank">Data Explorer</a>, <a href="http://research.microsoft.com/en-us/news/headlines/daytona-071811.aspx" target="_blank">Project Daytona (with a Big Data Toolkit for Scientists and researchers)</a>, <a href="http://www.microsoft.com/sqlserver/en/us/future-editions/SQL-Server-2012-breakthrough-insight.aspx" target="_blank">Power View</a> and more. </p> <p>The era of Big Data is here. And you can use Windows and SQL Azure to bring it to your organization. 
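</p> <p>The scale-out idea above - split the work, then stitch the results back together - is the heart of the map/reduce pattern. Here is a toy, single-process sketch in Python (real systems such as Hadoop distribute both phases across many nodes):</p>

```python
# Toy single-process map/reduce word count; real systems distribute both
# phases across nodes and shuffle the intermediate pairs between them.
from collections import defaultdict

def map_phase(docs):
    for doc in docs:                  # each "mapper" emits (word, 1) pairs
        for word in doc.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    totals = defaultdict(int)         # the "reducer" sums counts per key
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

counts = reduce_phase(map_phase(["big data", "Big Data workloads"]))
print(counts)  # {'big': 2, 'data': 2, 'workloads': 1}
```

<p>The stitching cost mentioned above lives between the two phases; in real distributed jobs that shuffle step often dominates the run time.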
</p> <p><em>Tags: Azure, Azure Use Cases, Career, Cloud, Cloud Computing, Concepts, Conferences, Data, Data Professional, DBA, Developer, Microsoft, PASS, Policy Based Management, SQL Azure, SQL Server, SQLServer, Storage, Windows Azure</em></p> <p><strong>Using the @ in SQL Azure Connections</strong> - BuckWoody, Tue, 21 Jun 2011 13:49:00 GMT - <a href="http://sqlblog.com/blogs/buck_woody/archive/2011/06/21/using-the-in-sql-azure-connections.aspx">http://sqlblog.com/blogs/buck_woody/archive/2011/06/21/using-the-in-sql-azure-connections.aspx</a></p><p>The other day I was working with a client on an application they were changing to a hybrid architecture &ndash; some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections &ndash; the first was that all communications to SQL Azure need to be encrypted. It&rsquo;s a simple addition to the connection string, depending on the library you use.</p>
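<p>As a quick sketch of that fix (Python here only for brevity - the <strong>Encrypt=True</strong> and <strong>Trusted_Connection=False</strong> settings apply whatever client library builds the string, and the server and credential values are placeholders):</p>

```python
# Placeholder server and credentials; the point is the two settings at the end.
def azure_conn_string(server, database, user, password):
    return (
        f"Server=tcp:{server}.database.windows.net;"
        f"Database={database};"
        f"User ID={user};Password={password};"
        "Trusted_Connection=False;Encrypt=True;"   # encrypt every connection
    )

cs = azure_conn_string("myserver", "myDataBase", "LoginName", "myPassword")
assert "Encrypt=True;" in cs and "Trusted_Connection=False;" in cs
```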
<p>Which brought up another interesting point. They had been using something that looked like this, using the .NET provider:</p>
<div class="csharpcode">
<pre class="alt">Server=tcp:[serverName].database.windows.net;Database=myDataBase;</pre>
<pre class="alt">User ID=LoginName;Password=myPassword;</pre>
<pre class="alt">Trusted_Connection=False;Encrypt=True;</pre>
</div>
<p><span class="cs_v">This includes most of the formatting needed for SQL Azure. It specifies <strong>TCP</strong> as the transport mechanism, the <strong>database name</strong> is included, <strong>Trusted_Connection</strong> is <strong>off</strong>, and <strong>encryption</strong> is <strong>on</strong>. But it needed one more change: </span></p>
<div class="csharpcode">
<pre class="alt">Server=tcp:[serverName].database.windows.net;Database=myDataBase;</pre>
<pre class="alt">User ID=[LoginName]@[serverName];Password=myPassword;</pre>
<pre class="alt">Trusted_Connection=False;Encrypt=True;</pre>
</div>
<p><span class="cs_v"><span class="cs_v">Notice the difference? It&rsquo;s the <em><strong>User ID</strong> </em>parameter. It includes the <strong>@</strong> symbol and the <strong>name of the server</strong> &ndash; not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other?</span></span></p>
<p><span class="cs_v"><span class="cs_v">It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change, so I don&rsquo;t list them here) the server name parameter isn&rsquo;t sent in a way the load balancer understands, so you need to include the server name right in the login so that the system can parse it correctly. Keep in mind, the string limit for that field is 128 characters &ndash; so take the @ symbol and the server name into consideration when choosing user names. </span></span></p>
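<p><span class="cs_v">A small sketch of that rule (Python for brevity; the login and server names are placeholders):</span></p>

```python
# Placeholder names; enforces the user@server format and the 128-character
# limit mentioned above.
def format_user_id(login, server):
    user_id = f"{login}@{server}"   # short server name, not the full DNS name
    if len(user_id) > 128:
        raise ValueError("User ID exceeds the 128-character limit")
    return user_id

print(format_user_id("LoginName", "myserver"))  # LoginName@myserver
```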
<p><span class="cs_v">The user connection info is detailed here: <a href="http://msdn.microsoft.com/en-us/library/ee336268.aspx">http://msdn.microsoft.com/en-us/library/ee336268.aspx</a>&nbsp;</span> <span class="cs_v">Upshot? Include the @servername in your connection string just to be safe. And plan for those extra characters&hellip;</span></p>
<p><em>Tags: Best Practices, Cloud, Cloud Computing, Connections, Data Professional, DBA, SQL Azure</em></p> <p><strong>SQL Azure Use Case: Shared Storage Application</strong> - BuckWoody, Tue, 26 Apr 2011 13:33:50 GMT - <a href="http://sqlblog.com/blogs/buck_woody/archive/2011/04/26/sql-azure-use-case-shared-storage-application.aspx">http://sqlblog.com/blogs/buck_woody/archive/2011/04/26/sql-azure-use-case-shared-storage-application.aspx</a></p><p><span style="font-size:x-small;"><em><span style="font-size:small;">This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: </span><a href="http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx"><span style="font-size:small;"><u><font color="#800080">http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx</font></u></span></a><span style="font-size:small;"> </span></em></span></p> <p><strong><span style="font-size:small;">Description:</span></strong></p> <p><span style="font-size:small;">On-premise data will be a part of computing for quite some time – perhaps permanently. Bandwidth requirements, security, or even financial considerations for large data sets often dictate that relational (or non-relational) systems will be maintained locally in many organizations, especially in enterprise computing. </span></p> <p><span style="font-size:small;">But distributed data systems are useful in many situations. 
Organizations may wish to store a portion of data off-site, either for sharing the data with other applications (including web-based applications) or as a supplement to a High-Availability and Disaster Recovery (HADR) strategy.</span></p> <span style="font-size:small;"> <p><strong><span style="font-size:small;">Implementation:</span></strong></p> <p><span style="font-size:small;">SQL Azure can be used to add an additional option to an HADR strategy by copying off portions (or all) of an on-premise database system.</span></p> <p><span style="font-size:small;"><a href="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/3386.sql_2D00_aHADR_5F00_2.png"><img style="background-image:none;border-bottom:0px;border-left:0px;padding-left:0px;padding-right:0px;display:inline;border-top:0px;border-right:0px;padding-top:0px;" title="sql-aHADR" border="0" alt="sql-aHADR" src="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/4265.sql_2D00_aHADR_5F00_thumb.png" width="298" height="181" /></a></span></p> <p><span style="font-size:small;">In this arrangement, on-premise systems remain as they are. Data is replicated using many technologies, such as SQL Server Integration Services (SSIS), scripts, or Microsoft’s Sync Framework to a SQL Azure database. This data can be kept “cold”, meaning that a manual process is required to bring the data back, or as a “warm” standby using connection string management in the application.</span></p> <p><span style="font-size:small;">Recently we architected a solution where a company kept a rolling two-week window of data replicated to SQL Azure using the <a href="http://msdn.microsoft.com/en-us/sync/default.aspx" target="_blank">Sync Framework</a>. 
The application, a compiled EXE running on users’ systems, had a “switch connections” button that allowed the users to take a laptop to another location, select that option, and continue working from anywhere they had Internet connectivity. This required forethought and planning, and did not replace their primary HADR systems, but it did allow them to continue operations in the case of a severe outage at multiple sites. Since they are an emergency services provider, this gave them the highest redundancy.</span></p> <p><span style="font-size:small;">Another option is to amalgamate data from disparate sources. </span></p> <p><span style="font-size:small;"><a href="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/6320.sql_2D00_aHyb_5F00_2.png"><img style="background-image:none;border-bottom:0px;border-left:0px;padding-left:0px;padding-right:0px;display:inline;border-top:0px;border-right:0px;padding-top:0px;" title="sql-aHyb" border="0" alt="sql-aHyb" src="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/2625.sql_2D00_aHyb_5F00_thumb.png" width="342" height="134" /></a></span></p> <p><span style="font-size:small;">In this arrangement, two or more data services (one of which is SQL Azure) are accessed by a single program. The program queries each system independently, and using LINQ a single query can work across all of the data, assuming there is some sort of natural or artificial “key” that can join the data sets together. The user programs simply see the combined result as a single data source, unaware of the underlying data sets. This allows great flexibility and agility in the downstream program. 
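</span></p> <p><span style="font-size:small;">A minimal sketch of that cross-source join (Python instead of LINQ, with two in-memory lists standing in for the independent queries; the field names are hypothetical):</span></p>

```python
# Two in-memory lists stand in for the independent queries against the local
# system and SQL Azure; "sku" is the shared key stitching the sets together.
local_orders = [{"sku": "A1", "qty": 3}, {"sku": "B2", "qty": 1}]
azure_catalog = [{"sku": "A1", "price": 9.99}, {"sku": "B2", "price": 4.50}]

by_sku = {row["sku"]: row for row in azure_catalog}
joined = [
    {**order, "price": by_sku[order["sku"]]["price"]}
    for order in local_orders
    if order["sku"] in by_sku
]
print(joined[0])  # {'sku': 'A1', 'qty': 3, 'price': 9.99}
```

<p><span style="font-size:small;">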
The upstream data sources can change as long as the elements are kept consistent.</span></p> <p><span style="font-size:small;">There are performance and security implications to amalgamated data systems, but if architected carefully they provide multiple benefits. A few of these are that other systems can access the individual data sources, reporting is simplified and standardized, and multiple copies of data are eliminated.</span></p> <span style="font-size:small;"> <p><strong><span style="font-size:small;">Resources:</span></strong></p> <p><span style="font-size:small;">You can read more about the Sync Framework and SQL Azure here: <a href="http://social.technet.microsoft.com/wiki/contents/articles/sync-framework-sql-server-to-sql-azure-synchronization.aspx">http://social.technet.microsoft.com/wiki/contents/articles/sync-framework-sql-server-to-sql-azure-synchronization.aspx</a>&#160;</span></p> <p><span style="font-size:small;">If you are new to LINQ, you can find more resources on it here: <a href="http://msdn.microsoft.com/en-us/library/bb308959.aspx">http://msdn.microsoft.com/en-us/library/bb308959.aspx</a>&#160;</span></p> </span></span> <p><em>Tags: Azure, Cloud, Cloud Computing, Concepts, Data, Data Professional, Design, Developer, Disaster Recovery, Learning Plan, Platform Independence, SQL Azure, SQL Server, SSIS, Windows Azure</em></p> <p><strong>SQL Azure Use Case: Web-based Applications</strong> - BuckWoody, Tue, 19 Apr 2011 14:38:40 GMT - <a href="http://sqlblog.com/blogs/buck_woody/archive/2011/04/19/sql-azure-use-case-web-based-applications.aspx">http://sqlblog.com/blogs/buck_woody/archive/2011/04/19/sql-azure-use-case-web-based-applications.aspx</a></p> <p><span style="font-size:x-small;"><em><span style="font-size:small;">This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. 
You can find the main post here: </span><a href="http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx"><span style="font-size:small;"><u><font color="#800080">http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx</font></u></span></a><span style="font-size:small;"> </span></em></span></p> <p><strong><span style="font-size:small;">Description:</span></strong></p> <p><span style="font-size:small;">Some applications lend themselves to having the entire architecture placed on an outside provider such as Azure. And in some cases, you’re interested in using a Relational Database Management System (RDBMS):</span></p> <ul> <li><span style="font-size:small;">Web application with fast meta-data search requirements </span></li> <li><span style="font-size:small;"><span style="font-size:small;">Web application requiring </span>high levels of consistency and/or atomicity</span></li> <li><span style="font-size:small;">Common-use data, shared among multiple web applications</span></li> <li><span style="font-size:small;">Durable data storage for stateless applications</span></li> </ul> <p><span style="font-size:small;">Unless you need the data to be kept local (see the hybrid application use-case), SQL Azure is a good fit. SQL Azure shares the same logical “backbone” as Windows Azure, so it works well for an Azure application that needs structured storage, either exclusively or alongside Blob or Windows Azure Table storage (key/value pair storage, more akin to NoSQL than an RDBMS in architecture).</span></p> <p><strong><span style="font-size:small;">Implementation:</span></strong></p> <p><span style="font-size:small;">This is actually one of the easiest concepts to display for a SQL Azure architecture. It’s logically the same as keeping the application completely local, with the exception that you don’t have to install or maintain anything. 
Note that although Windows Azure applications are a common use, you can use any web program to access the SQL Azure database:</span></p> <p><span style="font-size:small;"><a href="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/5850.webapp_5F00_2.png"><img style="background-image:none;border-bottom:0px;border-left:0px;padding-left:0px;padding-right:0px;display:inline;border-top:0px;border-right:0px;padding-top:0px;" title="webapp" border="0" alt="webapp" src="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/8510.webapp_5F00_thumb.png" width="313" height="72" /></a></span></p> <p><span style="font-size:small;"></span></p> <p><span style="font-size:small;">Considerations here include the decisions on when to use structured storage for a datum or some other storage. In many configurations, you might want multiple storage paradigms. Here is one such example architecture, although many others are possible:</span></p> <p><span style="font-size:small;"><a href="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/2555.webapp2_5F00_2.png"><img style="background-image:none;border-bottom:0px;border-left:0px;padding-left:0px;padding-right:0px;display:inline;border-top:0px;border-right:0px;padding-top:0px;" title="webapp2" border="0" alt="webapp2" src="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/1157.webapp2_5F00_thumb.png" width="572" height="200" /></a></span></p> <p><span style="font-size:small;">In this diagram I’m indicating a simple shopping-cart application. A</span><span style="font-size:small;"> Windows Azure Web Role provides a “front end” or presentation layer to the client. A Worker Role provides computation functions, and the Queue maintains the state information so that the application is scalable. 
SQL Azure stores meta-data about the items in a catalogue a user can purchase from, such as name, size, price and so on. This provides fast lookup, and allows re-use of code that existed on an on-premise SQL Server.</span></p> <p><span style="font-size:small;">Once the item is located, a reference in a SQL Azure column (from a standard SQL query) locates the GUID for the object’s picture, stored in Windows Azure Blob storage, and displays that to the user. The Worker Role moves the information for the customer’s order from the Queue to a Windows Azure Table object.</span></p> <p><span style="font-size:small;">Of course, you could architect all of these data elements into only one or another kind of storage. In this case, the cost, performance and other characteristics of each data requirement dictated this selection.</span></p> <p><strong><span style="font-size:small;">Resources: </span></strong></p> <p><span style="font-size:small;">Storage abstractions and their scalability targets: <a href="http://blogs.msdn.com/b/windowsazurestorage/archive/2010/05/10/windows-azure-storage-abstractions-and-their-scalability-targets.aspx">http://blogs.msdn.com/b/windowsazurestorage/archive/2010/05/10/windows-azure-storage-abstractions-and-their-scalability-targets.aspx</a>&#160;</span></p> <p><em>Tags: Application Architecture, Azure, Cloud, Cloud Computing, Concepts, Data Professional, Design, Developer, Development, Learning, SQL Azure, Windows Azure</em></p> <p><strong>SQL Azure Use Case: Shared Data Hub</strong> - BuckWoody, Tue, 05 Apr 2011 14:10:50 GMT - <a href="http://sqlblog.com/blogs/buck_woody/archive/2011/04/05/sql-azure-use-case-shared-data-hub.aspx">http://sqlblog.com/blogs/buck_woody/archive/2011/04/05/sql-azure-use-case-shared-data-hub.aspx</a></p><p><span style="font-size:x-small;"><em><span style="font-size:small;">This is one in a series of posts on when and where to use a distributed architecture design in your 
organization's computing needs. You can find the main post here: </span><a href="http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx"><span style="font-size:small;"><u><font color="#800080">http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx</font></u></span></a><span style="font-size:small;"> </span></em></span></p> <p><strong><span style="font-size:small;">Description:</span></strong></p> <p><font size="2">Organizations often need to share all or part of a data set, which is consumed by other systems. These systems can be on-premise or at another location, or even at a different organization. </font></p> <p><font size="2">Many times these systems use a well-defined data interchange system, such as EDI or other standards. In the case of a trusted system, simply using a direct connection into another database is the process used to transfer data. This process might be one-way or bi-directional.</font></p> <p><font size="2">But there are systems that transfer data back and forth in stages using intermediate systems. A typical data flow in this case looks similar to the following:</font></p> <p><font size="2"><a href="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/7823.SADH_2D00_1_5F00_2.png"><img style="background-image:none;border-bottom:0px;border-left:0px;padding-left:0px;padding-right:0px;display:inline;border-top:0px;border-right:0px;padding-top:0px;" title="SADH-1" border="0" alt="SADH-1" src="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/7206.SADH_2D00_1_5F00_thumb.png" width="550" height="227" /></a></font></p> <p><font size="2">In this example, the owning system contains data set A. This is sent to a staging system or server, where the receiving system collects it. 
The receiving system contains data set B and works with data set A to create a new data set, C. This new data is consumed by the original system to complete the cycle. A concrete example is an inventory control system. Data set A is the original inventory list, shipped to a manufacturer. The manufacturer consumes the available inventory, orders components, and returns the ordering bid with any changes to the staging server as data set C. The data is consumed by the originating system and the components are noted in the overall flow of data set A.</font></p> <blockquote> <p><font size="2"><em>Note: Normally this is solved with a full EDI implementation, but this process is still a common practice.</em></font></p> </blockquote> <p><font size="2">There are other examples, but the general concept is one where two, possibly untrusted, systems need to share a common source of data.</font></p> <p><strong><span style="font-size:small;">Implementation:</span></strong> </p> <p><font size="2">One possible solution is to segregate the data being transferred into an agreed-upon set of entities that can be added or edited in real time, where both systems (or many) feed from the same data set instead of shipping the data. This removes latency, improves data quality, and shares the cost of the data. 
Also, security is increased because there are no shared logins - each firm gets its own.</font></p> <p><font size="2"><a href="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/6232.SADH_2D00_2_5F00_2.png"><img style="background-image:none;border-bottom:0px;border-left:0px;padding-left:0px;padding-right:0px;display:inline;border-top:0px;border-right:0px;padding-top:0px;" title="SADH-2" border="0" alt="SADH-2" src="http://blogs.msdn.com/cfs-file.ashx/__key/CommunityServer-Blogs-Components-WeblogFiles/00-00-00-79-79-metablogapi/0044.SADH_2D00_2_5F00_thumb.png" width="412" height="283" /></a></font></p> <p><font size="2">One consideration with this layout is that the source systems must be altered to use a shared data set. If that is not possible, the arrangement still works: the system can be used as it was before - as a data transfer - but the data can be cleansed in real time by both systems. It’s also a more secure, shared-cost system even if used in the original manner.</font></p> <p><font size="2"><strong><span style="font-size:small;">Resources:</span></strong></font></p> <p><font size="2">Security is a concern in this arrangement, so it’s best to understand exactly how security works in SQL Azure: <a href="http://msdn.microsoft.com/en-us/library/ff394108.aspx">http://msdn.microsoft.com/en-us/library/ff394108.aspx</a>&#160;</font></p> <p>Another possibility for solving this pattern is to use Data Sync, in many different arrangements that involve SQL Azure. You can learn more about it here: <a href="http://blogs.msdn.com/b/sync/archive/2010/10/07/windows-azure-sync-service-demo-available-for-download.aspx">http://blogs.msdn.com/b/sync/archive/2010/10/07/windows-azure-sync-service-demo-available-for-download.aspx</a></p> <p><em>Tags: Azure, Azure Use Cases, Cloud, Cloud Computing, Concepts, Data, Data Professional, SQL Azure</em></p>