I don’t agree – consider any successful CEO, musician, or master-level craftsperson – those people may have been doing the same thing for decades. They certainly haven’t damaged their careers by being the best in the world at what they do.

But I do think that if you’ve had the exact same role at a company for more than a few years, you should be able to show that you’ve done new things and been able to adapt to change. Once, when I was a manager, I had an employee tell me that he wanted a raise because he had "10 years of experience". Sadly, what he really had was 1 year of experience, 10 times. Personally, I’ve had a lot of jobs, in companies large and small, and even my own company. My tenure ranged anywhere from 2 to 6 years at a single company.

On October 1 of this year, I start another chapter in my career. No, I’m not leaving Microsoft, but I am changing jobs. That’s one of the advantages of working at a large company – it’s so large that there are a lot of different jobs that you can do without leaving. In fact, Microsoft in particular encourages us to change roles from time to time, so that we cross-pollinate within the company and challenge ourselves constantly to learn new things. In my seven years here I’ve worked on the SQL Server Product Team, as a field technical resource for SQL Server and then Windows Azure, and now I’m taking a global corporate role.

I’m joining the Office of the CTO for the Windows Azure Platform as a Worldwide Senior Technical Specialist. Our team has three charters: we support customers on critical architectures, advise our internal engineering teams on new designs for the platform, and back up the field teams as subject matter experts in various areas of Windows Azure.

My area of specialization will be data – SQL Server, Oracle, MongoDB, PostgreSQL, Hadoop on Linux, and so on, running on Windows Azure IaaS, along with Windows Azure Table storage, Windows Azure SQL Database (the artist formerly known as SQL Azure), HDInsight (Hadoop), and High-Performance Computing. My career for the last 30 years has been focused on data, and now it’s focused entirely on that space – from on-premises to distributed computing.

So should you change careers every couple of years? Absolutely. Even if you stay at your current employer, you should always be stretching, learning, and moving forward.

A lot of electronic ink has been spilled over our recent (and sudden) announcement that we are retiring the highest-level certifications here at Microsoft. At the same time, I'll be teaching a one-day class at the SQL Saturday event in Cambridge, UK on certifications. Are certifications worth the effort anymore, and should you attend the pre-con session?

Yes, and yes. I've gotten lots of certifications from Microsoft, Oracle, Sun, even Novell. I even took a class when I worked at NASA on circuitry connections - we still fixed computers back then by soldering and wiring (I'm old, don't make that face). Why did I take all these certifications? Well, in some cases it was to further my career - an employer put "Must have an MCSE" in the job requirements. But most of the time a job requirement wasn't the reason. I believe that to truly excel at your career you have to constantly learn and apply new information.

Certification certainly isn't required to learn - far from it. I've always been a self-motivated learner, but having the pressure of a timeline, a structured set of goals to hit, and an exam can be very powerful. I don't like to fail. So I studied and practiced, took practice tests, and memorized information. Did the certification make me a better professional? Absolutely. Is certification the only reason and method I used to study? Not by a long shot.

Now that I'm here at Microsoft, I have to learn metric ton-loads of new tech each week. It's overwhelming at times, so you have to develop a process to quickly learn and apply new information - and that's why I put this pre-con together. I'll share with you the way I learn, and you'll get the benefit of the learning styles and methods I've picked up from others here at Microsoft.

This is a workshop-style pre-conference session on not only how to get a Microsoft certification in SQL Server, but how to create your own study plans for any technical discipline, using SQL Server technical information as the learning material during the session. The certification information is very useful - but we'll do so much more. It's not a sit-down-and-listen session - you'll be doing lots of work, developing your own plans, and even moving around a bit. You'll learn about the certifications that exist for SQL Server, from the simplest through the highest levels (including creating your own "MCSM" program), and the resources you have for studying for them, such as books, links, videos and online courses. Using a hands-on workshop process, you'll walk through these materials and get training on both test-taking and technical topics that will prepare you to take your certifications to the level you're looking for. We'll focus on the process of learning, using technical topics dealing with SQL Server and real-world application of what you learn. You'll learn to prepare for a certification, along with invaluable technical details of the SQL Server platform.

So if you're fortunate enough to be able to travel to the UK and the beautiful city of Cambridge, join the session. You'll learn one of the most important skills there is - learning to learn.

Decision analysis - Interpreting processed data to identify patterns, make predictions, and perform data mining

Business Intelligence - Designing exploratory data analysis and visualizations, understanding business and organizational impacts, and communicating the use of data and visualization tools to stakeholders

There are of course other aspects of data science, but I believe this list covers the majority of skills I've seen in individuals with the Data Scientist title. And it is normally an individual, or at least a very limited group of people. As you examine the list above, you can see this person requires a fairly extensive technical background, and in the domain knowledge area specifically, there's a pretty large time element. That isn't to say a very bright person couldn't ramp up on these areas, just that having all of that in your portfolio takes time.

Given that these are the skillsets, why is cloud computing well suited to assisting in the data science function?

It's obvious that a researcher needs good Internet skills, beyond simply referencing a Wikipedia article - although that's certainly a good thing to include from time to time. While searching isn't specific to Windows Azure, there are platform components that allow the programming function to call out to the web for data access. Windows Azure provides a platform that supports languages from Python to F#, JavaScript (including Node.js), Java and more.

Cloud computing allows the data scientist to access data stored in Windows Azure (Blobs, Tables, Queues, and RDBMS's as a service such as SQL Server and MySQL) as well as IaaS systems that can run full RDBMS platforms such as SQL Server, Oracle, PostgreSQL and others. In addition, the Windows Azure Marketplace contains "Data as a Service" offerings, with free and fee-based data to include in a single application.

The Windows Azure Service Bus allows you to architect a Complex Event Processing (CEP) system; using SQL Server adds the StreamInsight feature, and the system can communicate with on-premises environments, Windows Azure IaaS and PaaS, and other data sources.

For data storage and computing, Windows Azure allows everything from traditional RDBMS's as described to any NoSQL engine in IaaS, on both Windows and Linux operating systems. Statistical packages such as "R" are also supported. The elasticity allows the data scientist to spin up huge clusters, such as Hadoop or other NoSQL offerings, perform some analysis, and then stop the process when complete, saving cost and bypassing the internal IT systems (which may have its own dangers, to be sure). Windows Azure also offers the High Performance Computing (HPC) version of Windows Server on Windows Azure, for large-scale massively parallel data processing, in constant and "burst" modes.
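To make the "spin up, analyze, stop" savings concrete, here's a back-of-the-envelope sketch. The node counts and the hourly rate are made-up illustrations, not real Windows Azure prices - plug in your own figures.

```python
# Back-of-the-envelope elasticity math: pay only for the hours the cluster runs.
# The rate below is a made-up illustration, not a real Windows Azure price.

def burst_cluster_cost(nodes, hours_per_run, runs_per_month, rate_per_node_hour):
    """Cost of spinning a cluster up for each analysis run, then stopping it."""
    return nodes * hours_per_run * runs_per_month * rate_per_node_hour

def always_on_cost(nodes, rate_per_node_hour, hours_in_month=730):
    """Cost of leaving the same cluster running all month."""
    return nodes * hours_in_month * rate_per_node_hour

burst = burst_cluster_cost(nodes=32, hours_per_run=4, runs_per_month=8,
                           rate_per_node_hour=0.50)
always = always_on_cost(nodes=32, rate_per_node_hour=0.50)
print(burst, always)  # the burst pattern costs a fraction of always-on
```

Even with these toy numbers, the burst pattern costs a small fraction of an always-on cluster - which is the whole argument for elasticity.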

In addition, Windows Azure has many services, such as the HDInsight Service (Hadoop on demand) and other analysis offerings, that don't even require the data scientist to stand up and manage a Virtual Machine in IaaS. For visualization, Microsoft has included the ability to use Excel with the HDInsight Service, which of course works with all Microsoft Business Intelligence functions, and there are several other data visualization tools such as Power View. You can enter the tools you have in the Microsoft stack into the BI Solution Builder (http://www.microsoft.com/en-us/bi/Products/bi-solution-builder.aspx) for more on the visualization options available to you. The data scientist can also build visualizations in web pages, on iPhone, Android or Windows mobile devices, or in full client-code installations.

Because of the need for elasticity, multiple operating systems, and changing landscapes for data and processing, data science is well served by cloud computing - and by Windows Azure in particular because of the services and features offered, not only on Microsoft Windows but also on open-source platforms.

(Note - I'll add to this post as new information is updated - latest post date is August 8th, 2013)

I've been working on cloud projects of all types for over three years now. Along the way, I've learned some basic patterns that make for a successful project - and also the things to avoid. The general steps depend a great deal on whether the project is an Infrastructure, Platform or Service deployment, and also if it is a hybrid or completely cloud architecture. In all cases, what you do before deploying the system - the "plumbing", if you will - turns out to be the key for a successful deployment.

If this is your first cloud deployment, I recommend working with your local Microsoft team or a partner you trust that has Windows Azure experience. They can help you through the process, and then you can take over from there. It's a far faster, more reliable route to a good deployment.

Accounts and Billing

Probably the most non-technical part of the project - and the one that causes the most issues - is setting up an account with the cloud provider and deciding how you pay for it. But this needs to be done first. You have three progressions: no account (everything local), optionally an MSDN account for Dev and Test, and then on to the production account.

Step One - Local Dev and Test

There are a couple of dependencies here. If you're looking at an Infrastructure as a Service (IaaS) deployment of Virtual Machines, you'll use Hyper-V to create images, whether that's on-premises or through the portal on Windows Azure. For a local system, simply create the VMs using Hyper-V, following the sizes and requirements shown here: http://www.windowsazure.com/en-us/manage/windows/common-tasks/upload-a-vhd/.

If you're deploying a Platform as a Service (PaaS) application, it's also quite simple. Download the Software Development Kit (SDK) from here: http://www.windowsazure.com/en-us/downloads/?sdk=net and then write your code. When you run the code, a Windows Azure emulator will fire up right on your laptop.

For Software as a Service (SaaS) offerings such as Windows Azure Media Services or HDInsight (Hadoop), there is no local testing other than the code or scripts you want to run. You'll simply skip to step three.

Step Two - Dev and Test on Azure

You have two options for your development and testing environment. The first is to use your Microsoft Developer Network (MSDN) account, if you have one. If you do, you have "free" Windows Azure time built right in. It isn't a separate Windows Azure, or any different from production Windows Azure - it's the same data centers, servers, services and so on; it's just billed differently.

If you don't have an MSDN subscription, you'll need to create a regular, billable account for Dev and Test. This is the same process I'll describe below.

Step Three - Deploy to Production

To set up an account, you'll need to figure out how your company wants to pay for it. Remember, this is a "pay as you go" model, so the two routes you have are to pay a monthly bill (using a corporate credit card or a purchase order) or to pay ahead of time and use the money throughout the year on an Enterprise Agreement (EA). Get with your local Microsoft team to work out the best route and price. The general process is detailed here: http://www.windowsazure.com/en-us/develop/net/tutorials/create-a-windows-azure-account/

Figure out who will control these accounts right from the start. In general, one person should control Dev and Test, another should control production. In any case, determine this before you start - I've seen projects fail not because of technical reasons, but because no one checked on whether they could pay for the service.

Speaking of pricing, there are a couple of simple calculators you can use. If you followed the process above, you already have an idea of which resources you are using, and how much of each you have used in testing. From there you can plug in the usage numbers from Dev and Test to get a prediction of how much production will cost.
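The extrapolation itself is simple arithmetic. Here's a minimal sketch of the idea - the unit names and rates below are placeholders I made up, not real Windows Azure prices; substitute the figures from the pricing calculator and your own metered Dev and Test usage.

```python
# Sketch: extrapolate a production estimate from metered Dev/Test usage.
# The rates are placeholders - plug in real figures from the pricing calculator.

RATES = {
    "compute_hours": 0.12,  # per instance-hour (placeholder)
    "storage_gb": 0.07,     # per GB-month (placeholder)
    "egress_gb": 0.12,      # per GB transferred out (placeholder)
}

def monthly_estimate(usage, scale_factor=1.0):
    """usage: measured Dev/Test consumption per unit;
    scale_factor: how many times bigger you expect production to be."""
    return sum(RATES[unit] * amount * scale_factor
               for unit, amount in usage.items())

dev_test = {"compute_hours": 1440, "storage_gb": 50, "egress_gb": 20}
print(round(monthly_estimate(dev_test, scale_factor=10), 2))
```

The point isn't the specific numbers - it's that metering Dev and Test gives you real consumption figures to scale, instead of guessing at production cost from scratch.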


Planning and Education

In all cases, you need to start with a good plan. It's true that you don't always know what you don't know, so you'll need to allow for some amount of adjustment. You still need to start with a good plan, however, even before you know what Windows Azure is or how it works. Your plan should start with what the project does when it is successful. That allows you to use the right technology to accomplish the goal.

I can't emphasize this step enough. It sounds simple - certainly you know what you want the system to do, right? Yet so often I have seen teams start with how the system should work before they consider the hard-and-fast requirements of the system. And sometimes teams are unwilling to try some other technology to solve the problem, instead clinging to the technology they know or like best.

After you have a solid understanding of the success metrics, it's time to start learning. The route I recommend is an overview of the platform's capabilities, and then a focus on the components you can use in your solution.

Connectivity

With Windows Azure, you can set up three types of connectivity from your on-premises systems to Windows Azure VMs. The first is to use a public-facing TCP/IP address. While this isn't the most secure route, it does have specific use-cases, such as a public-facing web application that you want to access from your internal systems. The Portal will show the public IP your system is assigned, and then you have control over whether any endpoints are exposed - from there you can map them to your internal endpoints on the Virtual Machine, or even load-balance them if you like. More on that here: http://www.windowsazure.com/en-us/manage/windows/how-to-guides/setup-endpoints/

The second method of connectivity is to set up a site-to-site VPN. In this option the Virtual Network you created in Azure (along with the VMs you put in that Virtual Network) is connected directly to your internal TCP/IP network - making a secure, transparent connection. The process for this connection is here: http://msdn.microsoft.com/en-us/library/windowsazure/jj156075.aspx. The third is a point-to-site VPN, which securely connects individual client machines to the Virtual Network without requiring a VPN device on your network.

Security

If you want single sign-on from your local Active Directory, you have two choices. One is to follow the process above to create the VPN connection, and then deploy a Virtual Machine in Windows Azure and run dcpromo on it. From there it's similar to an on-premises AD server. The other is to use Windows Azure Active Directory, federating it with your local directory rather than running your own domain controller in the cloud.

PaaS Deployment

For a PaaS deployment, the primary plumbing considerations are the accounts and billing decisions I described above, security, and DevOps. Accounts and billing can be more challenging in a PaaS environment, since you aren't always sure how much the service will be used and when. To gain more accurate predictions, you need to place monitoring and metrics right into your code. Your primary knobs and controls fall under Windows Azure Diagnostics - more on that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg433048.aspx. Start with the main topic and follow *all* the links on the left-hand side of that page.
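Windows Azure Diagnostics itself is configured through the .NET SDK, so I won't reproduce its API here. But the underlying principle - meter your own code so you can predict usage - is language-neutral. Here's a minimal, hypothetical stand-in in Python (the `UsageMeter` class is my illustration, not part of any Azure SDK):

```python
import time
from collections import defaultdict

# Hypothetical illustration of the principle behind putting metrics in your
# code: meter each operation so usage (and therefore cost) can be predicted.
# This is NOT the Windows Azure Diagnostics API - just a minimal stand-in.

class UsageMeter:
    def __init__(self):
        self.calls = defaultdict(int)      # operation name -> call count
        self.seconds = defaultdict(float)  # operation name -> total duration

    def record(self, operation, elapsed):
        self.calls[operation] += 1
        self.seconds[operation] += elapsed

    def timed(self, operation, fn, *args, **kwargs):
        """Run fn, recording how long the named operation took."""
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            self.record(operation, time.perf_counter() - start)

meter = UsageMeter()
meter.timed("blob_read", lambda: sum(range(1000)))
meter.timed("blob_read", lambda: sum(range(1000)))
print(meter.calls["blob_read"])  # 2
```

With counters like these flowing into a log or table, the billing extrapolation described earlier becomes a query rather than a guess.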

I’m catching up on a bunch of features, functions, updates and more learning from the TechEd Event in New Orleans recently. In fact, videos, Windows Azure documentation, and of course blogs are the new way to keep up – books are just too slow to produce to handle the pace. I thought I’d share the links I’m using:

This a decidedly non-technical post, and even a little preachy. I post it here because you, the technical professional, are the perfect audience for it.

I have enough stuff. I never think so, of course, but I do. I don’t consider myself rich, but if you have a comfortable place to sleep, enough food to eat and you can plan for your future, you are rich. And when we are rich enough to have “enough” stuff, that usually means we have too much stuff.

Stuff costs money that could be put to better use, stuff needs painting, cleaning, fueling, feeding, storage and caring for. Stuff is a burden. So I decided a few years back that I had enough stuff. We gave away a lot of things, and we don’t buy any new (meaning we didn’t have one before) things – only replacement things. We’d rather “do something” than “have something”. But even so, when birthdays, anniversaries and Christmas rolled around, we got more stuff. So I asked all of my friends and relatives to do something for me.

I ask folks that want to give me a gift (for whatever reason) to donate the price they would have paid for the gift to a charity they care about. This does a few things:

They have to find a charity to care about

The fact that I made it through a calendar year now actually means something

Someone else gets the help they need

Everybody feels better

No, I’m not saying these things so you’ll think I’m a wonderful person - the reason I’m posting this here is that as a technical professional you probably have enough stuff, like I do. So I ask you to try this out. Try it for one birthday, or one holiday, or even for a year. I can promise this: it will change your life, the life of the person who gives the gift, and the life of the person who receives it. If you do try it, I’d love to have a comment here on your thoughts.

If you've been redirected here because you posted on a forum, or asked a question in an e-mail, the person wanted you to know how to get help quickly from a group of folks who are willing to do so - but whose time is valuable. You need to put a little effort into the question first to get others to assist. This is how to do that. It will only take you a moment to read...

1. State the problem succinctly in the title

When an e-mail thread starts, or a forum post is the "head" of the conversation, you'll attract more helpers by using a descriptive headline than a vague one.

This: "Driver for Epson Line Printer Not Installing on Operating System XYZ"

Not this: "Can't print - PLEASE HELP"

2. Explain the Error Completely

Make sure you include all pertinent information in the request. More information is better; there's almost no way to add too much data to the discussion. What you were doing, what happened, what you saw, the error message, visuals, screen shots - whatever you can include.

This: "I'm getting error '5203 - Driver not compatible with Operating System since about 25 years ago' in a message box on the screen when I tried to run the SETUP.COM file from my older computer. It was a 1995 Compaq ProLiant and worked correctly there."

Not this: "I get an error message in a box. It won't install."

3. Explain what you have done to research the problem

If the first thing you do is ask a question without doing any research, you're lazy, and no one wants to help you. Using one of the many fine search engines, you can almost always find the answer to your problem. Sometimes you can't. Do yourself a favor - open a notepad app, and paste in the URLs as you look them up. If you get your answer, don't save the note. If you don't get an answer, send the list along with the problem. It will show that you've tried, and also keep people from sending you links that you've already checked.

This: "I read the fine manual, and it doesn't mention Operating System XYZ for some reason. Also, I checked the following links, but the instructions there didn't fix the problem: "

Not this: <NULL>

4. Say "Please" and "Thank You"

Remember, you're asking for help. No one owes you their valuable time. Ask politely, don't pester, endure the people who are rude to you, and when your question is answered, respond back to the thread or e-mail with a thank you to close it out. It helps others that have your same problem know that this is the correct answer.

This: "I could really use some help here - if you have any pointers or things to try, I'd appreciate it."

Not this: "I really need this done right now - why are there no responses?"

This: "Thanks for those responses - that last one did the trick. Turns out I needed a new printer anyway, didn't realize they were so inexpensive now."

Not this: <NULL>

There are a lot of motivated people that will help you. Help them do that.

Normally I try to put topics in the positive - in other words, "Do this," not "Don't do that." Sometimes, though, it's clearer to focus on what *not* to do. Popular development processes often start with screen mockups, or user input descriptions. In a scale-out pattern like Cloud Computing on Windows Azure, that's the wrong place to start.

Start with the Data

Instead, I recommend that you start with the data that a process requires. That data might be temporary or persisted, but starting with the data and its requirements helps to define the storage engine you need, and also drives everything from security to the integrity of the application. For instance, assume the requirements show that the user must enter their phone number, and that this datum is used in a contact management system further down the application chain. For that datum, you can determine what data type you need (U.S.-only or international?), the security requirements, whether it needs ACID compliance, how it will be searched and indexed, and so on. From one small data point you can extrapolate your options for storing and processing the data.

Here's the interesting part, which begins to break the patterns that we've used for decades: all of the data doesn't have the same requirements. The phone number might be best suited for a list, or an element, or a string, with either BASE or ACID requirements, based on how it is used. That means we don't have to dump everything into XML, an RDBMS, a NoSQL engine, or a flat file exclusively. In fact, one record might use all of those depending on the use-case requirements.
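You can even make this exercise mechanical. Here's a small sketch of the idea - write down each datum's requirements, then let the requirements suggest a store. The requirement names and the routing rules are my illustrative assumptions, not a standard taxonomy:

```python
# Sketch of the data-first exercise: capture each datum's requirements, then
# let those requirements - not habit - suggest a storage engine. The keys and
# routing rules here are illustrative assumptions, not a standard.

def suggest_store(requirements):
    if requirements["consistency"] == "ACID" and requirements["searchable"]:
        return "RDBMS"
    if requirements["consistency"] == "BASE" and requirements["searchable"]:
        return "NoSQL document/table store"
    return "blob or flat file"

phone_number = {
    "datum": "contact_phone",
    "type": "string",       # E.164 format if international callers are allowed
    "consistency": "BASE",  # eventual consistency is fine for a contact list
    "searchable": True,     # the contact system looks contacts up by number
}

print(suggest_store(phone_number))
```

Run the same exercise on every datum in the record, and you'll often find - as the post argues - that one record ends up spread across more than one engine.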

Next Is Data Management

With the data defined, we can move on to how to store the data. Again, the requirements now dictate whether we need full relational calculus or set-based operations, or whether we can choose another method based on the requirements for the data. And, breaking another pattern, it's OK to store the data more than once, in more than one location. We do this all the time for reporting systems and Business Intelligence systems, so this is a pattern we need to think about even for OLTP data.

Move to Data Transport

How does the data get around? We can use a connection-based method, sending the data along a transport to the storage engine, but in some cases we may want to use a cache, a queue, the Service Bus, or Complex Event Processing.

Finally, Data Processing

Most RDBMS engines, NoSQL engines, and certainly Big Data engines not only store data, but can process and manipulate it as well. It's doubtful that you'll do calculations on that phone number, right? Well, if you're the phone company, you most certainly will. And so we see that even once we've chosen the data type, storage and engine, the same element can have different computing requirements based on how it is used.

Sure, We Need A Front-End At Some Point

Not all data is entered by human hands - in fact, most data isn't. We don't really need a Graphical User Interface (GUI); we need some way for a GUI to get data into and out of the systems listed earlier.

But when we do need to allow users to enter or examine data, that should be left to the GUI that best fits the device the user has. Ever tried to use an application designed for a web browser on a phone? Or one designed for a tablet on a phone? It's usually quite painful. The siren song of "We'll just write one interface for all devices" is strong, and has beguiled many an unsuspecting architect. But they just don't work out.

Instead, focus on the data, its transport and processing. Create API calls or a message system that allows for resilient transport to the device or interface, and let it do what it does best.
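The "one data API, many front-ends" idea can be sketched in a few lines. The handler below is hypothetical - no particular web framework, and the contact record is made up - but it shows the separation: the API returns plain data and status, and each device's GUI decides how to render it.

```python
import json

# Hypothetical device-neutral data API: returns a plain payload and a status
# code, making no layout or markup decisions. Any GUI - web, phone, tablet,
# full client - renders the same payload however best fits that device.

CONTACTS = {"42": {"name": "Ada", "phone": "+1-555-0100"}}  # made-up data

def get_contact(contact_id):
    """Return (JSON body, HTTP-style status) - data only, no presentation."""
    record = CONTACTS.get(contact_id)
    if record is None:
        return json.dumps({"error": "not found"}), 404
    return json.dumps(record), 200

body, status = get_contact("42")
print(status)  # 200
```

Because the handler never emits markup, swapping the browser GUI for a phone app means writing a new front-end - not touching the data layer.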

“Case Studies” are a great tool when you’re evaluating a platform. Having evidence that other companies have deployed Windows Azure, in addition to how they did it, is a good way to plan your own deployments or even just evaluate whether Windows Azure would be a good fit. And we have several case studies you can examine here: https://www.windowsazure.com/en-us/home/case-studies/

But there aren’t a lot of them – and there isn’t much detail on some. Why not?

Well, as to the first question, we only keep a few of these on the web at any given time. They rotate based on date, industry, and other factors. If you want more, you can contact your local Microsoft team for something more specific to your situation or industry.

But even when you do, you may not get what you’re looking for – a full-scale architecture diagram with costs, names and dates, sizes and layouts and so on. That’s a tougher thing to put on the web, and here’s why: companies are reluctant (as they should be) to include that level of detail in a public place. There are legal and competitive reasons they just can’t do that. And of course at the very beginning of any project we have to get the company to agree to do a case study, and no, we don’t pay for that. The company is going to have to let us document things, work with them, and generally get involved in the project. Not a lot of companies are willing to do that. In the end, the case studies prove out that folks in your industry are using Windows Azure successfully, but not at a level of detail specific to your requirements and constraints. They are very useful to the business side of the company, but not as useful to the technical folks who want details.

So we’ve stepped into that gap with more of the “real details” on how to implement a Windows Azure solution. In most cases these are live, real apps – not just theoretical or best-practices kinds of documentation. We have a few places you can check for more detail, including the Windows Azure Training Kit, and much more.

Cloud computing is actually being largely driven by the “Consumerization of IT”. That phrase, as grammatically incorrect as it is, represents a fundamental change to the way businesses think about technology, and subsequently how the IT team provides it.

Years ago, technology was introduced at the office. No one owned a mainframe at home of course, and even in the early years of PCs few people could afford to have them in their houses. Other than game consoles and hobbyists on small computers, most full-up "PCs" were used for work.

That rapidly changed, with the lowering of costs and the miniaturization of technology. PCs and then laptops became ubiquitous in the home, and of course the "smart phone" ushered in an entire generation where the technology available to the consumer outpaced what was installed at the place of work. Many of us have laptops that are more powerful than some of the servers the company uses in some applications.

IT as a department grew up in the era of the “office-first” technology. Modern users, especially those controlling the budget, are now more “home-first” technology buyers. In extreme cases, I’ve seen IT departments relegated to maintenance of legacy systems, with new IT projects being scoped, designed and run by business teams – usually on a Cloud Computing platform. The business wants to create a technical solution as quickly as they can download an app to their phone. They want the same level of speed and ease that they have on home technology in their business work.

However, this can be problematic if not thought through. As with any new technology, Cloud Computing provides both benefits and concerns. It’s true that almost anyone can quickly stand up a server or deploy an application quickly with nothing more than an e-mail address and a credit card. But business teams are not always aware of areas such as security or similar concerns that the IT teams solved through many hours of careful planning. Unfortunately, it’s often a matter of “Ready, Fire, Aim.”

So what is the business (who wants the agility of a smart phone and a single-click solution) to do? What about the need for security, strategic design, integration and all of the other functions that IT needs to handle? This is where I think Windows Azure (not to be too sales-y) handles the situation well.

If you’re using another cloud provider, by the way, that’s fine. The concepts here are the same.

Microsoft sells an on-premises operating system, and has for many years. We’ve architected Windows Azure Virtual Machines, Active Directory Services, Platform-as-a-Service, and even the Hadoop and other offerings to work together – and with the tools that you use to manage them today, like System Center and PowerShell.

To the business team, I say this:

Work with your IT staff on projects, even if you’re designing the project and paying for it – the IT professionals can keep you out of danger. Most of them have made the mistakes you're going to make, and know what to do to avoid them.

Plan for the future – a “This is just a proof-of-concept” project becomes production in a frighteningly quick period of time.

Understand the cost model – a good architect can solve one problem in multiple ways, and cost is always a vector. The IT team can help you with this - they have the relationships with the vendors to consolidate and help you understand those costs.

To the IT team, I have this advice:

Don’t stand in the way of the business – they’ll just go around you. Work with them. Enable the business to do what they need, when they need it, and they’ll work with you. I've seen both results when I witnessed the mainframe-to-the-PC transition, and I'm seeing it again in the PC-to-the-cloud transition. Change is inevitable - get on board or become irrelevant to the people who pay your salary.

Learn the cloud. Talk to your vendor, get training, read up, ask questions. If this bothers the vendor, get a different one.

Create a self-service portal. This point may be the most important one. Become your own “Cloud”, and your users won’t need to go elsewhere. I’ll talk more about how to do this in another post.

In the end, the relationship between IT teams and the business is eerily similar to a marriage – it’s an amazing thing, it takes a lot of work to get right, and the "Consumerization of IT" is that cute person at the end of the bar. Work together, or one of you will soon be with somebody new.

Windows Azure has added Infrastructure-as-a-Service (IaaS), the ability to deploy, run and manage Virtual Machines, to its growing list of services. You can create Virtual Machines from a gallery, upload them from images you create locally on Hyper-V (that's right, you can do that, even from PowerShell), or you can jump right in and click the "Plus" sign at the bottom of the Windows Azure Management Portal, then hit Compute, then Virtual Machine, and then Quick Create. Enter a few fields and you're off to the races. (video here: http://www.youtube.com/watch?v=keGhdAqfqBA)

Of course, that works just fine - but if you do it that way you're doing it wrong. There's a better way - there are a few steps you should take before you deploy a Virtual Machine, and a few steps after. In general, the process looks like this:

Create an Affinity Group

Create a Virtual Network

Create your Storage Account and Container

Create the Virtual Machine

Optionally, add an Availability Set

Note - some of these steps need to be done only once, others once per logical group of Virtual Machines, and so on. Hit the links below for more info on when to do what.

Step One: Create an Affinity Group

An Affinity Group is a logical grouping that dictates how Windows Azure will lay out the resources assigned to it. When you create services, you can assign them to the Affinity Group, and the Fabric will deploy them into the same Datacenter cluster. Create one of these per grouping that you want.
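As a sketch, here's what this step might look like with the Windows Azure PowerShell cmdlets (the name, label and region below are placeholder values - substitute your own):

```powershell
# Create an Affinity Group once per logical grouping of resources.
# "MyAppGroup" and "West US" are example values only.
New-AzureAffinityGroup -Name "MyAppGroup" `
    -Label "My Application Resources" `
    -Location "West US"
```

You'll reference this Affinity Group name in the steps that follow, so the Fabric keeps everything close together.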

Step Two: Create a Virtual Network

The TCP/IP addresses for Windows Azure Virtual Machines come from a predefined range. You can just let us pick that for you, or you can create your own Virtual Network that has a user-defined range of DHCP addresses, and even place a DNS Server or connect your local network to the Windows Azure network for your Virtual Machines. When you create the Virtual Network, you can assign it to the Affinity Group. It's a way of grouping machine networks together. Create one of these per group of Virtual Machines that you want to have the same DHCP and DNS Server.
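In PowerShell, the Virtual Network is defined in a network configuration XML file that you apply to the subscription - a rough sketch, assuming you've exported or written a netcfg file (the file path here is a placeholder):

```powershell
# Apply a network configuration file that defines your Virtual Network:
# address space, subnets, and (optionally) a DNS server.
# "C:\config\MyVNet.netcfg" is a placeholder - export one from the
# Portal first to use as a starting template.
Set-AzureVNetConfig -ConfigurationPath "C:\config\MyVNet.netcfg"

# Verify the Virtual Network was created
Get-AzureVNetSite
```

Note that this command replaces the subscription's whole network configuration, so start from an export of the current config rather than a blank file.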

Step Three: Create a Storage Account and Container

Windows Azure Virtual Machine Disks are stored in Windows Azure Storage. That's a great benefit. If you don't define a Storage Account and a Container first, the Windows Azure Management Portal will do that for you as you create the machine. Defining that Storage Account and Container ahead of time gives you more control and a better naming convention than what we'll pick for you. Read more to find out the strategy you should use to group the disks. Also, some workloads such as SQL Server have a best-practice of creating a separate disk for data and backups.
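Here's a hedged sketch of this step in PowerShell - the account name "myvmstorage" and container name "vhds" are examples, and storage account names must be globally unique and lowercase:

```powershell
# Create the Storage Account in the same Affinity Group as the VMs
New-AzureStorageAccount -StorageAccountName "myvmstorage" `
    -AffinityGroup "MyAppGroup"

# Get the account key, build a storage context, and create a
# container to hold the Virtual Machine disks (VHDs)
$key = (Get-AzureStorageKey -StorageAccountName "myvmstorage").Primary
$ctx = New-AzureStorageContext -StorageAccountName "myvmstorage" `
    -StorageAccountKey $key
New-AzureStorageContainer -Name "vhds" -Context $ctx
```

Putting the Storage Account in the same Affinity Group keeps the disks physically close to the compute that uses them.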

Step Four: Create the Virtual Machine

You have a lot of choices here: create the Virtual Machine quickly, pick one from a Gallery with pre-loaded software (like SQL Server), and choose between Windows and Linux. You can also create the Virtual Machines by uploading an image of your own, or create them through PowerShell. With the previous steps completed, you can select those pre-defined entries as you build the machine - just select them from the drop-down menus when prompted.
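For the PowerShell route, a quick-create sketch might look like this - service name, VM name, credentials and image filter are all placeholders, and parameter names varied a bit across versions of the module:

```powershell
# Point the subscription at the Storage Account created earlier,
# so the VM disks land in your container
Set-AzureSubscription -SubscriptionName "MySubscription" `
    -CurrentStorageAccount "myvmstorage"

# Pick a gallery image (here, the first Windows Server 2012 image found)
$img = (Get-AzureVMImage |
    Where-Object { $_.Label -like "*Windows Server 2012*" })[0].ImageName

# Create the VM in one call, placed in the Affinity Group from Step One
New-AzureQuickVM -Windows `
    -ServiceName "MyAppService" `
    -Name "MyVM01" `
    -ImageName $img `
    -AdminUsername "vmadmin" `
    -Password "SomeStr0ngPassword!" `
    -AffinityGroup "MyAppGroup"
```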

Step Five: Optionally, Add an Availability Set

When you build more than one Virtual Machine (always a good idea, and required for availability) you can load-balance the IP ports for them, and you can also specify that they are on separate "fault domains" for greater availability. This is called an Availability Set. Even if you think you're only going to build one VM, you can set the Availability Set up now and use it when you grow the system. Create one of these per group of Virtual Machines you want to add into your High Availability strategy.
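A sketch of building two VMs into one Availability Set via PowerShell - all the names, sizes and credentials below are example values:

```powershell
# Pick a gallery image to build from
$img = (Get-AzureVMImage |
    Where-Object { $_.Label -like "*Windows Server 2012*" })[0].ImageName

# Configure two VMs in the same Availability Set so the Fabric
# places them on separate fault domains
$vm1 = New-AzureVMConfig -Name "MyVM01" -InstanceSize "Small" `
    -ImageName $img -AvailabilitySetName "MyAvSet" |
    Add-AzureProvisioningConfig -Windows -AdminUsername "vmadmin" `
    -Password "SomeStr0ngPassword!"
$vm2 = New-AzureVMConfig -Name "MyVM02" -InstanceSize "Small" `
    -ImageName $img -AvailabilitySetName "MyAvSet" |
    Add-AzureProvisioningConfig -Windows -AdminUsername "vmadmin" `
    -Password "SomeStr0ngPassword!"

# Deploy both into one Cloud Service, in the Affinity Group from Step One
New-AzureVM -ServiceName "MyAppService" -AffinityGroup "MyAppGroup" `
    -VMs $vm1, $vm2
```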

Maybe you just want to cut to the chase. Windows Azure. What do I *do* with it? How about...create some websites. Or website applications. Or both. For free. OK, ten of them are free, then you have to pay for more.

This week I wanted to set up a DotNetNuke Content Management System (more here if you don't know what that is: http://www.dotnetnuke.com/Products/DotNetNuke-Platform.aspx) for a charity I work with. DotNetNuke (DNN) is an open-source project - a ready-to-go, easy-to-manage place for web parts, content, blogs, whatever. With Windows Azure, you have the ability to quickly and easily create websites based on ASP or PHP code, for free. You also have the ability to use packages from a gallery, and one of those packages is DotNetNuke - both the community and the professional (pay-for) editions. I set this one up in 9 minutes:

It's easy to set these up. A simple website where you can deploy ASPX or PHP code is just a few clicks, but while I was setting up my site I figured I'd grab the screenshots and show them to you here.

After you sign up for the account, hit the http://windowsazure.com site and click the "Portal" button at the top right of the screen. Then click the second icon down, called "Web Sites":

Click the "Create a New Web Site" link on the screen and you're shown this menu:

If you want a quick web site, just click "Create Web Site". If you want another type, click "From Gallery" and make your choice:

I selected the Community Edition of DotNetNuke. That brings up a configuration panel that looks like this:

You'll have to pick a name that isn't already in use, and in my case I told the system to create a SQL Azure (Windows Azure SQL Database) database to hold the data. You'll also need to pick a region. After you make those selections, you'll need to enter the information for the database server and database:

Write down the database name, database administrator name and password - you'll need those later.

After that, you'll see the system deploying the code, creating the database server and so on.

From there, you're all set.

Whenever you want to monitor the site's health, you can just click the name here in the Portal to get more information on it:

Write down the URL of the site so you can access it in a moment. But don't move off of this screen - Windows Azure is now all set up, but DotNetNuke needs a little info when you first log in.

Before you leave the Portal, click the "DB" icon, and click the name of the database server you created a moment ago (blanked out here on my graphic):

Write down the entire server name (looks like myserver.database.windows.net) and database name (looks like mydatabasename) from this panel.

Now open your new DotNetNuke URL in your browser, and DotNetNuke will take over. You'll be asked the name of the database server (type in the whole name with the .database.windows.net part) and database name, and the database admin name and password you wrote down earlier.

You'll be asked to name the first site, and create a DotNetNuke admin name and password. Write all that down too.

Now log in to your DotNetNuke site with the admin name and change the site to whatever you like! It defaults to "Awesome Cycles", but since you probably don't want that one, read up on what to do with DNN once you're here:

Maybe you just want to cut to the chase. Windows Azure. What do I *do* with it? Let’s talk about that. One of the quickest, easiest ways to use Azure is the storage feature, as a backup target. Can Windows Azure back up data, servers, workstations or databases? Yes. Yes it can. Windows Azure storage is replicated three times in one datacenter (on different fault domains), and those three copies are replicated to another geographically separate location (but still in the same country region), so you get six copies of the data automatically. Your data stays in the datacenter you choose and is replicated within the same geo-political region. So it’s actually a great target for backups.

First, you need a storage account, a container underneath that, and a Blob object to put the backups on. Here’s how you do that (for free):

Get the Account String: Open the Portal (as above), click on Storage, select the account you want, and click Manage Keys at the bottom of the screen. Copy that string to a secure place.
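If you'd rather do this from PowerShell than the Portal, a quick sketch - the account name "mybackupstorage" is a placeholder:

```powershell
# Retrieve the storage account keys - treat these like passwords
$keys = Get-AzureStorageKey -StorageAccountName "mybackupstorage"
$keys.Primary   # copy this to a secure place

# Build a full connection string for tools that ask for one
"DefaultEndpointsProtocol=https;AccountName=mybackupstorage;" +
"AccountKey=$($keys.Primary)"
```

Many backup tools and agents will ask for either the account name and key separately, or a connection string like the one above.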

OK, now that you have all that, you’re all set. In fact, you’re all set for things like Web Sites, VM’s, Code Deployment and lots of other things, but let’s focus on backups first. What are your options?

Mount a Drive, Use as Backup Target

The easiest way to send files to Windows Azure is to mount the storage as if it is a local drive. You can use that as regular storage (I’ll talk more about this in my next post) but you can also use that as a drive letter where you can send backups. While that’s simple to implement, it isn’t always the most efficient – you’re going through a layer of storage abstraction. Still and all, it’s a good choice and quick and easy to implement. Here are some options:

Backup Servers and Workstations using Third-Party Software

In addition to (and including) the providers mentioned above, some also skip the step of mounting a drive as a backup target, and simply provide an agent or tool that backs up straight to Azure.

Backup Servers and Workstations using Hardware

StorSimple – a hardware appliance that can act as storage or backups, with encryption, de-duplication, compression and a Hierarchical Storage Management concept: http://www.storsimple.com/total-storage/

Backup Servers and Workstations using Data Protection Manager

Data Protection Manager is a feature of the System Center suite. The latest versions allow you to incrementally back up Servers and even Workstations and Laptops straight to Windows Azure. The beauty of this feature is that if the user is in a remote office or traveling, the data will flow up to Windows Azure from wherever they are.

"DevOps" (short for Development and Operations) is one of a group of new terms such as "Cloud", "Big Data" and "Data Scientist" - words that sit somewhere between marketing and tasks we've actually had around in other forms for years. However, working in a Distributed Environment (both on- and off-premises) like Windows Azure does bring a new set of tasks to the operations we currently perform in Information Technology.

Before I offer some guidance here, I need to carefully define the term "DevOps" as I use it. There are other definitions that involve Application Lifecycle Management (ALM) and standard operations policies, and you're free to use those as well, but this is the definition I'll use for this post: By DevOps I mean those tasks involved with deploying, managing and monitoring a Windows Azure (or hybrid) project.

Another caveat: This is a non-authoritative, non-comprehensive post. I'll include only an outline of the major tasks, not a complete manual on the topic. There's enough knowledge needed on this topic for at least a whitepaper or two, and perhaps even a book, but for the moment I wanted to get some information out to ensure you have something to work from until those come along. This is primarily a list of resources for a DevOps team.

Deployment

The first task after the design of the project is deployment. The deployment method depends on the type of solution; Windows Azure has the ability to run VM's, software code, or provide services that are already created (such as Active Directory).

Whenever I present at a conference, I try and make sure to include references to the topics I discuss in the session. That means you either need a lot of handouts, or I need to wait for you to take lots of notes. While note-taking is essential, writing out web links (especially long ones) is not a good use of your time. So I post the references here on my blog, with the tag “Link Lists” and you can simply write down one small URL to get to them all.

This topic deals with the skills needed to become a data professional. I’ll include references here on the role of a data professional, and also some places where you can drill in further for the skills that you need to fill those roles. I’ll try and keep this list updated, and if you have some information on any of these topics, feel free to leave that as a comment below. This list isn’t meant to be an exhaustive web search of all the technologies and concepts I mentioned, but it does cover the references I cited in the talk.