Alex James asks “Would it be useful to add more metadata capabilities to the protocol?” in his Queryable OData Metadata post of 4/22/2010:

Today if you browse an OData service there are two ways to learn about it: Service Documents and $metadata.

The question is this: Would it be useful to add more metadata capabilities to the protocol?

Service Documents

You can look at the 'Atom Service Document', available from the root of the service, which gives you the titles and URLs for each of the service's feeds.

That's it though.

The service document doesn't tell you anything about the shape of the entries exposed by that feed or anything about relationships between the feeds.

$metadata

This is where the $metadata comes in. It returns an EDMX document that contains a complete description of the feeds, types, properties, relationships exposed by the service in EDM.

Most OData client libraries use this information to drive the generation of client-side classes to represent server types and aid programmability.

There are some limitations with $metadata though:

It is all or nothing. Lots of metadata means a big document.

It forces the server to prepare metadata for every type in the system, imposing an up-front, rather than on-demand, model on the service.

It isn't queryable. So if you want to find Types that have an Address property, you have to retrieve the whole EDMX and search the XML yourself.
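The manual workaround described here (pull down the whole EDMX and search it yourself) looks roughly like this sketch; the namespace is the EDM v2 schema namespace, and the sample types are invented for illustration:

```python
# Sketch of "search the XML yourself": find entity types declaring an
# Address property in a tiny EDMX-like schema (sample types are made up).
import xml.etree.ElementTree as ET

EDM_NS = "http://schemas.microsoft.com/ado/2008/09/edm"  # EDM v2 namespace

edmx = """<Schema xmlns="{ns}" Namespace="Sample">
  <EntityType Name="Customer">
    <Property Name="Name" Type="Edm.String"/>
    <Property Name="Address" Type="Edm.String"/>
  </EntityType>
  <EntityType Name="Order">
    <Property Name="Total" Type="Edm.Decimal"/>
  </EntityType>
</Schema>""".format(ns=EDM_NS)

def types_with_property(schema_xml, prop_name):
    # Walk every EntityType and keep the ones declaring the property.
    root = ET.fromstring(schema_xml)
    hits = []
    for etype in root.findall("{%s}EntityType" % EDM_NS):
        for prop in etype.findall("{%s}Property" % EDM_NS):
            if prop.get("Name") == prop_name:
                hits.append(etype.get("Name"))
    return hits

print(types_with_property(edmx, "Address"))  # ['Customer']
```

With a queryable $metadata endpoint, this whole client-side scan would collapse into a single filtered request against the metadata service.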

Queryable $metadata

To address these issues one thing we've been considering - I even have a prototype - is extending $metadata so that it becomes just another OData Service. This time exposing the metadata of the service rather than its data.

You could think of this as Reflection for OData.

Alex continues with detailed query examples and sample result documents.

Features:

Run Multiple SQL Statements: You can run the same statement multiple times and specify a wait time between each execution.

Run inline SQL or RPC Calls: You can choose to run a SQL statement or make an RPC call for stored procedures.

Run statements in parallel: You can also run the same statement on multiple threads, simulating call concurrency. When running multiple threads you can also request to start each thread at slightly different times to avoid hitting the database server all at once.

Run pre, between and post Statements: You can run a preliminary command before the test begins, run a command after each individual test, and one at the very end once all tests have been run.

View Performance Results: You can view performance results in three ways:

By looking at the last test's results

By looking at multiple tests in a comparative window

By exporting the performance metrics into Excel for further analysis

View Errors: If errors are detected during test execution, such as a database timeout or an invalid SQL statement, they will be captured and made available for review in a window. You can also decide to abort a test if too many errors have been captured.
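The parallel-execution behavior described in the feature list (multiple threads, staggered starts, a wait between executions) can be sketched in a few lines. This is not the tool's actual code; run_statement is a hypothetical stand-in for a real database round-trip, and all numbers are illustrative:

```python
# Sketch: run the same statement on several threads, staggering each
# thread's start and pausing between executions.
import threading
import time

results = []
lock = threading.Lock()

def run_statement(sql):
    # Stand-in for an actual database call.
    return "ok:%s" % sql

def worker(sql, iterations, wait_between, start_delay):
    time.sleep(start_delay)          # stagger thread start times
    for _ in range(iterations):
        r = run_statement(sql)
        with lock:
            results.append(r)
        time.sleep(wait_between)     # wait between executions

threads = [
    threading.Thread(target=worker, args=("SELECT 1", 3, 0.01, i * 0.005))
    for i in range(4)                # four concurrent "clients"
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 4 threads x 3 iterations = 12
```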

The project is licensed as open source under a Microsoft Public License (MS-PL).

I am building an application to take performance metrics in both SQL Azure and against a local SQL Server... and I had a small bug in there... which tried to issue over 2,000 connection requests in SQL Azure within a few seconds... needless to say, SQL Azure blocked that quickly. :)

I’m happy to announce that the WF4 Activity Pack CTP1 is now available for download. This release consists of two different activities, database activities for interacting with a database and the much anticipated State Machine for authoring event driven workflows. Here are a few key links:

The goal for this release is to provide you with an early look at some activities that we think are important and to start a conversation around the features and capability for these activities. Check these out, send us some feedback, and enjoy!

Windows Server AppFabric has extensive monitoring capabilities, which go beyond simply writing events to the server event log. In this video, Michael McKeown explains how you can configure the monitoring level and scope for your workflows and services deployed in IIS with AppFabric.

As part of the Real World Windows Azure series, we talked to Matthew Davey, Founder of TicketDirect, about using the Windows Azure platform to run the company's ticketing system. Here's what he had to say:

MSDN: What service does TicketDirect provide?

Davey: We provide online and on-premises ticketing services for 80 venues in New Zealand and Australia. We've really grown from a small rugby-specific ticketing business to a company that was responsible for 45 percent of professionally ticketed event sales in New Zealand in the first half of 2009.

MSDN: What was the biggest challenge TicketDirect faced prior to adopting the Windows Azure platform?

Davey: The problem in the ticketing business is that we have highly variable load patterns, so we have highly elastic needs. We can sell a few hundred tickets an hour for most of the week, but when a big event goes on sale at 9:00 in the morning, we get an enormous spike in load against our application. It's a difficult issue to resolve because to handle peak loads, we'd have to invest so much in our server infrastructure that it just becomes economically infeasible. Instead, we handle server demand spikes by limiting the number of requests passed on to the server. This keeps server demand manageable but requires sacrificing some of the user experience.
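The request-limiting approach Davey describes, capping how many requests reach the server and shedding the rest, can be sketched as a simple concurrency gate. The class and numbers below are invented for illustration, not TicketDirect's implementation:

```python
# Sketch: admit at most N concurrent requests to the backend; turn the
# rest away instead of letting a demand spike overwhelm the server.
import threading

class RequestGate:
    def __init__(self, max_concurrent):
        self._sem = threading.BoundedSemaphore(max_concurrent)

    def try_handle(self, handler):
        if not self._sem.acquire(blocking=False):
            return "busy"            # shed load; caller can retry later
        try:
            return handler()
        finally:
            self._sem.release()

gate = RequestGate(max_concurrent=2)
print(gate.try_handle(lambda: "sold"))  # capacity available -> "sold"
```

The trade-off the interview mentions is visible here: the gate keeps the backend healthy, but a shed request degrades the user experience.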

MSDN: Can you describe the solution you built with the Windows Azure platform to help you manage peak demand in an economical way?

Davey: We are in the process of migrating our existing application, which is built on the Microsoft Visual Basic 6 development system and runs against Microsoft SQL Server data management software. In our first phase, we're taking advantage of the scalability in Microsoft SQL Azure to improve the speed of ticket sales. We can spin up hundreds of SQL Azure databases during peak times, and then we switch them off when we no longer need them, and we only pay for what we use. For the second phase, we're working on a new browser-based sales application that incorporates the Microsoft Silverlight 4 browser plug-in technology. In the final phase of migration, we will rewrite the existing Visual Basic application as a Windows Azure application.

One change you may have noticed in the latest operating system release in Windows Azure is that the dynamic compression module has been turned on in IIS. This means that without doing anything, you should now see the default dynamic compression settings take effect.

Changing the Defaults

Compression settings are primarily controlled by two configuration elements: <urlCompression> and <httpCompression>.

<urlCompression> can be configured at the application level in web.config, and it lets you turn on and off dynamic and static compression. By default, dynamic compression is turned off, so you may want to add the following line to your web.config file (but see the word of caution at the end of this post):
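A typical web.config entry for enabling both static and dynamic compression uses the standard IIS <urlCompression> element:

```xml
<configuration>
  <system.webServer>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>
```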

<httpCompression> can only be configured at the level of applicationHost.config, so with today’s web role, you’ll get configuration that looks like the following (though the directory attribute will be different):
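The IIS defaults for <httpCompression> in applicationHost.config look roughly like the following (as noted, the directory attribute will differ on a Windows Azure instance):

```xml
<httpCompression
    directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
  <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
  <dynamicTypes>
    <add mimeType="text/*" enabled="true" />
    <add mimeType="message/*" enabled="true" />
    <add mimeType="application/x-javascript" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </dynamicTypes>
  <staticTypes>
    <add mimeType="text/*" enabled="true" />
    <add mimeType="message/*" enabled="true" />
    <add mimeType="application/x-javascript" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </staticTypes>
</httpCompression>
```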

Unfortunately, you can’t change applicationHost.config settings in today’s Windows Azure web role. However, you can edit this section in applicationHost.config using the Hosted Web Core Worker Role project.

A Word of Caution

Compression settings are tricky, and adding more compression will not necessarily increase the performance of your application. (Sometimes it will do the exact opposite!) Be sure to do your research first, and then test your new settings to make sure they’re having the effect you expected.

… Microsoft defines Windows Azure as "… a cloud services operating system that serves as the development, service hosting and service management environment for the Windows Azure platform. Windows Azure provides developers with on-demand compute and storage to host, scale and manage web applications on the internet through Microsoft datacenters" (see http://www.microsoft.com/windowsazure/windowsazure).

Any application running in Windows Azure can have either of these roles: Web role or Worker role. The former is typically implemented using ASP.NET running in the context of IIS; the latter is a batch job that receives input through Windows Azure storage. Note that each of these roles executes in its own Windows virtual machine.
The goal of Windows Azure platform is to provide a foundation for running Windows applications in the cloud. Windows Azure runs on systems in Microsoft data centers. In essence, Windows Azure is a service that you can use to run applications and store data on Internet-accessible systems. Windows Azure platform supports languages that are targeted at the .NET CLR—for example, C# and VB.NET—as well as Java, Ruby, PHP, and Python. …
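The web-role/worker-role split described above can be sketched with Python's standard queue standing in for Windows Azure Queue storage; the function names and job strings are illustrative only:

```python
# Sketch: a web role hands work off through a storage queue; a worker
# role polls that queue and does the batch processing.
import queue

work_queue = queue.Queue()          # stand-in for an Azure storage queue

def web_role_enqueue(job):
    # Web role: accept a request, hand the batch work off via storage.
    work_queue.put(job)

def worker_role_poll():
    # Worker role: pull input from storage and process it.
    processed = []
    while not work_queue.empty():
        processed.append(work_queue.get().upper())
    return processed

web_role_enqueue("resize image 1")
web_role_enqueue("resize image 2")
print(worker_role_poll())  # ['RESIZE IMAGE 1', 'RESIZE IMAGE 2']
```

Because each role runs in its own virtual machine, the queue is what decouples them: either side can be scaled or restarted independently.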

Lori MacVittie doesn’t believe that “authorized business users will be able to tap that computing power without a lot of know-how” in her The Cloud of Damocles post of 4/22/2010:

…with clouds, the business user can become king. Creating a private cloud will take considerable IT skill, but once one is built, authorized business users will be able to tap that computing power without a lot of know-how.

Really? I’ve worked in a lot of places, including enterprises. Maybe your enterprise is different, maybe your business users are savvier than ones with which I’ve worked, but I just don’t see this happening on a regular basis. Business users aren’t likely to be tapping into anything except the extra hours IT suddenly has on their hands because they’ve been freed from the tedious tasks of deploying and configuring servers. Business users define requirements, they perform user-acceptance testing, they set the service-level parameters for what’s acceptable performance and availability for the applications they’ve commissioned (paid for) to be developed and/or deployed for business purposes.

But they don’t push buttons and deploy applications, nor do they configure them, nor do they want to perform those tasks. If they did, they’d be in – wait for it, wait for it – IT.

But Lori, what about SaaS (Software as a Service)? That’s cloud computing. Business users tap into that, don’t they? No, no they don’t. They tap into the software – that’s why it’s called Software as a Service and not Cloud Computing as a Service. The SaaS model also requires, necessarily, that the business processes and functions of the software being offered are highly commoditized across a wide variety of industries or at a minimum can be easily configured to support varying workflow processes. CRM. SFA. E-mail. Document management. HR. Payroll. These types of applications are sufficiently consistent in data schemas, workflows, and terminology across industries to make them a viable SaaS solution. Other applications? Likely not simply because they require much more customization and integration; work that isn’t going to be accomplished by business users – not at the implementation level, at least.

Packaging up an application into a virtual machine or deploying it as SaaS and making it available for self-service provisioning via an external or internal cloud does not eliminate the need for integration, upgrades, patches, configuration, and performance tuning. The cloud is not a magical land in which applications execute flawlessly or integrate themselves. That means someone - and it ain’t gonna be a business user - is going to have to take care of that application.

The battle of the Cloud Frameworks has started, and it will look a lot like the battle of the Application Servers which played out over the last decade and a half. Cloud Frameworks (which manage IT automation and runtime outsourcing) are to the Programmable Datacenter what Application Servers are to the individual IT server. In the longer term, these battlefronts may merge, but for now we’ve been transported back in time, to the early days of Web programming. The underlying dynamic is the same. It starts with a disruptive IT event (part new technology, part new mindset). 15 years ago the disruptive event was the Web. Today it’s Cloud Computing.

Stage 1

It always starts with very simple use cases. For the Web, in the mid-nineties, the basic use case was “how do I return HTML that is generated by a script as opposed to a static file”. For Cloud Computing today, it is “how do I programmatically create, launch and stop servers as opposed to having to physically install them”.

In that sense, the IaaS APIs of today are the equivalent of the Common Gateway Interface (CGI) circa 1993/1994. Like the EC2 API and its brethren, CGI was not optimized, not polished, but it met the basic use cases and allowed many developers to write their first Web apps (which we just called “CGI scripts” at the time).
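The basic "stage 1" use case, programmatically creating, launching and stopping servers, can be modeled in a few lines. The class and method names below are invented for illustration and are not any real provider's API, though EC2 and its brethren expose the same verbs over HTTP:

```python
# Toy model of a stage-1 IaaS API: create/launch/stop servers by id.
import itertools

class ToyIaaS:
    _ids = itertools.count(1)

    def __init__(self):
        self.servers = {}

    def create_server(self, image):
        # "Create" replaces physically racking a machine.
        sid = "srv-%d" % next(self._ids)
        self.servers[sid] = {"image": image, "state": "stopped"}
        return sid

    def launch(self, sid):
        self.servers[sid]["state"] = "running"

    def stop(self, sid):
        self.servers[sid]["state"] = "stopped"

cloud = ToyIaaS()
sid = cloud.create_server("linux-base")
cloud.launch(sid)
print(cloud.servers[sid]["state"])  # running
```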

Stage 2

But the limitations soon became apparent. In the CGI case, it had to do with performance (the cost of the “one process per request” approach). Plus, the business potential was becoming clearer and attracted a different breed of contenders than just academic and research institutions. So we got NSAPI, ISAPI, FastCGI, Apache Modules, JServ, ZDAC…

We haven’t reached that stage for Cloud yet. That will be when the IaaS APIs start to support events, enumerations, queries, federated identity etc…

Stage 3

Stage 2 looked like the real deal, when we were in it, but little did we know that we were still just nibbling on the hors d’oeuvres. And it was short-lived. People quickly decided that they wanted more than a way to handle HTTP requests. If the Web was going to be central to most programs, then all aspects of programming had to fit well in the context of the Web. We didn’t want Web servers anymore, we wanted application servers (re-purposing a term that had been used for client-server). It needed more features, covering data access, encapsulation, UI frameworks, integration, sessions. It also needed to meet non-functional requirements: availability, scalability (hello clustering), management, identity…

That turned into the battle between the various Java application servers as well as between Java and Microsoft (with .Net coming along), along with other technology stacks. That’s where things got really interesting too, because we explored different ways to attack the problem. People could still program at the HTTP request level. They could use MVC frameworks, ColdFusion/ASP/JSP/PHP-style markup-driven applications, or portals and other higher-level modular authoring frameworks. They got access to adapters, message buses, process flows and other asynchronous mechanisms. It became clear that there was not just one way to write Web applications. And the discovery is still going on, as illustrated by the later emergence of Ruby on Rails and similar frameworks.

Stage 4

Stage 3 is not over for Web applications, but stage 4 is already there, as illustrated by the fact that some of the gurus of stage 3 have jumped to stage 4. It’s when the Web is everywhere. Clients are everywhere and so are servers for that matter. The distinction blurs. We’re just starting to figure out the applications that will define this stage, and the frameworks that will best support them. The game is far from over. …

William continues with “So what does it mean for Cloud Frameworks?,” “It’s early,” “No need to rush standards,” “Winners and losers,” “New Roles,” “It’s the stack” and concludes with an “Integration” topic:

Integration

… If indeed we can go by the history of Application Server to predict the future of Cloud Frameworks, then we’ll have a few stacks (with different levels of completeness, standardized or proprietary). This is what happened for Web development (the JEE stack, the .Net stack, a more loosely-defined alternative stack which is mostly open-source, niche stacks like the backend offered by Adobe for Flash apps, etc) and at some point the effort moved from focusing on standardizing the different application environment technology alternatives (e.g. J2EE) towards standardizing how the different platforms can interoperate (e.g. WS-*). I expect the same thing for Cloud Frameworks, especially as they grow out of stages 1 and 2 and embrace what we call today PaaS. At which point the two battlefields (Application Servers and Cloud Frameworks) will merge. And when this happens, I just can’t picture how one stack/framework will suffice for all. So we’ll have to define meaningful integration between them and make them work.

If you’re a spectator, grab plenty of popcorn. If you’re a soldier in this battle, get ready for a long campaign.

Stephanie Overby asks “Standard cloud computing contracts are one-sided documents that impose responsibility for security and data protection on the customer, disclaim all liability, offer no warranties, and give the vendor the right to suspend service at will. So why would you bother to sign on the dotted line?” as a preface to her How to Negotiate a Better Cloud Computing Contract post to CIO.com of 4/21/2010:

The typical cloud computing contract can look downright simple to an experienced IT outsourcing customer accustomed to inking pacts hundreds of pages long that outline service levels and penalties, pricing and benchmarks, processes and procedures, security and business continuity requirements, and clauses delineating the rights and responsibilities of the IT services supplier and customer.

And that simplicity, say IT outsourcing experts, is the problem with cloud computing.

"Failure to understand the true meaning of the cloud and to address the serious legal and contractual issues associated with cloud computing can be catastrophic," says Daniel Masur, a partner in the Washington, D.C. office of law firm Mayer Brown. "The data security issues are particularly challenging, and failure to address them in the contract can expose a customer to serious violations of applicable privacy laws."

If a cloud services contract (whether it's for software-, infrastructure-, or platform-as-a-service) seems less complex, that's because it's designed to offer products and services "as is"—without any vendor representations or warranties, responsibility for adequate security or data protection, or liability for damages, says Masur. (See Cloud-Computing Services: "Fine Print" Disappointment Forecasted.)

Cloud service providers will tell you the simplicity is precisely the point. They can offer customers low-cost, instantly available, pay-per-use options for everything from infrastructure on-demand to desktop support to business applications only by pooling resources and putting the onus for issues like data location or disaster recovery on the client. Adding more robust contractual protections erodes their value proposition.

"It is reasonable for vendors, particularly those who provide both traditional and cloud-type services, to point out that the further they are getting away from standard contracts—and, by implication, standard services—the more difficult it is for them to close the business case," says Doug Plotkin, head of U.S. sourcing for PA Consulting Group. "Much of the economic benefit that the cloud can deliver is predicated on the services—and the agreements—being standard."

Thus, the average cloud contract on the street is a one-sided document with little room for customer-specific protection or customization, says Masur. The question for new cloud computing customers is, Should you sign on that dotted line?

And the frustrating answer is, Sometimes.

"More robust contractual protection may or may not be the correct answer," says Masur. "It depends."

Stephanie continues with “When to Negotiate a Better Cloud Services Contract” advice. Frankly, I doubt if most prospective Windows Azure and SQL Azure customers will have the clout to negotiate terms with Microsoft.

@HOME WITH WINDOWS AZURE

I’m really excited to announce a project my colleagues Jim, John and I have been working on. We wanted to come up with a project that would: 1) be fun for users to learn Azure, 2) help illustrate scale, 3) do something useful, and 4) be fun to develop (from our end).

I think we got it! Here is a rundown:

Elevate your skills with Windows Azure in this hands-on workshop! In this event we’ll guide you through the process of building and deploying a large scale Azure application. Forget about “hello world”! In less than two hours we’ll build and deploy a real cloud app that leverages the Azure data center and helps make a difference in the world. Yes, in addition to building an application that will leave you with a rock-solid understanding of the Azure platform, the solution you deploy will contribute back to Stanford’s Folding@home distributed computing project. There’s no cost to you to participate in this session; each attendee will receive a temporary, self-expiring, full-access account to work with Azure for a period of 2 weeks.

Receive a temporary, self-expiring, full-access account to work with Azure for a period of 2 weeks at no cost; accounts will be emailed to all registered attendees 24-48 hours in advance of each event.

Build and deploy a real cloud app that leverages the Azure data center

Who should attend?

Open to developers with an interest in exploring Windows Azure through a short, hands-on workshop. …

Check Brian’s post for the day’s agenda.

PREREQUISITES

The prerequisites are pretty straightforward, and we ask that you come prepared to participate in this event by installing the required software in advance of the Live Meeting event.

First of all: thank you for attending the sessions Kevin Dockx and I gave at TechDays 2010 Portugal! It's a wonder we made it there with all the ash clouds and volcanic interference from Iceland.

Just Another Wordpress Weblog, But More Cloudy

Abstract: “While working together with Microsoft on the Windows Azure SDK for PHP, we found that we needed a popular example application hosted on Microsoft’s Windows Azure. Wordpress was an obvious choice, but not an obvious task. Learn more about Windows Azure, the PHP SDK that we developed, SQL Azure and about the problems we faced porting an existing PHP application to Windows Azure.”

A few weeks back there was a Windows Azure Firestarter event in Redmond, which I had the pleasure of speaking at. If you are after a short look at the platform, then this is a great place to start, with speakers such as Brad Calder and David Robinson. Unfortunately it doesn’t look like the “lap around” session Steve Marx did is actually available yet, but I’ll update this post when it is.

The videos are now available for consumption on Channel 9.

Tomorrow (Friday 23/4/2010) I am delivering a session at the Cloud Grid Exchange in London at SkillsMatter (a top training company and superb supporter of development communities).

To be perfectly honest – I’m more interested in attending than presenting as the sessions and speaker line up look great. But in the middle of all that I will be doing the following (rather cheekily named) session:

Looking at the Clouds through dirty Windows

Many developers assume that the Microsoft Windows Azure Platform for Cloud Computing is only relevant if you develop solutions using Microsoft Visual Studio and the .NET Framework. The reality is somewhat different. In the same way that developers can build great applications on Windows Server using a variety of programming languages, developers can do the same for Azure. Java, Tomcat, PHP, Ruby, Python, MySQL and more all work great on Azure. In this session we will take a lap around the services offered by the Azure PaaS and demonstrate just how easy it is to build and deploy applications built in .NET and other technologies.

The session will be a mix of slides and demos – currently I plan to demo .NET and Ruby on Rails running on Azure – but I may flex that depending on how the morning sessions go and who turns up.

We’re getting very close to the 2nd ALT.NET Open Spaces Conference in Houston. This year we’re doing it a tad differently, and we’re holding 2 half-day workshops on Friday, April 30th.

Deep Dive into Windows Azure

Scott (who recently re-joined Microsoft) will go from soup-to-nuts and show you how to build applications on Windows Azure. He has a ton of material and it will be a very hands-on workshop. If you have any plans (or even qualms) about building applications in the cloud, you should attend this workshop.

Windows Azure is a key part of Microsoft’s “Cloud” strategy moving into the future, but of course it needs people to use it and develop for it for it to be truly successful. They are piloting a new way of training developers & architects in Azure, via self-paced, web-based training…best of all, it’s FREE!

The method is one that I’m quite familiar with which aims to offer the best features of classroom training without the hassles and expense of travel, hotels, being out of the office for days etc. It utilises:

Interactive Live Meeting sessions with a tutor

On-line videos

Hands on Labs

E-Learning

Weekly Assessments

to cover off the topics, and you don’t need to go anywhere! The course lasts for 6 weeks from:

This is aimed at developers, architects, programmers and system designers and recommends at least 6 months’ experience programming in .NET and Visual Studio. It will take around 4 to 5 hours a week to research and complete the tasks, and there are timelines for submitting the work. However, successful completion gets you a “Microsoft Certificate of Completion”.

This is a new approach from Microsoft and one that I hope will be expanded out to other product areas.

Register:

If you’re technically minded and interested in Azure, sign up…and get any colleagues/friends that would be interested to sign up too! I’ve registered and am looking forward to it, so hopefully I’ll see you there.

Today, we got the chance to sit down with Aprimo, an on-demand marketing automation company that has built their software business around scaling their own cloud infrastructure with VMware vCenter. Aprimo has optimized its offerings to scale with customer growth and leverage best-in-class hardware to match innovation in the software layers it develops.

In this discussion, we found less need for discussing private vs. public cloud. Instead, we found more focus on performance and speed-to-market as key drivers for moving a virtualization strategy into personal cloud infrastructure reality.

The story of Aprimo starts with virtualization - and has led to the company defining the boundaries of its cloud offering and product architecture around the benefits of scaling resources on demand.

Aprimo uses a Microsoft .Net three-tier architecture with MSSQL in the back-end. All of the three tiers (front-end, business logic, database) run in virtual containers that are monitored with vCenter.

Performance is the question that Aprimo studied when bringing vendors on board. The company has relationships with EMC, Cisco, and HP for the three key parts of the technology stack.

vCenter joins these offerings together and offers the company quick response to new customer requests. As with many businesses, marketing demand can come in waves, and this architecture is designed to scale around the unknown and to be agile enough to support the marketing calendar.

Here is a diagram showing the core services VMware vCenter is focused on:

We had the chance to explore the customer experience of build-your-own-cloud with John Gilmartin, Director of Product Marketing at VMware. We asked him whether VMware sells clouds, or whether instead its tools build clouds.

What we found is that it is a bit of both. Like a data center itself, or a complex application, building your own cloud can be a multi-faceted event. Customers are using vCenter as a building block to manage the resources and enabling automation around business processes.

It was great to be back at Under The Radar this year. I wrote about disruptive cloud computing start-ups that I saw at Under The Radar last year. Since then, cloud computing has gained significant momentum. This was evident from talking to the entrepreneurs who pitched their start-ups this year. At the conference there was no discussion of what cloud computing is and why anyone should use it. It was all about how and not why. We have crossed the chasm. The companies who presented want to solve “cloud scale” problems as they relate to databases, infrastructure, development, management, etc. This year, I have decided to break down my impressions into more than one post.
NoSQL has seen staggering innovation in the last year. Here are the two companies in the NoSQL category that I liked at Under The Radar:

NorthScale was in stealth mode for a while and officially launched four weeks back. Their product is essentially a commercial version of memcached that sits in front of an RDBMS to help customers deal with the scaling bottlenecks of a typical large RDBMS deployment. This is not a unique concept – developers have been using memcached for a while for horizontal cloud-like scaling. However, it is an interesting offering that attempts to productize an open source component. Cloudera has achieved reasonable success with commercializing Hadoop. It is good to see more companies believing in the open source business model. They have another product called membase, which is a replicated persistence store for memcached – yes, a persistence layer on top of a persistence layer. This is designed to provide eventual consistency with tunable blocking and non-blocking I/Os. NorthScale has signed up Heroku and Zynga as customers and they are already making money.
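The memcached-in-front-of-an-RDBMS pattern described above is the classic cache-aside read. Here is a minimal sketch, not NorthScale's code, in which a dict stands in for a memcached cluster and db_query for an expensive SQL round-trip:

```python
# Sketch: cache-aside reads. On a miss, hit the database once and
# populate the cache; subsequent reads skip the RDBMS entirely.
cache = {}                      # stand-in for a memcached cluster
db_calls = []                   # track how often the database is hit

def db_query(key):
    db_calls.append(key)        # stand-in for an expensive SQL query
    return "row-for-%s" % key

def get(key):
    if key in cache:            # cache hit: no database work
        return cache[key]
    value = db_query(key)       # cache miss: one trip to the RDBMS...
    cache[key] = value          # ...then remember the answer
    return value

get("user:42")
get("user:42")
print(len(db_calls))  # 1 -- the second read was served from cache
```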

As more and more deployments face the scaling issues, Northscale does have an interesting value proposition to help customers with their scaling pain by selling them an aspirin or vicodin. Northscale won the best in category award. Check out their pitch and the Q&A [here].

GenieDB is a UK-based start-up that offers a product that allows developers to use MySQL as a relational database as well as a key-value store. It has support for replication with immediate consistency. A few weeks back I wrote a post - NoSQL is not SQL and that’s a problem. GenieDB seems to solve that problem to some extent. Much of the transactional enterprise software still runs on an RDBMS and depends on the data being immediately consistent. The enterprise software can certainly leverage key-value stores for certain features where an RDBMS is simply overhead. However, using a key-value store that is not part of the same logical data source is an impediment in many different ways. Developers want to access data from a single logical system. GenieDB allows table joins between SQL and NoSQL stores. I also like their vertical approach of targeting specific popular platforms on top of MySQL, such as Wordpress and Drupal. They have plans to support Rails by supporting ActiveRecord natively on their platform. This is a vitamin that, if sold well, has significant potential.
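The idea of joining relational rows against a key-value store exposed as a table can be illustrated with SQLite standing in for both stores in one logical database; the schema below is invented for illustration and is not GenieDB's actual interface:

```python
# Sketch: join a relational table against a key-value table living in
# the same logical data source, so no client-side stitching is needed.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (id INTEGER, author TEXT)")
db.execute("CREATE TABLE kv (k TEXT, v TEXT)")   # the key-value side
db.execute("INSERT INTO posts VALUES (1, 'alice')")
db.execute("INSERT INTO kv VALUES ('alice:karma', '42')")

# Join SQL rows to key-value entries by composing the key in the query.
row = db.execute(
    """SELECT p.id, kv.v
       FROM posts p JOIN kv ON kv.k = p.author || ':karma'"""
).fetchone()
print(row)  # (1, '42')
```

The point of the sketch is the single logical system: because both "stores" answer to one query engine, the join needs no application-level glue.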

They didn’t win any prize at the conference. I believe it wasn't for lack of a good product; they failed to convey in their pitch the magnitude of the problem they could help solve. My advice to them would be to dial up their marketing, hone the value proposition, and set up business development and operations in the US. On a side note, the founder and CEO, Dr. Jack Kreindler, is a “real” doctor: a physician who paid his way through medical school by building healthcare IT systems. Way to go doc! Check out their pitch and the Q&A [here].

Alex James asks “Would it be useful to add more metadata capabilities to the protocol?” in their Queryable OData Metadata post of 4/22/2010:

Today if you browse an OData service there are two ways to learn about it: Service Documents and $metadata.

The question is this: Would it be useful to add more metadata capabilities to the protocol?

Service Documents

You can look at the 'Atom Service Document', available from the root of the service, which gives you the titles and URLs for each of the service's feeds.

That's it though.

The service document doesn't tell you anything about the shape of the entries exposed by that feed or anything about relationships between the feeds.

$metadata

This is where $metadata comes in. It returns an EDMX document that contains a complete description, in EDM, of the feeds, types, properties, and relationships exposed by the service.

Most OData client libraries use this information to drive the generation of client-side classes to represent server types and aid programmability.

There are some limitations with $metadata though:

It is all or nothing. Lots of metadata means a big document.

It forces the server to prepare metadata for every type in the system, imposing an up-front, rather than on-demand, model on the service.

It isn't queryable. So if you want to find types that have an Address property, you have to retrieve the whole EDMX and search the XML yourself.
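As a concrete illustration of that last limitation, here is a minimal Python sketch that searches an inline EDMX-style fragment for entity types declaring an Address property. The fragment and type names are made up, and the namespace URI shown is the EDM 2.0 one; a real client would download the full $metadata document and use the namespace its service actually emits.

```python
import xml.etree.ElementTree as ET

# Illustrative EDM namespace; the URI varies with the EDM version in use.
EDM_NS = "http://schemas.microsoft.com/ado/2008/09/edm"

# A tiny stand-in for a real $metadata download.
edmx_sample = """<Schema xmlns="{ns}" Namespace="Demo">
  <EntityType Name="Customer">
    <Property Name="Name" Type="Edm.String" />
    <Property Name="Address" Type="Edm.String" />
  </EntityType>
  <EntityType Name="Order">
    <Property Name="Total" Type="Edm.Decimal" />
  </EntityType>
</Schema>""".format(ns=EDM_NS)

def types_with_property(schema_xml, property_name):
    """Return the names of EntityTypes declaring the given property."""
    root = ET.fromstring(schema_xml)
    matches = []
    for entity in root.findall(f"{{{EDM_NS}}}EntityType"):
        for prop in entity.findall(f"{{{EDM_NS}}}Property"):
            if prop.get("Name") == property_name:
                matches.append(entity.get("Name"))
    return matches

print(types_with_property(edmx_sample, "Address"))  # ['Customer']
```

This is exactly the client-side XML grubbing a queryable $metadata endpoint would make unnecessary: the filter would instead run on the server.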

Queryable $metadata

To address these issues, one thing we've been considering - I even have a prototype - is extending $metadata so that it becomes just another OData service, this time exposing the metadata of the service rather than its data.

You could think of this as Reflection for OData.

Alex continues with detailed query examples and sample result documents.

Features:

Run Multiple SQL Statements: You can run the same statement multiple times and specify a wait time between each execution.

Run Inline SQL or RPC Calls: You can choose to run a SQL statement or make an RPC call for stored procedures.

Run Statements in Parallel: You can also run the same statement on multiple threads, simulating call concurrency. When running multiple threads, you can also request that each thread start at slightly different times to avoid hitting the database server all at once.

Run Pre, Between and Post Statements: You can run a preliminary command before the test begins, run a command after each individual test, and run one at the very end once all tests have been run.

View Performance Results: You can view performance results in three ways:

By looking at the last test's results

By looking at multiple tests in a comparative window

By exporting the performance metrics into Excel for further analysis

View Errors: If errors are detected during the test execution, such as a database timeout or an invalid SQL statement, they will be captured and made available for review in a window. You can also decide to abort a test if too many errors have been captured.
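The behaviors described above (repeat counts, wait times between executions, staggered thread starts, and error capture with an abort threshold) can be sketched roughly as follows. This is not the tool's actual code; the function and parameter names are made up for illustration, and an in-memory SQLite database stands in for SQL Server or SQL Azure.

```python
import sqlite3
import threading
import time

def run_statement(statement, executions=3, wait_seconds=0.0,
                  threads=1, stagger_seconds=0.0, max_errors=None):
    """Run `statement` repeatedly on several threads, recording
    per-execution latency and capturing any errors for later review."""
    timings, errors = [], []
    lock = threading.Lock()

    def worker(thread_index):
        # Staggered start, so threads don't hit the server all at once.
        time.sleep(thread_index * stagger_seconds)
        conn = sqlite3.connect(":memory:")  # one connection per thread
        for _ in range(executions):
            start = time.perf_counter()
            try:
                conn.execute(statement)
                elapsed = time.perf_counter() - start
                with lock:
                    timings.append(elapsed)
            except sqlite3.Error as exc:
                with lock:
                    errors.append(str(exc))
                    # Abort this thread if too many errors were captured.
                    if max_errors is not None and len(errors) >= max_errors:
                        return
            time.sleep(wait_seconds)  # wait time between executions
        conn.close()

    workers = [threading.Thread(target=worker, args=(i,))
               for i in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return timings, errors
```

For example, `run_statement("SELECT 1", executions=2, threads=2)` would yield four timing samples and an empty error list, while an invalid statement would yield captured error messages instead.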

The project is licensed as open source under a Microsoft Public License (MS-PL).

I am building an application to take performance metrics in both SQL Azure and against a local SQL Server... and I had a small bug in there... which tried to issue over 2,000 connection requests to SQL Azure within a few seconds... needless to say, SQL Azure blocked that quickly. :)

I’m happy to announce that the WF4 Activity Pack CTP1 is now available for download. This release consists of two different activities, database activities for interacting with a database and the much anticipated State Machine for authoring event driven workflows. Here are a few key links:

The goal for this release is to provide you with an early look at some activities that we think are important and to start a conversation around the features and capability for these activities. Check these out, send us some feedback, and enjoy!

Windows Server AppFabric has extensive monitoring capabilities, which go beyond simply writing events to the server event log. In this video, Michael McKeown explains how you can configure the monitoring level and scope for your workflows and services deployed in IIS with AppFabric.

As part of the Real World Windows Azure series, we talked to Matthew Davey, Founder of TicketDirect, about using the Windows Azure platform to run the company's ticketing system. Here's what he had to say:

MSDN: What service does TicketDirect provide?

Davey: We provide online and on-premises ticketing services for 80 venues in New Zealand and Australia. We've really grown from a small rugby-specific ticketing business to a company that was responsible for 45 percent of professionally ticketed event sales in New Zealand in the first half of 2009.

MSDN: What was the biggest challenge TicketDirect faced prior to adopting the Windows Azure platform?

Davey: The problem in the ticketing business is that we have highly variable load patterns, so we have highly elastic needs. We can sell a few hundred tickets an hour for most of the week, but when a big event goes on sale at 9:00 in the morning, we get an enormous spike in load against our application. It's a difficult issue to resolve because to handle peak loads, we'd have to invest so much in our server infrastructure that it just becomes economically infeasible. Instead, we handle server demand spikes by limiting the number of requests passed on to the server. This keeps server demand manageable but requires sacrificing some of the user experience.

MSDN: Can you describe the solution you built with the Windows Azure platform to help you manage peak demand in an economical way?

Davey: We are in the process of migrating our existing application, which is built on the Microsoft Visual Basic 6 development system and runs against Microsoft SQL Server data management software. In our first phase, we're taking advantage of the scalability in Microsoft SQL Azure to improve the speed of ticket sales. We can spin up hundreds of SQL Azure databases during peak times, and then we switch them off when we no longer need them, and we only pay for what we use. For the second phase, we're working on a new browser-based sales application that incorporates the Microsoft Silverlight 4 browser plug-in technology. In the final phase of migration, we will rewrite the existing Visual Basic application as a Windows Azure application.

One change you may have noticed in the latest operating system release in Windows Azure is that the dynamic compression module has been turned on in IIS. This means that without doing anything, you should now see the default dynamic compression settings take effect.

Changing the Defaults

Compression settings are primarily controlled by two configuration elements: <urlCompression> and <httpCompression>.

<urlCompression> can be configured at the application level in web.config, and it lets you turn on and off dynamic and static compression. By default, dynamic compression is turned off, so you may want to add the following line to your web.config file (but see the word of caution at the end of this post):
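A minimal sketch of such a setting, using the attribute names from the IIS 7 `<urlCompression>` schema (the post's own snippet is not reproduced here, so treat this as illustrative rather than verbatim):

```xml
<system.webServer>
  <!-- Enable both static and dynamic compression for this application. -->
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>
```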

<httpCompression> can only be configured at the level of applicationHost.config, so with today’s web role, you’ll get configuration that looks like the following (though the directory attribute will be different):
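As a rough illustration, the stock IIS `<httpCompression>` section looks something like the following, abbreviated here; the exact scheme entries, MIME types, and the directory attribute vary by installation and are not the post's verbatim sample:

```xml
<httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
  <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
  <dynamicTypes>
    <add mimeType="text/*" enabled="true" />
    <add mimeType="application/x-javascript" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </dynamicTypes>
  <staticTypes>
    <add mimeType="text/*" enabled="true" />
    <add mimeType="application/x-javascript" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </staticTypes>
</httpCompression>
```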

Unfortunately, you can’t change applicationHost.config settings in today’s Windows Azure web role. However, you can edit this section in applicationHost.config using the Hosted Web Core Worker Role project.

A Word of Caution

Compression settings are tricky, and adding more compression will not necessarily increase the performance of your application. (Sometimes it will do the exact opposite!) Be sure to do your research first, and then test your new settings to make sure they’re having the effect you expected.

… Microsoft defines Windows Azure as "… a cloud services operating system that serves as the development, service hosting and service management environment for the Windows Azure platform. Windows Azure provides developers with on-demand compute and storage to host, scale and manage web applications on the internet through Microsoft datacenters" (see http://www.microsoft.com/windowsazure/windowsazure).

Any application running in Windows Azure can have either of these roles: Web role or Worker role. The former is typically implemented using ASP.NET running in the context of IIS; the latter is a batch job that receives input through Windows Azure storage. Note that each of these roles executes in its own Windows virtual machine.
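The two-role split shows up in a service's definition file. A rough sketch of one appears below; the role names are made up, and the element layout follows the early Windows Azure SDK schema, which varies by SDK version, so treat this as illustrative:

```xml
<!-- Hypothetical service model: one Web role (ASP.NET under IIS)
     and one Worker role for background processing. -->
<ServiceDefinition name="SampleService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebFrontEnd">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
  </WebRole>
  <WorkerRole name="QueueProcessor" />
</ServiceDefinition>
```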
The goal of the Windows Azure platform is to provide a foundation for running Windows applications in the cloud. Windows Azure runs on systems in Microsoft data centers. In essence, Windows Azure is a service that you can use to run applications and store data on Internet-accessible systems. The Windows Azure platform supports languages that are targeted at the .NET CLR—for example, C# and VB.NET—as well as Java, Ruby, PHP, and Python. …

Lori MacVittie doesn’t believe that “authorized business users will be able to tap that computing power without a lot of know-how” in her The Cloud of Damocles post of 4/22/2010:

…with clouds, the business user can become king. Creating a private cloud will take considerable IT skill, but once one is built, authorized business users will be able to tap that computing power without a lot of know-how.

Really? I’ve worked in a lot of places, including enterprises. Maybe your enterprise is different, maybe your business users are savvier than ones with which I’ve worked, but I just don’t see this happening on a regular basis. Business users aren’t likely to be tapping into anything except the extra hours IT suddenly has on their hands because they’ve been freed from the tedious tasks of deploying and configuring servers. Business users define requirements, they perform user-acceptance testing, they set the service-level parameters for what’s acceptable performance and availability for the applications they’ve commissioned (paid for) to be developed and/or deployed for business purposes.

But they don’t push buttons and deploy applications, nor do they configure them, nor do they want to perform those tasks. If they did, they’d be in – wait for it, wait for it – IT.

But Lori, what about SaaS (Software as a Service)? That’s cloud computing. Business users tap into that, don’t they? No, no they don’t. They tap into the software – that’s why it’s called Software as a Service and not Cloud Computing as a Service. The SaaS model also requires, necessarily, that the business processes and functions of the software being offered are highly commoditized across a wide variety of industries or at a minimum can be easily configured to support varying workflow processes. CRM. SFA. E-mail. Document management. HR. Payroll. These types of applications are sufficiently consistent in data schemas, workflows, and terminology across industries to make them a viable SaaS solution. Other applications? Likely not, simply because they require much more customization and integration; work that isn’t going to be accomplished by business users – not at the implementation level, at least.

Packaging up an application into a virtual machine or deploying it as SaaS and making it available for self-service provisioning via an external or internal cloud does not eliminate the need for integration, upgrades, patches, configuration, and performance tuning. The cloud is not a magical land in which applications execute flawlessly or integrate themselves. That means someone - and it ain’t gonna be a business user - is going to have to take care of that application.

The battle of the Cloud Frameworks has started, and it will look a lot like the battle of the Application Servers which played out over the last decade and a half. Cloud Frameworks (which manage IT automation and runtime outsourcing) are to the Programmable Datacenter what Application Servers are to the individual IT server. In the longer term, these battlefronts may merge, but for now we’ve been transported back in time, to the early days of Web programming. The underlying dynamic is the same. It starts with a disruptive IT event (part new technology, part new mindset). 15 years ago the disruptive event was the Web. Today it’s Cloud Computing.

Stage 1

It always starts with very simple use cases. For the Web, in the mid-nineties, the basic use case was “how do I return HTML that is generated by a script as opposed to a static file”. For Cloud Computing today, it is “how do I programmatically create, launch and stop servers as opposed to having to physically install them”.

In that sense, the IaaS APIs of today are the equivalent of the Common Gateway Interface (CGI) circa 1993/1994. Like the EC2 API and its brethren, CGI was not optimized, not polished, but it met the basic use cases and allowed many developers to write their first Web apps (which we just called “CGI scripts” at the time).

Stage 2

But the limitations soon became apparent. In the CGI case, it had to do with performance (the cost of the “one process per request” approach). Plus, the business potential was becoming clearer and attracted a different breed of contenders than just academic and research institutions. So we got NSAPI, ISAPI, FastCGI, Apache Modules, JServ, ZDAC…

We haven’t reached that stage for Cloud yet. That will be when the IaaS APIs start to support events, enumerations, queries, federated identity etc…

Stage 3

Stage 2 looked like the real deal, when we were in it, but little did we know that we were still just nibbling on the hors d’oeuvres. And it was short-lived. People quickly decided that they wanted more than a way to handle HTTP requests. If the Web was going to be central to most programs, then all aspects of programming had to fit well in the context of the Web. We didn’t want Web servers anymore, we wanted application servers (re-purposing a term that had been used for client-server). It needed more features, covering data access, encapsulation, UI frameworks, integration, sessions. It also needed to meet non-functional requirements: availability, scalability (hello clustering), management, identity…

That turned into the battle between the various Java application servers as well as between Java and Microsoft (with .Net coming along), along with other technology stacks. That’s where things got really interesting too, because we explored different ways to attack the problem. People could still program at the HTTP request level. They could use MVC frameworks, ColdFusion/ASP/JSP/PHP-style markup-driven applications, or portals and other higher-level modular authoring frameworks. They got access to adapters, message buses, process flows and other asynchronous mechanisms. It became clear that there was not just one way to write Web applications. And the discovery is still going on, as illustrated by the later emergence of Ruby on Rails and similar frameworks.

Stage 4

Stage 3 is not over for Web applications, but stage 4 is already there, as illustrated by the fact that some of the gurus of stage 3 have jumped to stage 4. It’s when the Web is everywhere. Clients are everywhere and so are servers for that matter. The distinction blurs. We’re just starting to figure out the applications that will define this stage, and the frameworks that will best support them. The game is far from over. …

William continues with “So what does it mean for Cloud Frameworks?,” “It’s early,” “No need to rush standards,” “Winners and losers,” “New Roles,” “It’s the stack” and concludes with an “Integration” topic:

Integration

… If indeed we can go by the history of Application Server to predict the future of Cloud Frameworks, then we’ll have a few stacks (with different levels of completeness, standardized or proprietary). This is what happened for Web development (the JEE stack, the .Net stack, a more loosely-defined alternative stack which is mostly open-source, niche stacks like the backend offered by Adobe for Flash apps, etc) and at some point the effort moved from focusing on standardizing the different application environment technology alternatives (e.g. J2EE) towards standardizing how the different platforms can interoperate (e.g. WS-*). I expect the same thing for Cloud Frameworks, especially as they grow out of stages 1 and 2 and embrace what we call today PaaS. At which point the two battlefields (Application Servers and Cloud Frameworks) will merge. And when this happens, I just can’t picture how one stack/framework will suffice for all. So we’ll have to define meaningful integration between them and make them work.

If you’re a spectator, grab plenty of popcorn. If you’re a soldier in this battle, get ready for a long campaign.

Stephanie Overby asks “Standard cloud computing contracts are one-sided documents that impose responsibility for security and data protection on the customer, disclaim all liability, offer no warranties, and give the vendor the right to suspend service at will. So why would you bother to sign on the dotted line?” as a preface to her How to Negotiate a Better Cloud Computing Contract post to CIO.com of 4/21/2010:

The typical cloud computing contract can look downright simple to an experienced IT outsourcing customer accustomed to inking pacts hundreds of pages long that outline service levels and penalties, pricing and benchmarks, processes and procedures, security and business continuity requirements, and clauses delineating the rights and responsibilities of the IT services supplier and customer.

And that simplicity, say IT outsourcing experts, is the problem with cloud computing.

"Failure to understand the true meaning of the cloud and to address the serious legal and contractual issues associated with cloud computing can be catastrophic," says Daniel Masur, a partner in the Washington, D.C. office of law firm Mayer Brown. "The data security issues are particularly challenging, and failure to address them in the contract can expose a customer to serious violations of applicable privacy laws."

If a cloud services contract (whether it's for software-, infrastructure- or platform-as-a-service) seems less complex, that's because it's designed to offer products and services "as is"—without any vendor representations or warranties, responsibility for adequate security or data protection, or liability for damages, says Masur. (See Cloud-Computing Services: "Fine Print" Disappointment Forecasted.)

Cloud service providers will tell you the simplicity is precisely the point. They can offer customers low-cost, instantly available, pay-per-use options for everything from infrastructure on-demand to desktop support to business applications only by pooling resources and putting the onus for issues like data location or disaster recovery on the client. Adding more robust contractual protections erodes their value proposition.

"It is reasonable for vendors, particularly those who provide both traditional and cloud-type services, to point out that the further they are getting away from standard contracts—and, by implication, standard services—the more difficult it is for them to close the business case," says Doug Plotkin, head of U.S. sourcing for PA Consulting Group. "Much of the economic benefit that the cloud can deliver is predicated on the services—and the agreements—being standard."

Thus, the average cloud contract on the street is a one-sided document with little room for customer-specific protection or customization, says Masur. The question for new cloud computing customers is, Should you sign on that dotted line?

And the frustrating answer is, Sometimes.

"More robust contractual protection may or may not be the correct answer," says Masur. "It depends."

Stephanie continues with “When to Negotiate a Better Cloud Services Contract” advice. Frankly, I doubt if most prospective Windows Azure and SQL Azure customers will have the clout to negotiate terms with Microsoft.

@HOME WITH WINDOWS AZURE

I’m really excited to announce a project my colleagues Jim, John and I have been working on. We wanted to come up with a project that would: 1) be fun for users to learn Azure, 2) help illustrate scale, 3) do something useful, and 4) be fun to develop (from our end).

I think we got it! Here is a rundown:

Elevate your skills with Windows Azure in this hands-on workshop! In this event we’ll guide you through the process of building and deploying a large-scale Azure application. Forget about “hello world”! In less than two hours we’ll build and deploy a real cloud app that leverages the Azure data center and helps make a difference in the world. Yes, in addition to building an application that will leave you with a rock-solid understanding of the Azure platform, the solution you deploy will contribute back to Stanford’s Folding@home distributed computing project. There’s no cost to you to participate in this session; each attendee will receive a temporary, self-expiring, full-access account to work with Azure for a period of 2 weeks.

Receive a temporary, self-expiring, full-access account to work with Azure for a period of 2 weeks at no cost - accounts will be emailed to all registered attendees 24-48 hours in advance of each event.

Build and deploy a real cloud app that leverages the Azure data center

Who should attend?

Open to developers with an interest in exploring Windows Azure through a short, hands-on workshop. …

Check Brian’s post for the day’s agenda.

PREREQUISITES

The prerequisites are pretty straightforward, and we ask that you come prepared to participate in this event by installing the required software in advance of the Live Meeting event.

First of all: thank you for attending the sessions Kevin Dockx and I gave at TechDays 2010 Portugal! It’s a wonder we made it there, with all the ash clouds and volcanic interference from Iceland.

Just Another Wordpress Weblog, But More Cloudy

Abstract: “While working together with Microsoft on the Windows Azure SDK for PHP, we found that we needed a popular example application hosted on Microsoft’s Windows Azure. Wordpress was an obvious choice, but not an obvious task. Learn more about Windows Azure, the PHP SDK that we developed, SQL Azure and about the problems we faced porting an existing PHP application to Windows Azure.”

A few weeks back there was a Windows Azure Firestarter event in Redmond, which I had the pleasure of speaking at. If you are after a short look at the platform, then this is a great place to start, with speakers such as Brad Calder and David Robinson. Unfortunately, it doesn’t look like the “lap around the platform” session Steve Marx did is actually available yet, but I’ll update this post when it is.

The videos are now available for consumption on Channel 9. You can watch or click to the videos below.

Tomorrow (Friday 23/4/2010) I am delivering a session at the Cloud Grid Exchange in London at SkillsMatter (A top training company and superb supporter of development communities).

To be perfectly honest – I’m more interested in attending than presenting as the sessions and speaker line up look great. But in the middle of all that I will be doing the following (rather cheekily named) session:

Looking at the Clouds through dirty Windows

Many developers assume that the Microsoft Windows Azure Platform for Cloud Computing is only relevant if you develop solutions using Microsoft Visual Studio and the .NET Framework. The reality is somewhat different. In the same way that developers can build great applications on Windows Server using a variety of programming languages, developers can do the same for Azure. Java, Tomcat, PHP, Ruby, Python, MySQL and more all work great on Azure. In this session we will take a lap around the services offered by the Azure PaaS and demonstrate just how easy it is to build and deploy applications built in .NET and other technologies.

The session will be a mix of slides and demos – currently I plan to demo .NET and Ruby on Rails running on Azure – but I may flex that depending on how the morning sessions go and who turns up.

We’re getting very close to the 2nd ALT.NET Open Spaces Conference in Houston. This year we’re doing it a tad differently, and we’re holding 2 half-day workshops on Friday, April 30th.

Deep Dive into Windows Azure

Scott (who recently re-joined Microsoft) will go from soup-to-nuts and show you how to build applications on Windows Azure. He has a ton of material and it will be a very hands-on workshop. If you have any plans (or even qualms) about building applications in the cloud, you should attend this workshop.

Windows Azure is a key part of Microsoft’s “Cloud” strategy moving into the future, but of course it needs people to use it and develop for it for it to be truly successful. They are piloting a new way of training developers & architects on Azure via self-paced, web-based training… best of all, it’s FREE!

The method is one that I’m quite familiar with which aims to offer the best features of classroom training without the hassles and expense of travel, hotels, being out of the office for days etc. It utilises:

Interactive Live Meeting sessions with a tutor

On-line videos

Hands on Labs

E-Learning

Weekly Assessments

to cover off the topics, and you don’t need to go anywhere! The course lasts for 6 weeks from:

This is aimed at developers, architects, programmers and system designers, and recommends at least 6 months’ experience programming in .NET and Visual Studio. It will take around 4 to 5 hours a week to research and complete the tasks, and there are timelines etc. for submitting the work. However, successful completion gets you a “Microsoft Certificate of Completion”.

This is a new approach from Microsoft and one that I hope will be expanded out to other product areas.

Register:

If you’re technically minded and interested in Azure, sign up… and get any colleagues/friends who would be interested to sign up too! I’ve registered and am looking forward to it, so hopefully I’ll see you there.

Today, we got the chance to sit down with Aprimo, an on-demand marketing automation company that has built their software business around scaling their own cloud infrastructure with VMware vCenter. Aprimo has optimized its offerings to scale with customer growth and leverage best-in-class hardware to match innovation in the software layers it develops.

In this discussion, we found less need for discussing private vs. public cloud. Instead, we found more focus on performance and speed-to-market as key drivers for moving a virtualization strategy into personal cloud infrastructure reality.

The story of Aprimo starts with virtualization - and has led to the company defining the boundaries of its cloud offering and product architecture around the benefits of scaling resources on demand.

Aprimo uses a Microsoft .Net three-tier architecture with MSSQL in the back-end. All of the three tiers (front-end, business logic, database) run in virtual containers that are monitored with vCenter.

Performance was the key question Aprimo studied when bringing vendors on board. The company has relationships with EMC, Cisco, and HP for the three key parts of the technology stack.

vCenter joins these offerings together and offers the company quick response to new customer requests. As in many businesses, marketing demand can come in waves, and this architecture is designed to scale around the unknown and to be agile enough to support the marketing calendar.

Here is a diagram showing the core services VMware vCenter is focused on:

We had the chance to explore the customer experience of build-your-own-cloud with John Gilmartin, Director of Product Marketing at VMware. We asked him whether VMware sells clouds, or whether instead its tools build clouds.

What we found is that it is a bit of both. Like a data center itself, or a complex application, building your own cloud can be a multi-faceted effort. Customers are using vCenter as a building block to manage the resources and to enable automation around business processes.

It was great to be back at Under The Radar this year. I wrote about disruptive cloud computing start-ups that I saw at Under The Radar last year. Since then, cloud computing has gained significant momentum. This was evident from talking to the entrepreneurs who pitched their start-ups this year. At the conference there was no discussion of what cloud computing is and why anyone should use it. It was all about how, not why. We have crossed the chasm. The companies who presented want to solve “cloud scale” problems as they relate to databases, infrastructure, development, management, etc. This year, I have decided to break down my impressions into more than one post.
NoSQL has seen staggering innovation in the last year. Here are the two companies in the NoSQL category that I liked at Under The Radar:

Northscale was in stealth mode for a while and officially launched four weeks back. Their product is essentially a commercial version of memcached that sits in front of an RDBMS to help customers deal with the scaling bottlenecks of a typical large RDBMS deployment. This is not a unique concept – developers have been using memcached for a while for horizontal cloud-like scaling. However, it is an interesting offering that attempts to productize an open source component. Cloudera has achieved reasonable success with commercializing Hadoop. It is good to see more companies believing in the open source business model. They have another product called membase, which is a replicated persistence store for memcached – yes, a persistence layer on top of a persistence layer. This is designed to provide eventual consistency with tunable blocking and non-blocking I/Os. Northscale has signed up Heroku and Zynga as customers, and they are already making money.

As more and more deployments face the scaling issues, Northscale does have an interesting value proposition to help customers with their scaling pain by selling them an aspirin or vicodin. Northscale won the best in category award. Check out their pitch and the Q&A [here].

GenieDB is a UK-based start-up that offers a product which allows developers to use MySQL as both a relational database and a key-value store. It has support for replication with immediate consistency. A few weeks back I wrote a post – NoSQL is not SQL and that’s a problem. GenieDB seems to solve that problem to some extent. Much of the transactional enterprise software still runs on an RDBMS and depends on the data being immediately consistent. The enterprise software can certainly leverage key-value stores for certain features where an RDBMS is simply an overhead. However, using a key-value store that is not part of the same logical data source is an impediment in many different ways. Developers want to access data from a single logical system. GenieDB allows table joins between SQL and NoSQL stores. I also like their vertical approach of targeting specific popular platforms on top of MySQL, such as WordPress and Drupal. They have plans to support Rails by supporting ActiveRecord natively on their platform. This is a vitamin that, if sold well, has significant potential.

They didn’t win any prize at the conference. I believe it wasn’t that they lacked a good product; rather, they failed to convey in their pitch the magnitude of the problem they could help solve. My advice to them would be to dial up their marketing, hone the value proposition, and set up business development and operations in the US. On a side note, the founder and CEO, Dr. Jack Kreindler, is a “real” doctor. He is a physician who paid his way through medical school by building healthcare IT systems. Way to go, doc! Check out their pitch and the Q&A [here].

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.