AzureDirectory

AzureDirectory smartly uses local file storage to cache files as they are created and automatically pushes them to blob storage as appropriate. Likewise, it smartly caches blob files back to a client when they change. This provides a nice blend of just-in-time syncing of data local to indexers or searchers across multiple machines.

With the flexibility that Lucene provides over data in memory versus storage and the just-in-time blob transfer that AzureDirectory provides, you have great control over the composability of where data is indexed and how it is consumed. To be more concrete: you can have 1..N Worker roles adding documents to an index, and 1..N searcher Web roles searching over the catalog in near real time.

(Remember that each Worker and Web role incurs individual compute charges of $0.12/hour.)

Thermous continues with sample code and a reference to “a LINQ to Lucene provider on CodePlex, which allows you to define your schema as a strongly typed object and execute LINQ expressions against the index.”

SQL Azure currently supports 1 GB and 10 GB databases. If you want to store larger amounts of data in SQL Azure you can divide your tables across multiple SQL Azure databases. This article will discuss how to use a middle layer to join two tables on different SQL Azure databases using LINQ. This technique vertically partitions your data in SQL Azure.

In this version of vertical partitioning for SQL Azure we are dividing all the tables in the schema across two or more SQL Azure databases. In choosing which tables to group together on a single database you need to understand how large each of your tables is and its potential future growth – the goal is to distribute the tables evenly so that each database is about the same size.
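As a rough sketch of that "evenly distribute" goal (not from the article), a greedy pass that always places the next-largest table into the currently smallest database gets close to balanced sizes; the table names and sizes below are hypothetical:

```python
def assign_tables(table_sizes, database_count):
    """Greedy balancing: place each table (largest first) into the
    currently smallest database so final sizes come out roughly even."""
    databases = [{"tables": [], "size": 0} for _ in range(database_count)]
    for name, size in sorted(table_sizes.items(), key=lambda kv: -kv[1]):
        target = min(databases, key=lambda db: db["size"])
        target["tables"].append(name)
        target["size"] += size
    return databases
```

For four tables of 5, 4, 3 and 2 GB split across two databases, this yields two 7 GB groups.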

There is also a performance gain to be obtained from partitioning your database. Since SQL Azure spreads your databases across different physical machines, you can get more CPU and RAM resources by partitioning your workload. For example, if you partition your database across ten 1 GB SQL Azure databases, you get 10X the CPU and memory resources. There is a case study (found here) by TicketDirect, who partitioned their workload across hundreds of SQL Azure databases during peak load.

When partitioning your workload across SQL Azure databases, you lose some of the features of having all the tables in a single database. Some of the considerations when using this technique include:

Foreign keys across databases are not supported. In other words, a primary key in a lookup table in one database cannot be referenced by a foreign key in a table in another database. This is similar to SQL Server’s restriction on cross-database foreign keys.

You cannot have transactions that span databases, even if you are using the Microsoft Distributed Transaction Coordinator on the client side. This means that you cannot roll back an insert on one database if an insert on another database fails. This restriction can be mitigated through client-side coding – you need to catch exceptions and execute “undo” scripts against the successfully completed statements.
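A minimal sketch of that client-side "undo" pattern, using two SQLite connections to stand in for the two SQL Azure databases (the Company/Color schema is hypothetical, chosen to match the example data later in the article):

```python
import sqlite3

def insert_pair(conn_a, conn_b, company, color):
    """Insert into two separate databases with a compensating 'undo'
    step, since no transaction can span both databases."""
    cur_a = conn_a.execute("INSERT INTO Company (Name) VALUES (?)", (company,))
    conn_a.commit()
    company_id = cur_a.lastrowid
    try:
        conn_b.execute(
            "INSERT INTO Color (CompanyId, Name) VALUES (?, ?)",
            (company_id, color))
        conn_b.commit()
    except sqlite3.Error:
        # The second insert failed: run the "undo" script against the
        # statement that already succeeded on the first database.
        conn_a.execute("DELETE FROM Company WHERE Id = ?", (company_id,))
        conn_a.commit()
        raise
    return company_id
```

The catch block is the "undo script" the text describes: it manually reverses the already-committed statement rather than relying on a distributed rollback.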

SQLAzureHelper Class

In order to accomplish vertical partitioning we are introducing the SQLAzureHelper class, which:

Implements forward-only, read-only cursors for performance.

Supports IEnumerable and LINQ.

Disposes of the connection and the data reader when the result set is no longer needed.

This code has the performance advantage of using forward-only, read-only cursors, which means that data is not fetched from SQL Azure until it is needed for the join.
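The article's helper is C#; a rough Python analogue of the same behavior (stream rows forward-only, then dispose of the connection once the result set is no longer needed) might look like this sketch, with SQLite standing in for SQL Azure:

```python
import sqlite3

def execute_reader(database_path, query, params=()):
    """Yield rows lazily (the analogue of a forward-only, read-only
    cursor) and close the connection once the result set is fully
    consumed or the generator is closed."""
    conn = sqlite3.connect(database_path)
    try:
        # Rows are fetched only as the caller iterates, not up front.
        for row in conn.execute(query, params):
            yield row
    finally:
        conn.close()
```

Because the generator's `finally` block runs when iteration ends (or the caller abandons it), the connection and reader are disposed of exactly as the bullet list describes.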

The result sets returned from the two SQL Azure databases are joined by LINQ below.

LINQ

LINQ is a set of extensions to the .NET Framework that encompass language-integrated query, set, and transform operations. It extends C# and Visual Basic with native language syntax for queries and provides class libraries to take advantage of these capabilities. You can learn more about LINQ here. This code uses LINQ as a client-side query processor to perform the joining and querying of the two result sets.

This code takes the result sets and joins them based on CompanyId, then selects a new class comprised of CompanyName and ColorName.
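The post's join is written in C# LINQ; as an illustrative, language-neutral sketch, the same client-side join on CompanyId projecting CompanyName and ColorName (field names are taken from the text, the data shapes are hypothetical) looks like:

```python
def join_company_colors(companies, colors):
    """Client-side equivalent of the LINQ join: match rows from the two
    result sets on CompanyId and project CompanyName + ColorName."""
    by_id = {c["CompanyId"]: c["CompanyName"] for c in companies}
    return [
        {"CompanyName": by_id[col["CompanyId"]],
         "ColorName": col["ColorName"]}
        for col in colors
        if col["CompanyId"] in by_id   # inner join: drop unmatched rows
    ]
```

Building a lookup dictionary first keeps the join linear in the size of the two result sets, which matters once each set comes back from a different database.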

Connections and SQL Azure

One thing to note is that the code above doesn’t take into account the retry scenario mentioned in our previous blog post. This has been done to simplify the example. The retry code needs to go outside of the SQLAzureHelper class to completely re-execute the LINQ query.

In our next blog post we will demonstrate horizontal partitioning using the SQLAzureHelper class.

I’m glad to see the beginning of some concrete advice for SQL Azure database partitioning. However, the forthcoming availability of 50-GB Azure databases will considerably reduce the need for partitioning in departmental-level projects.

Microsoft Codename "Dallas" is a new cloud service that provides a global marketplace for information including data, web services, and analytics. Dallas makes it easy for potential subscribers to locate a dataset that addresses their needs through rich discovery. When they have selected the dataset, Dallas enables information workers to begin analyzing the data and integrating it into their documents, spreadsheets, and databases.

Similarly, developers can write code to consume the datasets on any platform or simply include the automatically created proxy classes. Applications from simple mash-ups to complex data-driven analysis tools are possible with the rich data and services provided. Applications can run on any platform including mobile phones and Web pages. When users begin regularly using data, managers can view usage at any time to predict costs.

Dallas also provides a complete billing infrastructure that scales smoothly from occasional queries to heavy traffic. For subscribers, Dallas becomes even more valuable when there are multiple subscriptions to different datasets: although there may be multiple content providers involved, data access methods, reporting, and billing remain consistent.

For content providers, Dallas represents an ideal way to market valuable data and a ready-made solution to e-commerce, billing, and scaling challenges in a multi-tenant environment – providing a global marketplace and integration points into Microsoft’s information worker assets.

Dave Kearns asserts “Data breaches can occur when not enough attention is paid to account and access governance” in a preface to his Revealing the 'cracks' in provisioning post of 5/17/2010:

At the recent European Identity Conference, Cyber-Ark's Shlomi Dinoor (he's vice president of Emerging Technologies) emphasized to me that nothing is ever 100% in IdM. While our topic was "Security and Data Portability in the Cloud" he wanted to remind me that provisioning -- the oldest of IdM services -- was still somewhat problematic. He did this by pointing me to a recent article in Dark Reading: "Database Account-Provisioning Errors A Major Cause Of Breaches."

In the article, author Ericka Chickowski points to a recent data breach:

"Take the case of Scott Burgess, 45, and Walter Puckett, 39, a pair of database raiders who were indicted this winter for stealing information from their former employer, Stens Corp. Burgess and Puckett carried out their thievery for up to two years after they left Stens simply by using their old account credentials, which were left unchanged following their departures. Even after accounts were changed, the duo were subsequently able to use different log-in credentials to continue pilfering information."

The problem is that too often we concentrate on the mechanisms of provisioning (and even de-provisioning) without paying enough attention to account and access governance.

But even more problematic can be those accounts that aren't particularly identified with a user.

Phil Lieberman, of Lieberman Software (who was also with me in Munich), says that organizations: "have to ask themselves the question, 'Where do we have accounts? Tell me all of the places where we have accounts, and tell me all the things they use these accounts for.'" He goes on to say: "And the second question is, 'So we're using these accounts -- when were those passwords changed? And if we're using those accounts, what is the ACL [access control list] system we're using, and when was the last time we checked the ACL system?' And finally, 'We have audit logs being generated by these databases -- are we analyzing these audit logs looking for patterns that indicate abuse?'"

Lieberman and Dinoor both represent companies in the "emerging" (in quotes, because the discipline goes back dozens of years, yet it's a hot topic today) Privileged User Management (PUM) space, also called PAM (Privileged Access Management) or PIM (Privileged Identity Management). PUM is the discipline of creating, maintaining and removing critical accounts (administrator on Windows, root on Unix, the DBA on a database and so on). These accounts represent the "cracks" in provisioning through which data gets breached. If reading the article noted above gives you pause, you should check out the offerings from Cyber-Ark and Lieberman Software. It might help you sleep better at night.

The MonitorGrid cloud app runs on Azure and is wired with Linxter. Linxter allows for secure, reliable, two-way communication, regardless of the number of intermediary networks involved and regardless of whether or not they are secure.

I’m signing up to compare MonitorGrid with mon.itor.us and Pingdom. You’ll need to follow the instructions from this 00:19:07 Linxter Azure Integration Tutorial video to add the Linxter server features to your Azure project. You can download the Azure demo solution file from the Linxter Developer site.

Paspartu is French for “one size fits all”. Recently I’ve been coming across posts explaining and “promoting” the idea of spawning threads inside a worker role, each with its own unit of work to be done. All share the same idea and all of them describe the same thing.

The idea

You have some work to do, but you want to do it in the most efficient way, without underutilized resources, which is one of the benefits of cloud computing anyway.

The implementation

You have a worker process (a Worker Role on Windows Azure) which processes some data. Certainly that’s a good implementation, but it’s not a best practice. Most of the time your instance will be underutilized, unless you’re doing some CPU- and memory-intensive work and you have a continuous flow of data to be processed.

In another implementation, we created a Master-Slave pattern. A master distributes work to slave worker roles; the slaves pick up their work, do their processing, return results, and start over again. Still, in some cases that’s not the best idea either. Same cons as before: underutilized resources and a high risk of failure. If the master dies, unless the system is properly designed, your system dies. You can’t process any data.

So another approach appeared. Inside a worker role, spawn multiple threads, each running its own process or method, doing its work and returning a result. Underutilization is minimized, the Thread Pool is doing all the hard work for us, and as soon as .NET 4.0 is supported on Windows Azure, parallelization is easy and, allow me to say, mandatory. But what happens if the worker instance dies? Or restarts? Yes, your guess is correct. You lose all threads, and all the processing done up to that moment is lost unless you persist it somehow. If you instead used multiple instances of your worker role to imitate that behavior, that wouldn’t happen: you’d only lose data from the instance that died.
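A hedged sketch of the thread-per-work-item idea combined with the persistence the author calls for: each completed result is checkpointed to durable storage immediately (a local JSON file stands in for blob or table storage here, and the squaring step stands in for real processing), so a restarted instance loses at most the work in flight:

```python
import json
import queue
import threading

def worker(tasks, results, lock, checkpoint_path):
    """Drain the shared task queue; persist every completed result so a
    restart loses only the items that were mid-flight."""
    while True:
        try:
            item = tasks.get_nowait()
        except queue.Empty:
            return
        outcome = item * item  # stand-in for the real processing step
        with lock:
            results[item] = outcome
            with open(checkpoint_path, "w") as f:
                json.dump(results, f)  # durable checkpoint after each item

def process_all(items, checkpoint_path, thread_count=4):
    tasks = queue.Queue()
    for i in items:
        tasks.put(i)
    results, lock = {}, threading.Lock()
    threads = [
        threading.Thread(target=worker,
                         args=(tasks, results, lock, checkpoint_path))
        for _ in range(thread_count)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

On restart, a real worker role would reload the checkpoint and re-enqueue only the unprocessed items, which is exactly the "persist it somehow" escape hatch the paragraph mentions.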

As Eugenio Pace says, “You have to be prepared to fail,” and he’s right. At any moment your instance can die without notice, and you have to be prepared to deal with it.

Oh, boy.

So really, there is no single solution or best practice. For me, it’s best guidance. Depending on your scenario, one of the solutions above, or even a new one, can fit you better than it fits others. Every project is unique and has to be treated as such. Try to think outside the box and remember that this is deep water for everyone. It’s just that some of us swim better.

It looks like I’m only doing sessions lately :-) Here’s another slide deck for a presentation I did on the Architect Forum last week in Belgium.

Abstract: “No, this session is not about greener IT. Learn about using the RoleEnvironment and diagnostics provided by Windows Azure. Communication between roles, logging and automatic upscaling of your application are just some of the possibilities of what you can do if you know about how the Windows Azure environment works.”

With IPL Season 3 occupying the mindshare of cricket fans today, sportsmen are gearing up to put their best foot forward in the cricket arena. In this competitive scenario, technology is expected to play a key role.

Vendors too are looking to cater to this attractive market through a variety of delivery models. The Cloud is a natural fit in this overall strategy. For example, SportingMindz, a Bangalore-based organization providing analytical solutions and services to sports organizations, has partnered with Microsoft India for the IPL3 series. The firm has migrated its cricket match analysis product, 22yardz, to the Windows Azure Platform. 22yardz is currently being used by Royal Challengers Bangalore and Kings XI Punjab. [Emphasis added.]

22yardz is cricket match analysis software designed to analyze the different aspects of a live match scenario, giving detailed statistics along with opposition strategy and player analysis in all departments of the match, with seamless integration of video. The cloud model has helped SportingMindz address pain points such as performance, scalability, and availability.

Microsoft Research’s eScience Group is focused on researching ways that information technology (IT) can help solve scientific problems. Dr. Catharine van Ingen, a Partner Architect in Microsoft Research’s eScience Group, talks in this video about how she and others in Microsoft Research have worked with scientists at the University of California, Berkeley and Lawrence Berkeley National Laboratory to address the computing needs in managing Northern California’s Russian River Valley watershed. In this project, Microsoft's Windows Azure cloud-computing platform was used to help these researchers manage massive amounts of data in a scalable way.

To watch the video and learn more about how Windows Azure was used, click here.

Technology is transforming our ability to measure, monitor and model how the world behaves. This has profound implications for scientific research and can transform the way we tackle global challenges such as health care and climate change. This transformation also will have a huge impact on engineering and business, delivering breakthroughs and discoveries that could lead to new products, new businesses – even new industries.

Today, we’re proud to introduce Microsoft’s Technical Computing initiative, a new effort focused on empowering millions of the world’s smartest problem-solvers. We’ve designed this initiative to bring supercomputing power and resources to a much wider group of the scientists, engineers and analysts who are using modeling and prediction to solve some of the world’s most difficult challenges.

Our goal is to create technical computing solutions that speed discovery, invention and innovation. Soon, complicated tasks such as building a sophisticated computer model – which would typically take a team of advanced software programmers months to build and days to run – will be accomplished in an afternoon by a single scientist, engineer or analyst. Rather than grappling with complicated technology, they’ll be able to spend more time on important work.

As part of this initiative we’re also bringing together some of the brightest minds in the technical computing community at www.modelingtheworld.com to discuss the trends, challenges and opportunities we share. Personally, I think this site provides a great interactive experience with fresh, relevant content—I’m incredibly proud of it. Please tune in and join us—we welcome your ideas and feedback.

In terms of technology, the initiative will focus on three key areas:

Technical computing to the cloud: Microsoft will help lead the way in giving scientists, engineers and analysts the computing power of the cloud. We’re also working to give existing high-performance computing users the ability to augment their on-premises systems with cloud resources that enable ‘just-in-time’ processing. This platform will help ensure processing resources are available whenever they are needed—reliably, consistently and quickly.

Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores. But most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test, and troubleshoot. We know that a consistent model for parallel programming can help more developers unlock the tremendous power in today’s computers and enable a new generation of technical computing. We’re focused on delivering new tools to automate and simplify writing software through parallel processing from the desktop… to the cluster… to the cloud.

Develop powerful new technical computing tools and applications: Scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power using simpler tools to increase the speed of their work, and we’re building a platform with this objective in mind. We expect that these efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration.
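The second focus area, a consistent model for parallel programming, can be sketched in miniature: the loop body is written once and a runtime spreads the iterations across workers (Python's executor pool stands in here for the .NET parallel libraries the initiative describes; this is an illustration of the idea, not Microsoft's tooling):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(func, items, workers=4):
    """A minimal 'consistent model' for data parallelism: the caller
    supplies only the per-item work; scheduling across workers is the
    runtime's job, and result order matches input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))
```

The point of such a model is that moving from 4 workers on a desktop to a cluster or the cloud changes the executor, not the loop body.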

The path we’ve taken to arrive at this initiative is built on a foundation of great technology and underpinned by a strong vision for bringing the power of technical computing to those who need it most. Microsoft is committed to this business, and I am looking forward to working with our industry partners and customers to help bring about the next wave of discovery.

The new group falls under Bob Muglia, who is President of Microsoft’s Server and Tools business, but will work closely with various groups in Microsoft Research, company officials said. The three areas of focus of the group and the broader initiative will be cloud, parallel-programming and new technical computing tools. There is a new technical computing community Web site, www.modelingtheworld.com, launching as part of the effort.

If you’re interested in particulars regarding the technical tools, here’s what the Softies are saying (via a spokesperson):

“Windows HPC (High Performance Computing) Server 2008 and all of the capabilities in Visual Studio 2010 that allow developers to take advantage of parallelism (e.g., the parallel profiler and debugger, and the ConcRT (concurrency) runtime) are examples of technology the Technical Computing group has already delivered. In the future we’ll be delivering Technical Computing services on top of Azure that will integrate with desktop applications from Microsoft and partners.” [Emphasis added.]

Microsoft has been quietly building a team of hundreds of people with the mission of giving the world's scientists and engineers the ability to develop and work with complex models of natural and manmade systems much more quickly and easily than they can today.

"It's one of the largest-growth teams in the company right now, and overall one of the biggest bets that we're making strategically," said Bill Hilf, a Microsoft general manager working on the Technical Computing initiative.

As part of the Technical Computing initiative, Microsoft says it's developing a technology platform that will help developers build desktop applications that can tap into large volumes of data and easily harness powerful computers in server clusters and data centers. In addition, the company is developing a new set of technical computing services for its Azure cloud-computing system, to help scientists make better use of the company's worldwide data centers. [Emphasis added.]

The team is also working on ways of developing software better tuned for machines with multiple processors, or computing cores. …

The role of cloud management solutions in the enterprise world is becoming increasingly important. With the interest and adoption of cloud in the enterprise steadily rising, solutions that help an organization to effectively harness, orchestrate, and govern their use of the cloud are floating to the top of the needs list. Developing and delivering solutions in this arena is no small task, and one made even tougher by enterprise user expectations and requirements. Just what are some of the enterprise requirements and expectations for cloud management solutions?

First things first, users expect cloud management solutions to be broadly applicable. What do I mean by that? Take for instance a recent discussion I had with an enterprise user about a management solution for cloud-based middleware platforms. The solution that was the topic of our discussion enables users to create middleware environments, virtualize them, deploy them into a cloud environment, and manage them once they are up and running.

During the course of that discussion, the user told me: "I want one tool to do it all." In this case, all referred to the ability to support multiple virtualization formats, varying hardware platforms, different operating system environments, all cloud domains, and a plethora of middleware software. Of course, the user acknowledged it was a bit of an overreach because when a tool "does it all" it often means that it does nothing, and when I pressed a bit more the real desire was for a single, unified management interface. This of course points back to the notion of open cloud solutions that I wrote about a while back. You will never get a tool that does it all, but if you get open tools, chances are you can build a centralized interface that exposes the capability of many tools, and thus logically presents a "single tool that does it all" to your end users.
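That "single tool that does it all" built from open tools is essentially a facade over many narrower tools; a minimal sketch with hypothetical tool adapters (none of these names come from the discussion above):

```python
class CloudManagementFacade:
    """A single, unified interface composed from several narrower
    tools: the 'logical single tool' presented to end users. Each
    adapter only needs to expose the operations the facade calls."""

    def __init__(self, hypervisor_tool, os_tool, middleware_tool):
        self._tools = {
            "vm": hypervisor_tool,
            "os": os_tool,
            "middleware": middleware_tool,
        }

    def provision(self, layer, spec):
        # Dispatch to whichever underlying tool owns this layer.
        return self._tools[layer].provision(spec)
```

The open-tools argument is what makes this composable: as long as each tool exposes a usable interface, the central facade can grow new layers without any single vendor "doing it all."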

In many cases, enterprises adopt cloud computing as a more efficient and agile approach to something they already do today. For example, to put it into the context of the part of the cloud I deal with, users may leverage the cloud as a means to stand up and tear down application environments in a much faster and simpler manner than their traditional approach. No one will argue that faster and simpler is good, but that does not mean you can or should sacrifice the insight into and control over these processes that the organization requires. If the enterprise requires a request/approval workflow process for commissioning and decommissioning application environments, the cloud management solution must provide the necessary hooks. More generally, cloud management solutions must enable integration into an enterprise's governance framework. Without this integration, the truth is such a solution is likely inapplicable for enterprise use.

If I have learned one thing from users over the past year with respect to enterprise-ready cloud management solutions it is this: Auditability is huge! Organizations want to know who is doing what, when they are doing it, how long they are doing it for, and much more. Users pretty well assume that a cloud management solution provides insight into these kinds of metrics. The obvious use case here is the ability to track cloud usage statistics among various users and groups to facilitate cost allocation and/or chargeback throughout the enterprise. Another, perhaps less obvious, use case concerns configuration change management. The ability to very quickly determine what was changed, when it was changed, and who changed it is crucial when a cloud management solution and the underlying cloud is distributed among a wide set of enterprise users.
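The who/what/when tracking described here can be sketched as a small audit decorator wrapped around management operations (the names are illustrative, not from any product mentioned):

```python
import datetime

def audited(action, audit_log):
    """Record who performed which action, and when, for every
    management operation: the auditability requirement above."""
    def decorator(func):
        def wrapper(user, *args, **kwargs):
            audit_log.append({
                "user": user,
                "action": action,
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return func(user, *args, **kwargs)
        return wrapper
    return decorator
```

With every operation routed through such a wrapper, both use cases above fall out of the same log: usage statistics per user/group for chargeback, and a change history for configuration change management.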

The fact is that we are in the early phase of the emerging need for cloud management solutions, and basic requirements and expectations are still taking shape. The few listed here are just a start, and among those I hear most commonly. It will be interesting to watch these expectations shift and grow, especially as enterprises adopt federated, highly heterogeneous cloud environments. I certainly welcome any feedback or insight you may have into the need for cloud management solutions.

Lori MacVittie claims “In cloud computing environments the clock literally starts ticking the moment an application instance is launched. How long should that take?” in her When (Micro)Seconds Matter post to the F5 DevCentral blog:

The term “on-demand” implies right now. In the past, we used the term “real-time” even though what we really meant in most cases was “near time”, or “almost real-time”. The term “elastic” associated with scalability in cloud computing definitions implies on-demand. One would think, then, that this means that spinning up a new instance of an application with the intent to scale a cloud-deployed application to increase capacity would be a fairly quick-executing task.

That doesn’t seem to be the case, however.

Dealing with unexpected load is now nothing more than a 10-minute exercise in easily, seamlessly integrating cloud and data center services.

A Twitter straw poll on this subject (completely unscientific) indicated an expectation that this process should (and for many does) take approximately two minutes in many cloud environments. Minutes, not seconds. Granted, even that is still a huge improvement over the time it’s taken in the past. Even if the underlying hardware resources are available there’s still all of the organizational IT processes that need to be walked through – requests, approvals, allocation, deployment, testing, and finally the actual act of integrating the application with its supporting network and application delivery network infrastructure. It’s a time-consuming process and is one of the reasons for all the predictions of business users avoiding IT to deploy applications in “the cloud.”

IT capacity planning strategy has been to anticipate the need for additional capacity early enough that the resources are available when the need arises. This has typically resulted in over-provisioning, because it’s based on the anticipation of need, not actual demand. It’s based on historical trends that, while likely accurate, may over- or under-estimate the amount of capacity required to meet historical spikes in demand.

IS “FASTER” GOOD ENOUGH?

Cloud computing purports to provide capacity on-demand and allow organizations to better manage resources to mitigate the financial burden associated with over-provisioning and the risks to the business by under-provisioning. The problem is that provisioning resources isn’t an instantaneous process. At a minimum the time associated with spinning up a new instance is going to delay increasing capacity by minutes.

Virtual images don’t (yet) boot up as quickly as would be required to meet an “instant on” demand. The processes by which the application is inserted into the network and application delivery network, too, aren’t instantly executed as there are a series of steps that must occur in the right order to ensure accessibility.

An instance that’s up but not integrated into the ecosystem is of little use, after all, and the dangers associated with missing a critical security step increase risk unnecessarily.

The end result is that capacity planning in the cloud remains very much an anticipatory game with operators attempting to prognosticate from historical trends when more capacity will be required. Operations staff needs to be just as vigilant as they are today in their own data centers to ensure that when the demand does hit a cloud-based application the capacity to meet the demand is already available. If the cloud computing environment requires a “mere ten minutes” to provision more capacity, then the operations staff needs to be ten minutes ahead of demand. It needs to project out those ten minutes and anticipate whether more capacity will be required or not. …
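The "stay ten minutes ahead of demand" point can be made concrete with a toy projection: trigger provisioning when a naive linear trend says demand will exceed capacity within the provisioning lead time (the one-minute sampling interval, requests-per-minute units, and linear model are all hypothetical simplifications):

```python
def should_provision(samples, capacity_rpm, lead_time_minutes):
    """Project demand lead_time_minutes ahead using a naive linear
    trend over per-minute samples, and signal scale-out early enough
    that the new instance is ready when demand arrives."""
    # Average change per minute across the recorded samples.
    rate = (samples[-1] - samples[0]) / (len(samples) - 1)
    projected = samples[-1] + rate * lead_time_minutes
    return projected > capacity_rpm
```

With demand rising 10 requests/minute and a 10-minute lead time, the operator must act while current load is still 100 requests/minute below the ceiling, which is precisely the anticipatory game the paragraph describes.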

Lori continues her argument and concludes:

If this test isn’t part of your standard “cloud” acquisition process, it should be, because “fast enough” is highly dependent on whether you need capacity available in the next hour, the next minute, or the next second.

Battle of the Public Clouds: Who is Winning? – April 22, 2010

Many prognostications about the public cloud focus on three key vendors: Amazon, Google and Microsoft. This week on cloudchasers, we’ll check out the numbers, platforms and competing visions as we look at each vendor’s place in the market today and in the future. Also, with so many companies jumping on the cloud bandwagon, are there others who are more appropriate for this list? Who do YOU think will be the winner? Will there be just one winner?

When he joined Microsoft, Microsoft's chief software architect Ray Ozzie got a chance to take a step back and look at the technology industry.

What he saw was that the PC wasn't the centre of the computing universe any more – but like Nvidia's Jen-Hsun Huang, he told the Future in Review conference this week that he doesn't think it's going away any time soon either - he also had words to say about the cloud, online privacy, HTML 5 and Apple.

"The world that I see panning out is one where individuals don't shift from 'I'm using exclusively this one thing called a PC as a Swiss army knife for everything I do' to using a different Swiss army knife. The beauty of what's going on in devices is you can imagine a device.

"Previously you could imagine software and build it but hardware was very hard and took a long time to build. Now you can imagine end-to-end device services.

"So there's probably a screen in the car that federates with the phone when you bring it into the car. Will we have a device with us that's always on? Yes. We call it a phone but it's a multi-purpose device.

"Will we also carry something of a larger form factor that we can quickly type on? For many of us, the answer is yes." And what will it look like? "The clamshell style of device is a very useful thing and I think it will be with us for ever. I think there is a role for the desktop too…"

Office, Docs and better productivity

Ozzie's pet projects at Microsoft include the Azure cloud service and the social computing tools like the Spindex social aggregator, the Outlook Social Connector and Facebook Docs (which all come out of the new Fuse Labs Microsoft site near his home town of Boston). …

Mary continues the story and concludes

With only a hint of irony, he says "Facebook is doing us all a favour by pushing the edge and causing the conversations to be very broad."

Should Microsoft be moving faster in mobile, in browsers, in the cloud, into this future world? "We're very impatient in the technology industry," Ozzie points out. "We get very enamoured with the next shiny object. Let's get real here. How many years have any of these things actually been out? How many years have we all been using these pocket internet companions?

"It's actually been a relatively small number of years. We haven't even seen the TV get lit up yet as a communication device; we haven't seen all the screens on the wallbeing lit up as devices. Every single one of these is going to get lit up as a similar kind of device."

Speaking in Australia, noted cryptographer and IT security pioneer Whit Diffie commented on Cloud Computing's potential to destroy current security approaches, but improve security overall for the masses.

"At worst [cloud computing] will fundamentally destroy the current security paradigm," he said. "But on the other hand it's going to substantially improve the average level of security of ordinary shleps who didn't pay any attention to the matter."

Diffie's presentation was another example of the global nature of Cloud Computing, and was made as the world turns its eyes toward Europe next month for Cloud Expo Europe in Prague June 21-22. He became famous in the 70s with breakthrough public-key research, and served as a Distinguished Engineer at Sun for almost 20 years (and was also a Sun Fellow). He was one of those who didn't make the transition from Sun to Oracle, and he now serves as a VP with ICANN.

Diffie said he believes that "cloud computing will become very widespread," and that "there's going to be a tremendous security gain by pushing things into standard security practices" if companies start to adopt government security contract models. "Contracts will have to occur very fast" to cater to demand for services needed for only a few minutes or fractions of a second. "You've got to know whether those people are capable of fulfilling the contract. They've gone through a set of bureaucratic hurdles so that all of a sudden if a secret contract comes up it can be awarded overnight - there's very little example of that in the civilian world."

He also warned against a rise of that seemingly invariable tendency for any business that's run by humans: proprietary methods. "Above the (open-source) GPL, everything Google does is a trade secret," he noted.

[IT is going to become] a manager of a dynamic supply chain of internal and external resources to deliver business services to internal and external clients.
–Ajei Gopal, CA World day two keynote

CA’s a favorite whipping boy for IT insiders. Their giant, long-lived portfolio and name-change-inducing events get most people to snicker when you mention “CA.” They’re a classic big-spend enterprise vendor: comprehensive, enterprise-priced, and rarely innovation-leading. So their string of acquisitions of relatively young and hip companies in the recent past has left folks befuddled. What exactly is CA doing with 3Tera, Nimsoft, NetQoS, Oblicore, and others?

Stated reasons have been access to new markets (SMB and MSP with Nimsoft) and jamming in cloud (3Tera and, to an extent, NetQoS). The first few days of CA World in Las Vegas have reinforced that messaging: CA is all over the cloud, but with the experienced hand of an elder company. They’re not going to shatter the peace of your glass house…unless you want them to. The cloud is here, but we’ll trickle it in, or gush it in – you pick the speed. Hybrid clouds are the thing, getting around security concerns like “I don’t know where my data is.”

Cloud Comfort

The tone and agenda so far indicate that CA believes its customers are afraid of, and unsure about, using cloud technologies. Both keynotes have revolved less around technology and more around soothing IT about becoming cloud-friendly.

It’s like the old folklore about instant messaging in enterprises. “No one in my shop is using IM!” CIOs would decry as employees installed and used public IM clients by the thousands. More important, the ease of installing that technology and the heightened communication it brought made IM invaluable, no matter how non-enterprise it was (that is, not under the control of the corporation for purposes of security, compliance, and SLAs).

So far, CA’s doing a good job talking the cloud talk – even with rational insertions of technologies that do the visionary stuff. At least in presenting, they seem to understand the mapping of cloud practices – mass-automating, charge back-cum-metered billing, etc. – to existing IT management practices and their own technologies. Most folks CA’s size would skim over the actual products you used to put cloud theory into practice. …

Michael concludes:

You can smell big consulting deals looming around there: 6 months to discover all the IT services a company has and then sort out road-maps for cloudizing – or not – each. Then a cycle for acquiring technologies to manage and run the cloud, and so on.

How do you make an end-run around that buzz-kill cycle? No one really knows at this point. The middleware-plus-infrastructure portfolio that VMware is building up starts to look interesting: slapping a bunch of lightweight interfaces on lumbering legacy and, hopefully, allowing new development to keep legacy IT from turning into schedule friction. Or you could isolate the old from the new. Who knows? Being caught in this quagmire is the whole point. The guiding question is how any cloud-tooler, like CA, is going to help prevent IT from getting stuck with more legacy IT, cloud-based or not.

Paulo Del Nibletto reports “Company's new CEO outlines a win at all costs cloud computing strategy” as a preface to his CA adds to name and its cloud strategy story of 5/17/2010 for ITBusiness.ca:

This year's CA World is the second time in the last three events where CA augmented its corporate name. Company CEO Bill McCracken announced to the more than 7,000 attendees that CA will now be called CA Technologies.

The former IBM Corp. (NYSE: IBM) PC boss also revealed CA's cloud strategy, which will be to bring security to the cloud – an issue that many analysts have said has stunted the cloud's growth.

McCracken told the story of how CA considered a switch from SAP to Salesforce.com in the cloud to solve its dilemma of providing company employees with sales data on a unified system across all of its geographies.

McCracken was short on specifics, leaving those for the rest of the conference, but he was clear on CA's go-to-market cloud strategy: CA will leave it up to the customer.

McCracken said that whether it's on-premises software or a Software-as-a-Service model, customers will be asking for different kinds of cloud services, and that CA would offer solutions either through channel partners or managed service providers that add value, or through internal direct sales.

“For us it's going to be on premise or SaaS or both. It could affect our base. We know that, but it will happen, so we need to make it happen, and that is one of the reasons why we bought Nimsoft. Customers decide. We will not decide for them,” he said.

McCracken also said that virtualization will be an integral part of CA's strategy. He said that to support cloud services customers need to virtualize first. CA announced three new programs for this line of business: Virtual Automation, Virtual Assurance and Virtual Configuration that will manage physical and virtual machines and help customers move from the glass house to the virtual world and eventually the cloud.

Learn about Windows Azure and Windows Phone development together in this day packed with training and coding. Register now for your chance to learn the latest development techniques with Windows Azure and Windows Phone and your chance to win a Zune HD.

Light breakfast and lunch included, followed by a networking reception.

This 4th annual San Francisco enterprise software development conference, designed for team leads, architects and project managers, is back! Bloggers wrote about 32 of the 60 sessions at last year’s event; read this article to see what the attendees said. There is no other event in the US with similar opportunities for learning, networking, and tracking innovation occurring in the Java, .NET, Ruby, SOA, Agile, and architecture communities.

CA is expanding its partnership with Cisco and unveiling several new management products to improve cloud computing and virtualization deployments, the company said in a series of announcements Monday at CA World in Las Vegas. The new products are based partly on technology acquired in CA's recent buying spree, in which the company purchased vendors 3Tera, Oblicore and Cassatt.

CA also said it is changing its name slightly from CA, Inc. to CA Technologies, to reflect a broad strategy of managing "IT resources from the mainframe to the cloud, and everything in between."

CA's partnership with Cisco includes integration of CA system management software with Cisco's Unified Computing System, letting IT pros control the Cisco technology from within the CA management interface.

Separately from the Cisco partnership, CA is announcing a variety of software tools to manage virtual computing resources and cloud-based systems.

While 60% of CA's $4.2 billion business is related to the mainframe, the company is making cloud computing one of its main focuses, along with security, software-as-a-service-based IT management and virtualization management, says Tom Kendra, vice president of enterprise products and solutions.

Managing the new virtualization layer and cloud-based services is no easy task, in part because the technologies have been installed in addition to -- rather than as replacements for -- existing IT infrastructure, and require a heterogeneous management approach, Kendra says.

CA's cloud strategy centers around a new "Cloud-Connected Management Suite" of four products: Cloud Insight, for assessing how internal and external IT services relate to business priorities; Cloud Compose, for creating, deploying and managing composite services in a cloud; Cloud Optimize, which optimizes use of both internal and external IT resources for cost and performance; and Cloud Orchestrate, which "will provide workflow control and policy-based automation of changes to service infrastructures."

CA also announced three virtualization products, which are Virtual Assurance, for monitoring, event correlation and fault and performance management in virtual environments; Virtual Automation, which provides automated self-service virtual machine life-cycle management; and Virtual Configuration, which manages sprawl, and tracks configuration changes to meet regulatory compliance and audit needs.

The products will start hitting the market in June, but CA did not offer more specific availability or pricing information.

In his keynote address at CA World 2010, Bill McCracken, chief executive officer of CA Technologies, told 7,000 attendees that the technology industry is at an inflection point, and that business will embrace virtualization and cloud computing in order to remain competitive.

"When economic conditions, technology advances, and customer needs align, transformation happens," said McCracken. "As we emerge from the global economic downturn, we have a tremendous opportunity to leap forward and embrace change, or risk being left behind."

McCracken also described a vision for how all businesses will evolve. "People still ask if I think the cloud is really going to happen. I say no; I don't think it's going to happen. I know it is going to happen because it is happening now. Virtualization and cloud computing will enable businesses to adapt to rapidly changing market and customer needs. We will be right there to help our customers gain a competitive advantage as this critical inflection point in our industry takes hold."

"Running IT in a cloud-connected enterprise will be more like running a supply chain, where organizations can tap into the IT services as needed - specifying when, where and precisely how they are delivered," he said. "This has never been more important, because business models no longer change every few years or even once a year. Cycles are increasingly shorter, which puts a whole new set of demands on the CIO and on the organization."

McCracken also spoke about the evolution of the company name from CA to CA Technologies.

"The name CA Technologies acknowledges our past and points to our future as a leader in delivering the technologies that will revolutionize the way IT powers business agility," said McCracken. "We are executing on a bold strategy, where IT resources -- from the cloud to the mainframe and everything in between -- are delivered with unprecedented levels of flexibility."

IT professionals and customers from around the world are attending CA World to get insights into what is happening in the IT management space and to learn how to best leverage CA Technologies to maximize their organization's IT capabilities. The user conference kicks off today and ends May 20.

In this newsletter, we are excited to announce Amazon CloudFront's access log feature is now enabled for streaming distributions - read more below. This month's newsletter also highlights AWS's global expansion with our new Singapore Region, Amazon VPC availability in Europe, and Amazon RDS availability in Northern California. In May and June we have a full calendar of events taking place around the world, plus many virtual events; we hope you can join us.

Just Announced: Amazon CloudFront Access Logs For Streaming

We're excited to announce Amazon CloudFront's access log feature is now enabled for streaming distributions. Now, every time you stream a video using AWS’s easy-to-use content delivery service, you can capture a detailed record of your viewer’s activity. In addition, Amazon CloudFront will record the edge location serving the stream, the viewer's IP address, the number of bytes sent, and several other data elements. There are no additional charges for access logs, beyond normal Amazon S3 rates to write, store and access the logs. You can read more about this new feature. …
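For readers who want to process these new streaming logs: CloudFront delivers them as tab-separated text records with a header naming the fields. A minimal parsing sketch follows; the field names used in the example are illustrative assumptions, not the official schema.

```python
def parse_log_line(line, field_names):
    """Split one tab-separated access-log record into a dict.

    field_names would come from the '#Fields:' header of the log file;
    the names used in the test below are assumptions for illustration.
    """
    values = line.rstrip("\n").split("\t")
    return dict(zip(field_names, values))
```

Each parsed record can then be filtered or aggregated (e.g. total bytes per edge location) with ordinary dictionary operations.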

I saw a spate of recent articles that had some pretty amazing statistics and news bits on Amazon Web Services and competitors. In no particular order:

A survey of 600 developers by Mashery reported that 69% of respondents said the Amazon, Google, and Twitter APIs were the most popular ones they were using.

Even the Federal Government is turning to the Amazon Cloud to save money. Sam Diaz reports that the move of Recovery.gov will save hundreds of thousands of dollars. We found tremendous savings at Helpstream from our move to the Amazon Cloud.

Derrick Harris at GigaOm suggests it’s time for Amazon to roll out a PaaS to remain competitive. As an aside, are you as tired of all the “*aaS” acronyms as I am? Are they helping us understand anything better? BTW, I think the move Harris suggests would be the wrong one for Amazon because it would lead to them competing with customers who are adding PaaS layers to Amazon. They should stay low-level and as language/OS agnostic as possible in order to remain the Switzerland of the cloud. Let Heroku-like offerings be built on top of the Amazon infrastructure by others. Amazon doesn’t need to add a PaaS, and they don’t need to add more value because they’re afraid of being commoditized. As we shall see below, they are the commoditizers everyone else needs to be afraid of.

Bob continues by citing more paeans to Amazon’s prowess in the IaaS market and concludes:

What does it all mean?

If nothing else, Amazon has a pretty amazing lead over other would-be Cloud competitors. And they’re building barriers to entry of several kinds:

Nobody but Amazon has the experience of running a Cloud service on this scale. They can’t help but be learning important things about how to do it well that potential competitors have yet to discover.

There is a growing community of developers whose Cloud education is all about Amazon. Software Developers as a group like to talk a good game about learning new things, but they also like being experts. When you ask them to drop their familiar tools and start from scratch with something new you take away their expert status. There will be a growing propensity among them to choose Amazon for new projects at new jobs simply because that is what they know.

Economies of Scale. Consider what kind of budget Amazon’s competitors have to pony up to build a competing Cloud infrastructure. A couple of small or medium-sized data centers won’t do it. Google already has tons of data centers, but many other companies that haven’t had much Cloud presence are faced with huge up front investments that grow larger day by day to catch up to Amazon.

Network effects. There is latency moving data in and out of the Cloud. It is not significant for individual users, but it is huge for entire applications. The challenge is to move all the data for an application during a maintenance window despite the latency issue. However, once the data is already in the Cloud, latency to other applications in the same Cloud is low. Hence data is accretive. The more data that goes into a particular Cloud, the more data wants to accrete to the same Cloud if the applications are at all interconnected.

It’s going to be interesting to watch it all unfold. It’s still relatively early days, but Amazon’s competitors need to rev up pretty soon. Amazon is stealing the Cloud at an ever-increasing rate.

AzureDirectory

AzureDirectory smartly uses local file storage to cache files as they are created and automatically pushes them to blob storage as appropriate. Likewise, it smartly caches blob files back to the client when they change. This provides a nice blend of just-in-time syncing of data local to indexers or searchers across multiple machines.

With the flexibility that Lucene provides over data in memory versus storage, and the just-in-time blob transfer that AzureDirectory provides, you have great control over the composability of where data is indexed and how it is consumed. To be more concrete: you can have 1..N Worker roles adding documents to an index, and 1..N searcher Web roles searching over the catalog in near real time.

(Remember that each Worker and Web role incurs individual compute charges of $0.12/hour.)

Thermous continues with sample code and a reference to “a LINQ to Lucene provider on CodePlex, which allows you to define your schema as a strongly typed object and execute LINQ expressions against the index.”

SQL Azure currently supports 1 GB and 10 GB databases. If you want to store larger amounts of data in SQL Azure you can divide your tables across multiple SQL Azure databases. This article will discuss how to use a middle layer to join two tables on different SQL Azure databases using LINQ. This technique vertically partitions your data in SQL Azure.

In this version of vertically partitioning for SQL Azure we are dividing all the tables in the schema across two or more SQL Azure databases. In choosing which tables to group together on a single database you need to understand how large each of your tables are and their potential future growth – the goal is to evenly distribute the tables so that each database is the same size.

There is also a performance gain to be had from partitioning your database. Since SQL Azure spreads your databases across different physical machines, you can get more CPU and RAM resources by partitioning your workload. For example, if you partition your database across ten 1 GB SQL Azure databases, you get 10X the CPU and memory resources. There is a case study (found here) by TicketDirect, which partitioned its workload across hundreds of SQL Azure databases during peak load.

When partitioning your workload across SQL Azure databases, you lose some of the features of having all the tables in a single database. Some of the considerations when using this technique include:

Foreign keys across databases are not supported. In other words, a primary key in a lookup table in one database cannot be referenced by a foreign key in a table in another database. This is similar to SQL Server’s restriction on cross-database foreign keys.
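Because the engine cannot enforce a cross-database foreign key, the middle layer has to do it. A hedged sketch of that check, with hypothetical table and column names, might look like this:

```python
def validate_foreign_keys(rows, key_column, lookup_ids):
    """Verify that every row's key_column value exists among the primary
    keys fetched from the lookup table in the *other* database, mimicking
    what a cross-database foreign key would enforce.

    Raises ValueError on the first orphaned row; returns the rows if all
    references are valid. Names here are illustrative, not from the article.
    """
    lookup = set(lookup_ids)
    for row in rows:
        if row[key_column] not in lookup:
            raise ValueError(f"orphaned row (no matching lookup key): {row!r}")
    return rows
```

The lookup IDs would be fetched once from the lookup database and reused, so the check costs a single extra query per batch.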

You cannot have transactions that span databases, even if you are using the Microsoft Distributed Transaction Coordinator on the client side. This means that you cannot roll back an insert on one database if an insert on another database fails. This restriction can be worked around with client-side code – you need to catch exceptions and execute “undo” scripts against the successfully completed statements.
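The "catch and undo" approach described above amounts to a compensating action. A minimal sketch follows; the in-memory FakeDb class merely stands in for two separate SQL Azure connections, and all names are illustrative.

```python
class FakeDb:
    """In-memory stand-in for one database connection (illustrative only)."""

    def __init__(self, fail=False):
        self.rows = []
        self.fail = fail

    def insert(self, row):
        if self.fail:
            raise RuntimeError("insert failed")
        self.rows.append(row)

    def delete(self, row):
        self.rows.remove(row)


def insert_with_compensation(db_a, db_b, row_a, row_b):
    """Insert into two databases; if the second insert fails, run an
    explicit "undo" against the first, since no cross-database
    transaction can roll it back for us."""
    db_a.insert(row_a)
    try:
        db_b.insert(row_b)
    except Exception:
        db_a.delete(row_a)  # compensating "undo" script
        raise
```

Note that compensation is weaker than a true transaction: another reader could observe the first insert before it is undone.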

SQLAzureHelper Class

In order to accomplish vertical partitioning we are introducing the SQLAzureHelper class, which:

Implements forward-only, read-only cursors for performance.

Supports IEnumerable and LINQ.

Disposes of the connection and the data reader when the result set is no longer needed.

This code has the performance advantage of using forward-only, read-only cursors, which means that data is not fetched from SQL Azure until it is needed for the join.
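The behavior described, rows streamed forward-only and the connection disposed once the result set is consumed, maps naturally onto a generator. The sketch below is in Python against SQLite purely for illustration (the original SQLAzureHelper is C#); the function and parameter names are assumptions.

```python
from contextlib import closing

def execute_reader(open_connection, query, params=()):
    """Yield rows one at a time (forward-only), and dispose of the
    connection once the caller has consumed the result set.

    open_connection is any callable returning a DB-API connection.
    """
    with closing(open_connection()) as conn:
        cursor = conn.cursor()
        cursor.execute(query, params)
        for row in cursor:
            yield row  # rows are fetched lazily, not all at once
```

Because the generator only advances when the consumer asks for the next row, a downstream join pulls data exactly when it needs it, which is the same property the article attributes to SQLAzureHelper.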

The result sets returned from the two SQL Azure databases are joined by LINQ [below].

LINQ

LINQ is a set of extensions to the .NET Framework that encompass language-integrated query, set, and transform operations. It extends C# and Visual Basic with native language syntax for queries and provides class libraries to take advantage of these capabilities. You can learn more about LINQ here. This code uses LINQ as a client-side query processor to perform the joining and querying of the two result sets.

This code takes the result sets and joins them based on CompanyId, then selects a new class comprised of CompanyName and ColorName.
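The join described above (match on CompanyId, project CompanyName and ColorName) can be mimicked in any language once both result sets are client-side. An illustrative sketch, with the row shapes assumed rather than taken from the elided C# sample:

```python
def join_results(companies, colors):
    """Client-side join on CompanyId, projecting CompanyName and ColorName,
    mirroring what the article's LINQ query does with two result sets."""
    # Index the color rows by CompanyId so the join is a hash lookup
    # rather than a nested scan.
    colors_by_company = {}
    for color in colors:
        colors_by_company.setdefault(color["CompanyId"], []).append(color)
    return [
        {"CompanyName": company["CompanyName"], "ColorName": color["ColorName"]}
        for company in companies
        for color in colors_by_company.get(company["CompanyId"], [])
    ]
```

This is essentially what LINQ's Join operator does under the hood: build a lookup on one side's keys and probe it with the other side.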

Connections and SQL Azure

One thing to note is that the code above doesn’t take into account the retry scenario mentioned in our previous blog post. This has been done to simplify the example. The retry code needs to go outside of the SQLAzureHelper class to completely re-execute the LINQ query.
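Wrapping the whole query in a retry loop, as recommended above, might look like the following sketch; the exception type, attempt count, and delay are assumptions for illustration.

```python
import time

def with_retries(action, attempts=3, delay=0.5):
    """Re-run the whole query (action) on transient connection errors.

    action is a zero-argument callable that re-executes the complete
    query from scratch, which is why the retry lives outside any
    helper class that holds a half-consumed result set.
    """
    for attempt in range(attempts):
        try:
            return action()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            time.sleep(delay)
```

The key design point matches the article: because results stream lazily, a dropped connection can surface mid-iteration, so the retry must restart the entire query rather than resume it.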

In our next blog post we will demonstrate horizontal partitioning using the SQLAzureHelper class.

I’m glad to see the beginning of some concrete advice for SQL Azure database partitioning. However, the forthcoming availability of 50-GB Azure databases will considerably reduce the need for partitioning in departmental-level projects.

Microsoft Codename "Dallas" is a new cloud service that provides a global marketplace for information including data, web services, and analytics. Dallas makes it easy for potential subscribers to locate a dataset that addresses their needs through rich discovery. When they have selected the dataset, Dallas enables information workers to begin analyzing the data and integrating it into their documents, spreadsheets, and databases.

Similarly, developers can write code to consume the datasets on any platform or simply include the automatically created proxy classes. Applications from simple mash-ups to complex data-driven analysis tools are possible with the rich data and services provided. Applications can run on any platform including mobile phones and Web pages. When users begin regularly using data, managers can view usage at any time to predict costs.

Dallas also provides a complete billing infrastructure that scales smoothly from occasional queries to heavy traffic. For subscribers, Dallas becomes even more valuable when there are multiple subscriptions to different datasets: although there may be multiple content providers involved, data access methods, reporting and billing remains consistent.

For content providers, Dallas represents an ideal way to market valuable data and a ready-made solution to e-commerce, billing, and scaling challenges in a multi-tenant environment – providing a global marketplace and integration points into Microsoft’s information worker assets.

Dave Kearns asserts “Data breaches can occur when not enough attention is paid to account and access governance” in a preface to his Revealing the 'cracks' in provisioning post of 5/17/2010:

At the recent European Identity Conference, Cyber-Ark's Shlomi Dinoor (he's vice president of Emerging Technologies) emphasized to me that nothing is ever 100% in IdM. While our topic was "Security and Data Portability in the Cloud" he wanted to remind me that provisioning -- the oldest of IdM services -- was still somewhat problematic. He did this by pointing me to a recent article in Dark Reading: "Database Account-Provisioning Errors A Major Cause Of Breaches."

In the article, author Ericka Chickowski points to a recent data breach:

"Take the case of Scott Burgess, 45, and Walter Puckett, 39, a pair of database raiders who were indicted this winter for stealing information from their former employer, Stens Corp. Burgess and Puckett carried out their thievery for up to two years after they left Stens simply by using their old account credentials, which were left unchanged following their departures. Even after accounts were changed, the duo were subsequently able to use different log-in credentials to continue pilfering information."

The problem is that too often we concentrate on the mechanisms of provisioning (and even de-provisioning) without paying enough attention to account and access governance.

But even more problematic can be those accounts that aren't particularly identified with a user.

Phil Lieberman, of Lieberman Software (who was also with me in Munich), says that organizations: "have to ask themselves the question, 'Where do we have accounts? Tell me all of the places where we have accounts, and tell me all the things they use these accounts for.'" He goes on to say: "And the second question is, 'So we're using these accounts -- when were those passwords changed? And if we're using those accounts, what is the ACL [access control list] system we're using, and when was the last time we checked the ACL system?' And finally, 'We have audit logs being generated by these databases -- are we analyzing these audit logs looking for patterns that indicate abuse?'"

Lieberman and Dinoor both represent companies in the "emerging" (in quotes, because the discipline goes back dozens of years, yet it's a hot topic today) Privileged User Management (PUM) space, also called PAM (Privileged Access Management) or PIM (Privileged Identity Management). PUM is the discipline to create, maintain and remove critical accounts (administrator on Windows, root on Unix, the DbA on a database and so on). These accounts represent the "cracks" in provisioning through which data gets breached. If reading the article noted above gives you pause, you should check out the offerings from Cyber-Ark and Lieberman Software. It might help you sleep better at night.

The MonitorGrid cloud app runs on Azure and is wired with Linxter. Linxter allows for secure, reliable, two-way communication, regardless of the number of intermediary networks involved and regardless of whether or not they are secure.

I’m signing up to compare MonitorGrid with mon.itor.us and Pingdom. You’ll need to follow the instructions from this 00:19:07 Linxter Azure Integration Tutorial video to add the Linxter server features to your Azure project. You can download the Azure demo solution file from the Linxter Developer site.

Paspartu (passe-partout) is French for “one size fits all”. Recently I’ve been coming across posts explaining and “promoting” the idea of spawning threads inside a worker role, each one with a unique piece of work to do. All share the same idea and all of them describe the same thing.

The idea

You have some work to do, but you want to do it with the most efficient way, without having underutilized resources, which is one of the benefits of cloud computing anyway.

The implementation

You have a worker process (a Worker Role on Windows Azure) which processes some data. Certainly that’s a good implementation, but it’s not a best practice. Most of the time your instance will be underutilized, unless you’re doing some CPU- and memory-intensive work and you have a continuous flow of data to be processed.

In another implementation, we created a Master-Slave pattern. A master distributes work to other slave worker roles; the slaves pick up their work, do their stuff, return results and start over again. Still, in some cases that’s not the best idea either. Same cons as before: underutilized resources and a high risk of failure. If the master dies and the system isn’t properly designed for that, your system dies. You can’t process any data.

So, another approach appeared. Inside a worker role, spawn multiple threads, each running its own process or method, doing its work and returning a result. Underutilization is minimized, the Thread Pool does all the hard work for us, and as soon as .NET 4.0 is supported on Windows Azure, parallelization is easy and, allow me to say, mandatory. But what happens if the worker instance dies? Or restarts? Yes, your guess is correct. You lose all threads, and all the processing done up to that moment is lost, unless you persist it somehow. If you had multiple instances of your worker role instead, that wouldn’t happen; you would only lose data from the instance that died.
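One way to limit that loss is to persist each unit of work as soon as it completes, so a dead instance forfeits only the items that were in flight. A hedged sketch of the thread-per-work pattern with checkpointing; persist() stands in for a durable write (queue message, table row, or blob), and all names are illustrative:

```python
import queue
import threading

def run_workers(tasks, persist, num_threads=4):
    """Spawn worker threads that each pull tasks from a shared queue and
    persist every result as soon as it is produced, so a crashed instance
    loses only work that was in flight, not everything done so far."""
    work = queue.Queue()
    for task in tasks:
        work.put(task)

    def worker():
        while True:
            try:
                item = work.get_nowait()
            except queue.Empty:
                return  # no more work; let the thread exit
            result = item * 2        # placeholder for the real processing
            persist(item, result)    # checkpoint before taking more work

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

In a real worker role the queue itself would also be durable (e.g. Azure Queue storage), so unfinished items reappear for whichever instance survives.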

As Eugenio Pace says, “You have to be prepared to fail,” and he’s right. At any moment your instance can die, without notice, and you have to be prepared to deal with it.

Oh, boy.

So really, there is no single solution or best practice. For me, it’s best guidance: depending on your scenario, one of the solutions above, or even a new one, can fit you better than it fits others. Every project is unique and has to be treated as such. Try to think outside the box and remember that this is deep water for everyone. It’s just that some of us swim better.

It looks like I’m only doing sessions lately :-) Here’s another slide deck for a presentation I did on the Architect Forum last week in Belgium.

Abstract: “No, this session is not about greener IT. Learn about using the RoleEnvironment and diagnostics provided by Windows Azure. Communication between roles, logging and automatic upscaling of your application are just some of the possibilities of what you can do if you know about how the Windows Azure environment works.”

With IPL Season 3 occupying the mindshare of cricket fans today, sportsmen are gearing up to put their best foot forward in the cricket arena. In this competitive scenario, technology is expected to play a key role.

Vendors too are looking to cater to this attractive market through a variety of delivery models. The Cloud is a natural fit in this overall strategy. For example, SportingMindz, a Bangalore-based organization providing analytical solutions and services to sports organizations, has partnered with Microsoft India for the IPL3 series. The firm has migrated its cricket match analysis product, 22yardz, to the Windows Azure Platform. 22yardz is currently being used by Royal Challengers Bangalore and Kings XI Punjab. [Emphasis added.]

22yardz is cricket match analysis software designed to analyze the different aspects of a live match scenario, giving detailed statistics along with opposition strategy and player analysis in all departments of the match, with seamless integration of videos. The cloud model has helped SportingMindz address pain points such as performance, scalability and availability.

Microsoft Research’s eScience Group is focused on researching ways that information technology (IT) can help solve scientific problems. Dr. Catharine van Ingen, a Partner Architect in Microsoft Research’s eScience Group, talks in this video about how she and others in Microsoft Research have worked with scientists at the University of California, Berkeley and Lawrence Berkeley National Laboratory to address the computing needs in managing Northern California’s Russian River Valley watershed. In this project, Microsoft's Windows Azure cloud-computing platform was used to help these researchers manage massive amounts of data in a scalable way.

To watch the video and learn more about how Windows Azure was used, click here.

Technology is transforming our ability to measure, monitor and model how the world behaves. This has profound implications for scientific research and can transform the way we tackle global challenges such as health care and climate change. This transformation also will have a huge impact on engineering and business, delivering breakthroughs and discoveries that could lead to new products, new businesses – even new industries.

Today, we’re proud to introduce Microsoft’s Technical Computing initiative, a new effort focused on empowering millions of the world’s smartest problem-solvers. We’ve designed this initiative to bring supercomputing power and resources to a much wider group of the scientists, engineers and analysts who are using modeling and prediction to solve some of the world’s most difficult challenges.

Our goal is to create technical computing solutions that speed discovery, invention and innovation. Soon, complicated tasks such as building a sophisticated computer model – which would typically take a team of advanced software programmers months to build and days to run – will be accomplished in an afternoon by a single scientist, engineer or analyst. Rather than grappling with complicated technology, they’ll be able to spend more time on important work.

As part of this initiative we’re also bringing together some of the brightest minds in the technical computing community at www.modelingtheworld.com to discuss the trends, challenges and opportunities we share. Personally, I think this site provides a great interactive experience with fresh, relevant content—I’m incredibly proud of it. Please tune in and join us—we welcome your ideas and feedback.

In terms of technology, the initiative will focus on three key areas:

Technical computing to the cloud: Microsoft will help lead the way in giving scientists, engineers and analysts the computing power of the cloud. We’re also working to give existing high-performance computing users the ability to augment their on-premises systems with cloud resources that enable ‘just-in-time’ processing. This platform will help ensure processing resources are available whenever they are needed—reliably, consistently and quickly.

Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores. But most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test, and troubleshoot. We know that a consistent model for parallel programming can help more developers unlock the tremendous power in today’s computers and enable a new generation of technical computing. We’re focused on delivering new tools to automate and simplify writing software through parallel processing from the desktop… to the cluster… to the cloud.
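The "consistent model for parallel programming" idea can be sketched in a few lines (Python here purely as an illustration; the function names and workload are hypothetical, not anything Microsoft shipped): the same map-style call that runs serially can be pointed at a pool of cores without restructuring the program.

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(n):
    # Stand-in for a CPU-bound model step (hypothetical workload).
    return sum(i * i for i in range(n))

def run_serial(steps):
    return [simulate(n) for n in steps]

def run_parallel(steps, workers=4):
    # Identical map-style call, now spread across processor cores;
    # a cluster- or cloud-backed executor could slot in the same way.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate, steps))

if __name__ == "__main__":
    steps = [10_000, 20_000, 30_000]
    assert run_serial(steps) == run_parallel(steps)
```

The design point is the interchangeable executor: the caller's code stays the same whether the work runs on one core, many cores, or a remote pool, which is exactly the desktop-to-cluster-to-cloud continuity the initiative describes.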

Develop powerful new technical computing tools and applications: Scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power using simpler tools to increase the speed of their work, and we’re building a platform with this objective in mind. We expect that these efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration.

The path we’ve taken to arrive at this initiative is built on a foundation of great technology and underpinned by a strong vision for bringing the power of technical computing to those who need it most. Microsoft is committed to this business, and I am looking forward to working with our industry partners and customers to help bring about the next wave of discovery.

The new group falls under Bob Muglia, who is President of Microsoft’s Server and Tools business, but will work closely with various groups in Microsoft Research, company officials said. The three areas of focus of the group and the broader initiative will be cloud, parallel-programming and new technical computing tools. There is a new technical computing community Web site, www.modelingtheworld.com, launching as part of the effort.

If you’re interested in particulars regarding the technical tools, here’s what the Softies are saying (via a spokesperson):

“Windows HPC (High Performance Computing) Server 2008 and all of the capabilities in Visual Studio 2010 that allow developers to take advantage of parallelism (e.g., the parallel profiler and debugger and the ConcRT (Concurrency) runtime) are examples of technology the Technical Computing group has already delivered. In the future we’ll be delivering Technical Computing services on top of Azure that will integrate with desktop applications from Microsoft and partners.” [Emphasis added.]

Microsoft has been quietly building a team of hundreds of people with the mission of giving the world's scientists and engineers the ability to develop and work with complex models of natural and manmade systems much more quickly and easily than they can today.

"It's one of the largest-growth teams in the company right now, and overall one of the biggest bets that we're making strategically," said Bill Hilf, a Microsoft general manager working on the Technical Computing initiative.

As part of the Technical Computing initiative, Microsoft says it's developing a technology platform that will help developers build desktop applications that can tap into large volumes of data and easily harness powerful computers in server clusters and data centers. In addition, the company is developing a new set of technical computing services for its Azure cloud-computing system, to help scientists make better use of the company's worldwide data centers. [Emphasis added.]

The team is also working on ways of developing software better tuned for machines with multiple processors, or computing cores. …

The role of cloud management solutions in the enterprise world is becoming increasingly important. With the interest and adoption of cloud in the enterprise steadily rising, solutions that help an organization to effectively harness, orchestrate, and govern their use of the cloud are floating to the top of the needs list. Developing and delivering solutions in this arena is no small task, and one made even tougher by enterprise user expectations and requirements. Just what are some of the enterprise requirements and expectations for cloud management solutions?

First things first, users expect cloud management solutions to be broadly applicable. What do I mean by that? Take for instance a recent discussion I had with an enterprise user about a management solution for cloud-based middleware platforms. The solution that was the topic of our discussion enables users to create middleware environments, virtualize them, deploy them into a cloud environment, and manage them once they are up and running.

During the course of that discussion, the user told me: "I want one tool to do it all." In this case, all referred to the ability to support multiple virtualization formats, varying hardware platforms, different operating system environments, all cloud domains, and a plethora of middleware software. Of course, the user acknowledged it was a bit of an overreach because when a tool "does it all" it often means that it does nothing, and when I pressed a bit more the real desire was for a single, unified management interface. This of course points back to the notion of open cloud solutions that I wrote about a while back. You will never get a tool that does it all, but if you get open tools, chances are you can build a centralized interface that exposes the capability of many tools, and thus logically presents a "single tool that does it all" to your end users.

In many cases, enterprises adopt cloud computing as a more efficient and agile approach to something they already do today. For example, if I put it into the context of the part of the cloud I deal with, users may leverage the cloud as a means to standup and tear down application environments in a much faster and simpler manner than their traditional approach. No one will argue that faster and simpler is good, but that does not mean you can or should sacrifice the insight and control into these processes that the organization requires. If the enterprise requires a request/approval workflow process for commissioning and decommissioning application environments, the cloud management solution must provide the necessary hooks. In a more generalized sense, cloud management solutions must enable integration into an enterprise's governance framework. Without this integration, the truth is it is likely inapplicable for enterprise use.

If I have learned one thing from users over the past year with respect to enterprise-ready cloud management solutions it is this: Auditability is huge! Organizations want to know who is doing what, when they are doing it, how long they are doing it for, and much more. Users pretty well assume that a cloud management solution provides insight into these kinds of metrics. The obvious use case here is the ability to track cloud usage statistics among various users and groups to facilitate cost allocation and/or chargeback throughout the enterprise. Another, perhaps less obvious, use case concerns configuration change management. The ability to very quickly determine what was changed, when it was changed, and who changed it is crucial when a cloud management solution and the underlying cloud is distributed among a wide set of enterprise users.

The fact is that we are in the early phase of the emerging need for cloud management solutions, and basic requirements and expectations are still taking shape. The few listed here are just a start, and among those I hear most commonly. It will be interesting to watch these expectations shift and grow, especially as enterprises adopt federated, highly heterogeneous cloud environments. I certainly welcome any feedback or insight you may have into the need for cloud management solutions.

Lori MacVittie claims “In cloud computing environments the clock literally starts ticking the moment an application instance is launched. How long should that take?” in her When (Micro)Seconds Matter post to the F5 DevCentral blog:

The term “on-demand” implies right now. In the past, we used the term “real-time” even though what we really meant in most cases was “near time”, or “almost real-time”. The term “elastic” associated with scalability in cloud computing definitions implies on-demand. One would think, then, that this means that spinning up a new instance of an application with the intent to scale a cloud-deployed application to increase capacity would be a fairly quick-executing task.

That doesn’t seem to be the case, however.

Dealing with unexpected load is now nothing more than a 10-minute exercise in easy, seamless integration of cloud and data center services.

A Twitter straw poll on this subject (completely unscientific) indicated an expectation that this process should (and for many does) take approximately two minutes in many cloud environments. Minutes, not seconds. Granted, even that is still a huge improvement over the time it’s taken in the past. Even if the underlying hardware resources are available there’s still all of the organizational IT processes that need to be walked through – requests, approvals, allocation, deployment, testing, and finally the actual act of integrating the application with its supporting network and application delivery network infrastructure. It’s a time-consuming process and is one of the reasons for all the predictions of business users avoiding IT to deploy applications in “the cloud.”

IT capacity planning strategy has been to anticipate the need for additional capacity early enough that the resources are available when the need arises. This has typically resulted in over-provisioning, because it’s based on the anticipation of need, not actual demand. It’s based on historical trends that, while likely accurate, may over- or under-estimate the amount of capacity required to meet historical spikes in demand.

IS “FASTER” GOOD ENOUGH?

Cloud computing purports to provide capacity on-demand and allow organizations to better manage resources to mitigate the financial burden associated with over-provisioning and the risks to the business by under-provisioning. The problem is that provisioning resources isn’t an instantaneous process. At a minimum the time associated with spinning up a new instance is going to delay increasing capacity by minutes.

Virtual images don’t (yet) boot up as quickly as would be required to meet an “instant on” demand. The processes by which the application is inserted into the network and application delivery network, too, aren’t instantly executed as there are a series of steps that must occur in the right order to ensure accessibility.

An instance that’s up but not integrated into the ecosystem is of little use, after all, and the dangers associated with missing a critical security step increase risk unnecessarily.

The end result is that capacity planning in the cloud remains very much an anticipatory game with operators attempting to prognosticate from historical trends when more capacity will be required. Operations staff needs to be just as vigilant as they are today in their own data centers to ensure that when the demand does hit a cloud-based application the capacity to meet the demand is already available. If the cloud computing environment requires a “mere ten minutes” to provision more capacity, then the operations staff needs to be ten minutes ahead of demand. It needs to project out those ten minutes and anticipate whether more capacity will be required or not. …

Lori continues her argument and concludes:

If this test isn’t part of your standard “cloud” acquisition process, it should be, because “fast enough” is highly dependent on whether you need capacity available in the next hour, the next minute, or the next second.
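Lori’s “ten minutes ahead of demand” point reduces to simple arithmetic: if provisioning takes T minutes, you must start provisioning when projected demand T minutes out exceeds current capacity. A minimal sketch of that trigger (all names, numbers, and the naive linear forecast are my own illustration, not anything from the post):

```python
def should_scale_up(history, capacity, provision_lag, interval=1.0):
    """Decide whether to start provisioning a new instance now.

    history: recent demand samples (requests/sec), one per `interval` minutes.
    capacity: current serving capacity in requests/sec.
    provision_lag: minutes until a new instance is actually serving traffic.
    Projects demand `provision_lag` minutes ahead with a naive linear trend.
    """
    if len(history) < 2:
        return False
    slope = (history[-1] - history[0]) / ((len(history) - 1) * interval)
    projected = history[-1] + slope * provision_lag
    return projected > capacity

# Demand rising ~10 req/s per minute, capacity 150 req/s,
# and a 10-minute provisioning lag: 110 + 10*10 = 210 > 150.
print(should_scale_up([80, 90, 100, 110], capacity=150, provision_lag=10))  # True
```

The point of the sketch is that the provisioning lag appears directly in the decision: a “mere ten minutes” of spin-up time forces the forecast horizon out by the same ten minutes.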

Battle of the Public Clouds: Who is Winning? – April 22, 2010

Many prognostications about the public cloud focus on three key vendors: Amazon, Google and Microsoft. This week on cloudchasers, we’ll check out the numbers, platforms and competing visions as we look at each vendor’s place in the market today and in the future. Also, with so many companies jumping on the cloud bandwagon, are there others who are more appropriate for this list? Who do YOU think will be the winner? Will there be just one winner?

When he joined Microsoft, chief software architect Ray Ozzie got a chance to take a step back and look at the technology industry.

What he saw was that the PC wasn't the centre of the computing universe any more, but, like Nvidia's Jen-Hsun Huang, he told the Future in Review conference this week that he doesn't think it's going away any time soon either. He also had words to say about the cloud, online privacy, HTML 5 and Apple.

"The world that I see panning out is one where individuals don't shift from 'I'm using exclusively this one thing called a PC as a Swiss army knife for everything I do' to using a different Swiss army knife. The beauty of what's going on in devices is you can imagine a device.

"Previously you could imagine software and build it but hardware was very hard and took a long time to build. Now you can imagine end-to-end device services.

"So there's probably a screen in the car that federates with the phone when you bring it into the car. Will we have a device with us that's always on? Yes. We call it a phone but it's a multi-purpose device.

"Will we also carry something of a larger form factor that we can quickly type on? For many of us, the answer is yes." And what will it look like? "The clamshell style of device is a very useful thing and I think it will be with us for ever. I think there is a role for the desktop too…"

Office, Docs and better productivity

Ozzie's pet projects at Microsoft include the Azure cloud service and the social computing tools like the Spindex social aggregator, the Outlook Social Connector and Facebook Docs (which all come out of the new Fuse Labs Microsoft site near his home town of Boston). …

Mary continues the story and concludes:

With only a hint of irony, he says "Facebook is doing us all a favour by pushing the edge and causing the conversations to be very broad."

Should Microsoft be moving faster in mobile, in browsers, in the cloud, into this future world? "We're very impatient in the technology industry," Ozzie points out. "We get very enamoured with the next shiny object. Let's get real here. How many years have any of these things actually been out? How many years have we all been using these pocket internet companions?

"It's actually been a relatively small number of years. We haven't even seen the TV get lit up yet as a communication device; we haven't seen all the screens on the wall being lit up as devices. Every single one of these is going to get lit up as a similar kind of device."

Speaking in Australia, noted cryptographer and IT security pioneer Whit Diffie commented on Cloud Computing's potential to destroy current security approaches, but improve security overall for the masses.

"At worst [cloud computing] will fundamentally destroy the current security paradigm," he said. "But on the other hand it's going to substantially improve the average level of security of ordinary shleps who didn't pay any attention to the matter."

Diffie's presentation was another example of the global nature of Cloud Computing, and was made as the world turns its eyes toward Europe next month for Cloud Expo Europe in Prague June 21-22. He became famous in the 70s with breakthrough security-key research, and served as a Distinguished Engineer at Sun for almost 20 years (and was also a Sun Fellow). He was one of those who didn't make the transition from Sun to Oracle, and now serves as a VP with ICANN.

Diffie said he believes that "cloud computing will become very widespread," and that "there's going to be a tremendous security gain by pushing things into standard security practices" if companies start to adopt government security contract models. "Contracts will have to occur very fast" to cater to demand for services needed for only a few minutes or fractions of a second. "You've got to know whether those people are capable of fulfilling the contract. They've gone through a set of bureaucratic hurdles so that all of a sudden if a secret contract comes up it can be awarded overnight - there's very little example of that in the civilian world."

He also warned against a rise of that seemingly invariable tendency for any business that's run by humans: proprietary methods. "Above the (open-source) GPL, everything Google does is a trade secret," he noted.

[IT is going to become] a manager of a dynamic supply chain of internal and external resources to deliver business services to internal and external clients.
–Ajei Gopal, CA World day two keynote

CA’s a favorite whipping boy for IT insiders. Their giant, long-lived portfolio and name-change-inducing events get most people to snicker when you mention “CA.” They’re a classic big-spend enterprise vendor: comprehensive, enterprise-priced, and rarely innovation-leading. So their string of acquisitions of relatively young and hip companies in the recent past has left folks befuddled. What exactly is CA doing with 3Tera, Nimsoft, NetQoS, Oblicore, and others?

Stated reasons have been access to new markets (SMB and MSP with Nimsoft) and jamming in cloud (3Tera and, to an extent, NetQoS). The first few days of CA World in Las Vegas have reinforced that messaging: CA is all over the cloud, but with the experienced hand of an elder company. They’re not going to shatter the peace of your glass house…unless you want them to. The cloud is here, but we’ll trickle it in, or gush it in – you pick the speed. Hybrid clouds are the thing, getting around security concerns like “I don’t know where my data is.”

Cloud Comfort

The tone and agenda so far indicate that CA believes their customers are afraid of using cloud technologies, unsure about them. Both keynotes have revolved less around technology and more around soothing IT about becoming cloud-friendly.

It’s like the old folklore about instant messaging in enterprises. “No one in my shop is using IM!” CIOs would decry as employees installed and used public IM clients by the thousands. More important, the ease of installing that technology and the heightened communication it brought made IM invaluable, no matter how non-enterprise it was (that is, not under the control of the corporation for purposes of security, compliance, and SLAs).

So far, CA’s doing a good job talking the cloud talk – even with rational insertions of technologies that do the visionary stuff. At least in presenting, they seem to understand the mapping of cloud practices – mass-automating, charge back-cum-metered billing, etc. – to existing IT management practices and their own technologies. Most folks CA’s size would skim over the actual products you used to put cloud theory into practice. …

Michael concludes:

You can smell big consulting deals looming around there: 6 months to discover all the IT services a company has and then sort out road-maps for cloudizing – or not – each. Then a cycle for acquiring technologies to manage and run the cloud, and so on.

How do you make an end-run around that buzz-kill cycle? No one really knows at this point. The middleware-plus-infrastructure portfolio that VMware is building up starts to look interesting: slapping a bunch of lightweight interfaces on lumbering legacy and, hopefully, allowing new development to proceed without legacy IT dragging it into schedule friction. Or you could isolate the old from the new. Who knows? Being caught in this quagmire is the whole point. The guiding question is how any cloud-tooler, like CA, is going to help prevent IT from getting stuck with more legacy IT, cloud-based or not.

Paulo Del Nibletto reports “Company's new CEO outlines a win at all costs cloud computing strategy” as a preface to his CA adds to name and its cloud strategy story of 5/17/2010 for ITBusiness.ca:

This year's CA World is the second time in the last three events where CA augmented its corporate name. Company CEO Bill McCracken announced to the more than 7,000 attendees that CA will now be called CA Technologies.

The former IBM Corp. (NYSE: IBM) PC boss also revealed CA's cloud strategy, which will be to bring security to the cloud, addressing a shortcoming that many analysts have said has stunted the cloud's growth.

McCracken told the story of how CA considered a switch from SAP to Salesforce.com in the cloud to solve its dilemma of providing company employees with sales data on a unified system across all of its geographies.

McCracken was short on specifics, leaving those for the rest of the conference, but he was clear on CA's go-to-market cloud strategy: CA will leave it up to the customer.

McCracken said that whether it's on-premises or a Software-as-a-Service model, customers will be asking for different kinds of cloud services, and that CA will offer solutions either through channel partners or managed service providers that add value, or through internal direct sales.

“For us it’s going to be on-premise or SaaS or both. It could affect our base. We know that, but it will happen, so we need to make it happen, and that is one of the reasons why we bought Nimsoft. Customers decide. We will not decide for them,” he said.

McCracken also said that virtualization will be an integral part of CA's strategy. He said that to support cloud services customers need to virtualize first. CA announced three new programs for this line of business: Virtual Automation, Virtual Assurance and Virtual Configuration that will manage physical and virtual machines and help customers move from the glass house to the virtual world and eventually the cloud.

Learn about Windows Azure and Windows Phone development together in this day packed with training and coding. Register now for your chance to learn the latest development techniques with Windows Azure and Windows Phone and your chance to win a Zune HD.

Light breakfast and lunch included, followed by a networking reception.

This 4th annual San Francisco enterprise software development conference designed for team leads, architects and project managers is back! Bloggers wrote about 32 of the 60 sessions at last year’s event; read this article to see what the attendees said. There is no other event in the US with similar opportunities for learning, networking, and tracking innovation occurring in the Java, .NET, Ruby, SOA, Agile, and architecture communities.

CA is expanding its partnership with Cisco and unveiling several new management products to improve cloud computing and virtualization deployments, the company said in a series of announcements Monday at CA World in Las Vegas. The new products are based partly on technology acquired in CA's recent buying spree, in which the company purchased vendors 3Tera, Oblicore and Cassatt.

CA also said it is changing its name slightly from CA, Inc. to CA Technologies, to reflect a broad strategy of managing "IT resources from the mainframe to the cloud, and everything in between."

CA's partnership with Cisco includes integration of CA system management software with Cisco's Unified Computing System, letting IT pros control the Cisco technology from within the CA management interface.

Separately from the Cisco partnership, CA is announcing a variety of software tools to manage virtual computing resources and cloud-based systems.

While 60% of CA's $4.2 billion business is related to the mainframe, the company is making cloud computing one of its main focuses, along with security, software-as-a-service-based IT management and virtualization management, says Tom Kendra, vice president of enterprise products and solutions.

Managing the new virtualization layer and cloud-based services is no easy task, in part because the technologies have been installed in addition to -- rather than as replacements for -- existing IT infrastructure, and require a heterogeneous management approach, Kendra says.

CA's cloud strategy centers around a new "Cloud-Connected Management Suite" that includes four products. Those include Cloud Insight, for assessing how internal and external IT services relate to business priorities; Cloud Compose, for creating, deploying and managing composite services in a cloud; Cloud Optimize, which optimizes the use of both internal and external IT resources for cost and performance; and Cloud Orchestrate, which "will provide workflow control and policy-based automation of changes to service infrastructures."

CA also announced three virtualization products, which are Virtual Assurance, for monitoring, event correlation and fault and performance management in virtual environments; Virtual Automation, which provides automated self-service virtual machine life-cycle management; and Virtual Configuration, which manages sprawl and tracks configuration changes to meet regulatory compliance and audit needs.

The products will start hitting the market in June, but CA did not offer more specific availability or pricing information.

In his keynote address at CA World 2010, Bill McCracken, chief executive officer of CA Technologies, told 7,000 attendees that the technology industry is at an inflection point, and that business will embrace virtualization and cloud computing in order to remain competitive.

"When economic conditions, technology advances, and customer needs align, transformation happens," said McCracken. "As we emerge from the global economic downturn, we have a tremendous opportunity to leap forward and embrace change, or risk being left behind."

McCracken also described a vision for how all businesses will evolve. "People still ask if I think the cloud is really going to happen. I say no; I don't think it's going to happen. I know it is going to happen because it is happening now. Virtualization and cloud computing will enable businesses to adapt to rapidly changing market and customer needs. We will be right there to help our customers gain a competitive advantage as this critical inflection point in our industry takes hold."

"Running IT in a cloud-connected enterprise will be more like running a supply chain, where organizations can tap into the IT services as needed - specifying when, where and precisely how they are delivered," he said. "This has never been more important, because business models no longer change every few years or even once a year. Cycles are increasingly shorter, which puts a whole new set of demands on the CIO and on the organization."

McCracken also spoke about the evolution of the company name from CA to CA Technologies.

"The name CA Technologies acknowledges our past and points to our future as a leader in delivering the technologies that will revolutionize the way IT powers business agility," said McCracken. "We are executing on a bold strategy, where IT resources -- from the cloud to the mainframe and everything in between -- are delivered with unprecedented levels of flexibility."

IT professionals and customers from around the world are attending CA World to get insights into what is happening in the IT management space and to learn how to best leverage CA Technologies to maximize their organization's IT capabilities. The user conference kicks off today and ends May 20.

In this newsletter, we are excited to announce that Amazon CloudFront's access log feature is now enabled for streaming distributions - read more below. This month's newsletter also highlights AWS's global expansion with our new Singapore Region, Amazon VPC availability in Europe, and Amazon RDS availability in Northern California. In May and June we have a full calendar of events taking place around the world, plus many virtual events; we hope you can join us.

Just Announced: Amazon CloudFront Access Logs For Streaming

We're excited to announce that Amazon CloudFront's access log feature is now enabled for streaming distributions. Now, every time you stream a video using AWS’s easy-to-use content delivery service, you can capture a detailed record of your viewer’s activity. In addition, Amazon CloudFront will record the edge location serving the stream, the viewer's IP address, the number of bytes sent, and several other data elements. There are no additional charges for access logs, beyond normal Amazon S3 rates to write, store and access the logs. You can read more about this new feature. …
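The fields called out above (edge location, viewer IP, bytes sent) suggest straightforward downstream analysis. Here is a sketch of reading such a log, assuming the W3C-style, tab-separated layout with a #Fields: header that CloudFront logs use; the sample record and its exact column names are my own illustration, so check the header line in a real log file:

```python
import io

# Hypothetical sample modeled on a tab-separated streaming access log;
# real files name their actual columns in the #Fields: header.
SAMPLE = """\
#Version: 1.0
#Fields: date time x-edge-location c-ip sc-bytes x-event
2010-05-20\t18:01:02\tSEA4\t192.0.2.10\t524288\tplay
2010-05-20\t18:03:45\tSEA4\t192.0.2.10\t1048576\tstop
"""

def parse_log(stream):
    """Yield one dict per record, keyed by the names in the #Fields: header."""
    fields = None
    for line in stream:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line.split()[1:]
        elif line and not line.startswith("#") and fields:
            yield dict(zip(fields, line.split("\t")))

records = list(parse_log(io.StringIO(SAMPLE)))
total_bytes = sum(int(r["sc-bytes"]) for r in records)
print(total_bytes)  # 1572864 bytes sent to this viewer
```

Driving the parser off the #Fields: header, rather than hard-coding column positions, keeps the code working if the log format gains or reorders columns.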

I saw a spate of recent articles that had some pretty amazing statistics and news bits on Amazon Web Services and competitors. In no particular order:

A survey of 600 developers by Mashery reported that 69% of respondents said Amazon, Google, and Twitter were the most popular APIs they were using.

Even the Federal Government is turning to the Amazon Cloud to save money. Sam Diaz reports that the move of Recovery.gov will save hundreds of thousands of dollars. We found tremendous savings at Helpstream from our move to the Amazon Cloud.

Derrick Harris at GigaOm suggests it’s time for Amazon to roll out a PaaS to remain competitive. As an aside, are you as tired of all the “*aaS” acronyms as I am? Are they helping us to understand anything better? BTW, I think the move Harris suggests would be the wrong move for Amazon because it would lead to them competing with customers who are adding PaaS layers to Amazon. They should stay low-level and as language/OS-agnostic as possible in order to remain the Switzerland of the cloud. Let Heroku-like offerings be built on top of the Amazon infrastructure by others. Amazon doesn’t need to add a PaaS, and they don’t need to add more value because they’re afraid of being commoditized. As we shall see below, they are the commoditizers everyone else needs to be afraid of.

Bob continues by citing more paeans to Amazon’s prowess in the IaaS market and concludes:

What does it all mean?

If nothing else, Amazon has a pretty amazing lead over other would-be Cloud competitors. And they’re building barriers to entry of several kinds:

Nobody but Amazon has the experience of running a Cloud service on this scale. They can’t help but be learning important things about how to do it well that potential competitors have yet to discover.

There is a growing community of developers whose Cloud education is all about Amazon. Software Developers as a group like to talk a good game about learning new things, but they also like being experts. When you ask them to drop their familiar tools and start from scratch with something new you take away their expert status. There will be a growing propensity among them to choose Amazon for new projects at new jobs simply because that is what they know.

Economies of Scale. Consider what kind of budget Amazon’s competitors have to pony up to build a competing Cloud infrastructure. A couple of small or medium-sized data centers won’t do it. Google already has tons of data centers, but many other companies that haven’t had much Cloud presence are faced with huge up front investments that grow larger day by day to catch up to Amazon.

Network effects. There is latency moving data in and out of the Cloud. It is not significant for individual users, but it is huge for entire applications. The challenge is to move all the data for an application during a maintenance window despite the latency issue. However, once the data is already in the Cloud, latency to other applications in the same Cloud is low. Hence data is accretive. The more data that goes into a particular Cloud, the more data wants to accrete to the same Cloud if the applications are at all interconnected.
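The maintenance-window constraint behind this data gravity is easy to quantify: transfer time is just dataset size over sustained throughput. A quick sketch (the figures are hypothetical, chosen only to show the scale of the problem):

```python
def transfer_hours(dataset_gb, effective_mbps):
    # Hours to move a dataset at a sustained throughput in megabits/sec.
    bits = dataset_gb * 8 * 1000**3          # decimal GB -> bits
    return bits / (effective_mbps * 1e6) / 3600

# Moving a 1 TB (1000 GB) application dataset over a sustained
# 100 Mbit/s link takes most of a day, far beyond a typical window:
print(round(transfer_hours(1000, 100), 1))  # 22.2 hours
```

Once that dataset is inside a cloud, interconnected applications face the same bill to move it back out, which is why data accretes to whichever cloud already holds it.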

It’s going to be interesting to watch it all unfold. It’s still relatively early days, but Amazon’s competitors need to rev up pretty soon. Amazon is stealing the Cloud at an ever-increasing rate.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.