> > Honestly, the concept doesn't make sense to me.
> >
> > Scaling is something that usually requires extensive
> > refactoring of one's applications. A virtualization layer
> > can't do that.
>
> Once your program runs on three machines...

I think the point is that this is the hard part. Once you've done that, what benefit does virtualization have over having more servers, from a system-design perspective?

The other thing about this is that we have long been able to run multiple application servers on a single machine. This is actually a recommended configuration for clusters (multiple instances on multiple machines at multiple locations).

Virtualization seems like overblown hype to me. I can absolutely see why it's something that's useful for infrastructure but I don't see how it has any impact on software design other than you can perhaps assume that your app will never have to run concurrently with another instance of itself. Is that the point here?

I think it's true that the question is a little ill-formed. Obviously virtualization isn't a scalability solution but it is a superb server provisioning solution.

That doesn't mean that virtualization offers no advantages over scaling by installing new hardware. We live in a real world, not a hypothetical one. In this flawed real world I recently had one of those nasty epiphanies when a manager asked, "Is the Staging server for application X ready to hand over yet?"

"What Staging server?"

"The one you guys were building for the release team to use this afternoon."

"Let me check on that," I say, scurrying furiously. In a few minutes it was clear that due to a miscommunication a server hadn't been purchased or built.

Luckily we had been deploying Red Hat VMs on Dell and IBM servers for a few months. It took 65 minutes to deploy a new Linux VM, configure it, install the necessary infrastructure and applications, and hand it over. If we had had to purchase a host it would have been a three-week process, and we would have spent perhaps two person-days justifying and debating the purchase.

I think that virtualization is the bee's knees. But like TiVo, you have to use it to really see the benefits. I expect that in ten years' time there will be no medium-sized firms operating datacenters, because the economics of using EC2 or an equivalent will be so obvious to all.

First, to understand the concept of Application Virtualization we have to set aside OS and server virtualization. While some of the benefits are similar, the approach and overall value proposition are quite different.

Application Virtualization is the complete abstraction and isolation of the application from the operating system and hardware into a container. This decoupling allows the application to aggregate compute resources across the data center and to run anywhere at any time based on policy, demand, and resource constraints. Application owners can now control the policies that govern resource allocation and the provisioning of additional application capacity, to ensure service quality during times of planned or unplanned downtime.

The key is visibility into the application's performance at any given moment, not merely that the CPU was pegged at 100% for 12 minutes - that alone doesn't mean much. Looking at statistics like queue length, thread counts, response times, and memory usage, and correlating these and other metrics through policy-based automation, is what makes this approach distinct - no manual intervention is involved in any of these steps.
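The idea of correlating several metrics rather than reacting to CPU alone can be sketched as a simple policy check. This is a minimal illustration, not FabricServer's actual API; all class names and thresholds here are invented for the example.

```python
# Hypothetical multi-metric scaling policy: a pegged CPU alone
# does not trigger action; corroborating signals are required.
from dataclasses import dataclass

@dataclass
class Metrics:
    cpu_pct: float        # CPU utilization, 0-100
    queue_length: int     # pending requests
    response_ms: float    # mean response time
    threads: int          # active worker threads

def needs_more_capacity(m: Metrics) -> bool:
    """Scale out only when several signals agree that the
    application itself is saturated, not just the CPU."""
    saturated_cpu = m.cpu_pct > 85
    backed_up = m.queue_length > 50 or m.threads > 200
    slow = m.response_ms > 500
    # Require corroborating evidence before acting.
    return saturated_cpu and (backed_up or slow)

# CPU pegged but queues empty and responses fast: no action.
print(needs_more_capacity(Metrics(100, 2, 40, 10)))     # False
# CPU high, queues deep, responses slow: add capacity.
print(needs_more_capacity(Metrics(95, 120, 900, 150)))  # True
```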

So, now that we understand the approach, let's talk about scalability. Using Application Virtualization in this context removes much of the burden on a developer to write a flawless application that will scale. Tools like those from Mercury (now HP), e.g. LoadRunner, have assisted developers for years in assuring their applications would scale regardless of the number of users or transactions. In this new world of Web 2.0 and the evolution of the enterprise application, there must be other means of scaling.

Our approach to Application Virtualization has the intelligence to evaluate performance and availability in real time, at runtime, to expand and contract application capacity to meet fluctuations in demand. For example, say I am eBay at Christmas time: I am likely to have more users and transactions than I would in July (a slow period for the business over the summer). Using this approach to Application Virtualization, it doesn't matter whether the application scales. Why? Because we would be able to spin up more instances of the application within a cluster, or even if there is no cluster, and update the load balancer(s) so there are more instances of the application to send workload to. Once demand goes back down to a normal level, we would quiesce the extra application servers and return to the normal configuration state. This is truly autonomic computing, removing the need to manually configure anything. Voilà! Service quality, reliability, and scalability are met despite huge spikes in demand.
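The expand-and-contract loop described above can be sketched in a few lines: grow the instance pool when demand spikes, register the new instances with the load balancer, and quiesce back to the baseline when demand subsides. Everything here (the `LoadBalancer` class, instance names, capacity numbers) is an illustrative assumption, not a real product interface.

```python
# Minimal autoscaling reconcile loop, hypothetical API.
class LoadBalancer:
    def __init__(self):
        self.backends = set()
    def register(self, instance):
        self.backends.add(instance)
    def deregister(self, instance):
        self.backends.discard(instance)

def reconcile(lb, instances, demand, min_instances=2, max_instances=10,
              per_instance_capacity=100):
    """One pass of the autoscaler: match pool size to demand."""
    # Ceiling division, clamped to the policy's min/max bounds.
    wanted = max(min_instances,
                 min(max_instances, -(-demand // per_instance_capacity)))
    while len(instances) < wanted:        # scale out
        inst = f"app-{len(instances) + 1}"
        instances.append(inst)
        lb.register(inst)                 # tell the LB about the new instance
    while len(instances) > wanted:        # quiesce back down
        inst = instances.pop()
        lb.deregister(inst)               # a real system would drain it first
    return instances

lb, pool = LoadBalancer(), []
reconcile(lb, pool, demand=150)   # steady state
print(len(pool))                  # 2
reconcile(lb, pool, demand=750)   # holiday spike
print(len(pool))                  # 8
reconcile(lb, pool, demand=150)   # demand subsides
print(len(pool))                  # 2
```

Note the caveat in the comment: a production system would drain connections from an instance before deregistering it, rather than dropping it abruptly.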

> So, now that we understand the approach let's talk about
> scalability. Using Application Virtualization in this
> context removes much of the burden on a developer to write
> a flawless application that will scale.

But this is not the case as far as I can tell. If your application can't run concurrently on two servers, it's not going to run on two virtualized servers.

> But this is not the case as far as I can tell. If your
> application can't run concurrently on two servers, it's
> not going to run on two virtualized servers.

You are still thinking in terms of OS or server virtualization. It doesn't matter if the application is running on a physical or logical (virtual) OS. By decoupling the application from dedicated hardware, we allow the application to run anywhere at any time - concurrently or not. In fact, we allow organizations that have a heterogeneous environment to run WebLogic and WebSphere on the same machines or Business Objects and Cognos on the same machine, if required. While this may not be a typical configuration, many organizations want to do many different things. With FabricServer (the technology being discussed here) we aren't in the execution path of the application, thus no changes are required to the application to reap these and other benefits.

> > But this is not the case as far as I can tell. If your
> > application can't run concurrently on two servers, it's
> > not going to run on two virtualized servers.
>
> You are still thinking in terms of OS or server
> virtualization.

No, I am not.

> It doesn't matter if the application is running on a
> physical or logical (virtual) OS.

That's my point.

> By decoupling the application from dedicated hardware, we
> allow the application to run anywhere at any time -
> concurrently or not.

No it doesn't. Real enterprise applications do not run in a vacuum. It's very easy to write a program that assumes it is the only thing updating the database. I just finished integrating with a very popular SaaS product that doesn't lock records that are being modified by users (optimistically or pessimistically) and fails concurrent updates to related objects with "something happened" error messages.
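The record locking the SaaS product was missing can be sketched as optimistic concurrency control: each row carries a version number, and an update succeeds only if the version is unchanged since it was read. This uses Python's standard-library `sqlite3`; the `account` schema is invented for the example and has nothing to do with the unnamed product above.

```python
# Optimistic locking via a version column: the pattern the parent
# post says the SaaS product lacked. Schema is hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, "
           "balance INTEGER, version INTEGER)")
db.execute("INSERT INTO account VALUES (1, 100, 0)")

def update_balance(conn, account_id, new_balance, expected_version):
    """Succeeds only if nobody else modified the row since we read it."""
    cur = conn.execute(
        "UPDATE account SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_balance, account_id, expected_version))
    return cur.rowcount == 1   # 0 rows touched => a concurrent writer won

# Two writers both read version 0; only the first update applies.
print(update_balance(db, 1, 150, expected_version=0))  # True
print(update_balance(db, 1, 175, expected_version=0))  # False - stale read
```

The loser gets a clean, detectable failure it can retry, instead of silently clobbering the other writer's change or surfacing a vague "something happened" error.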

If an application cannot be (safely) run concurrently on two (separate, individual) servers, how is virtualization going to make the application able to do that?

> > By decoupling the application from dedicated hardware,
> > we allow the application to run anywhere at any time -
> > concurrently or not.
>
> No it doesn't. Real enterprise applications do not run in
> a vacuum. It's very easy to write a program that assumes
> it is the only thing updating the database. I just
> finished integrating with a very popular SaaS product that
> doesn't lock records that are being modified by users
> (optimistically or pessimistically) and fails concurrent
> updates to related objects with "something happened" error
> messages.
>
> If an application cannot be (safely) run concurrently on
> two (separate, individual) servers, how is virtualization
> going to make the application able to do that?

Virtualization - OS or Application - is not going to do anything for an application that is not designed (intentionally or otherwise) to be anything more than stand-alone. You are quite right in this regard. So, Application Virtualization is not a panacea that can be applied to *all* applications. It certainly will not remove the onus of developing a sound application which is, by design, built to scale.

For those applications that are designed and built (appropriately!) to scale, Application Virtualization provides the immediate benefit of allowing them to scale - e.g. deliver on the SLAs - without *explicitly* allocating a specific number of servers (physical or logical). In other words, the application is decoupled from the server on which it will execute.

In this context, when Shayne (previously) stated that Application Virtualization "removes much of the burden on a developer to write a flawless application that will scale", it was meant to imply that much of the load testing that goes into determining the number of instances that need to be available at expected peak loads is simply not necessary. Instead, Application Virtualization allows the developer to determine the *minimum* number of instances that need to be available to support _normal_ (steady-state) operations, and to specify through policy the maximum number of instances it should grow to, along with the conditions that will cause the addition (or removal) of instances. This is done automatically at run-time with no static allocation of resources. The results are faster time to market, more efficient use of server (physical or logical) resources, and an ability to guarantee that the applications most important to the business are available all the time, and that the SLAs for these applications are met or exceeded.
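The minimum/maximum/conditions contract described above can be pictured as a small declarative policy that the runtime enforces. The field names and thresholds below are invented for illustration; no vendor format is implied.

```python
# Hypothetical declarative scaling policy: the developer states the
# steady-state floor, the growth ceiling, and the triggering
# conditions; the runtime adds or removes instances within bounds.
policy = {
    "application": "order-service",
    "min_instances": 2,          # enough for normal steady-state load
    "max_instances": 12,         # hard ceiling on growth
    "scale_out_when": {"response_ms_over": 500, "for_seconds": 60},
    "scale_in_when":  {"response_ms_under": 100, "for_seconds": 300},
}

def clamp_instances(policy, requested):
    """The runtime never allocates outside the declared bounds."""
    return max(policy["min_instances"],
               min(policy["max_instances"], requested))

print(clamp_instances(policy, 1))    # 2  - never below steady state
print(clamp_instances(policy, 40))   # 12 - never above the ceiling
```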

> In this context, when Shayne (previously) stated that
> Application Virtualization "removes much of the burden on
> a developer to write a flawless application that will
> scale", it was meant to imply that much of the load
> testing that goes into determining the number of instances
> that need to be available at expected peak loads is simply
> not necessary.

I assumed this is what was meant, but it's not what that statement implies. The statement implies that an application 'flawed' with respect to scaling will magically scale properly when virtualized. I think a lot of people making purchasing decisions will either not assume the qualifications you make in your post, or not understand that there is such a distinction at all.

The short of it is that virtualization is not going to make an application scale if that application cannot be scaled without virtualization. Agreed? If we agree, then I think making statements that imply otherwise is misleading.