How is a private cloud different from traditional use of middleware technology? The following scenario demonstrates the key differences: the rapid enablement and self-service that a private cloud provides.

The Risk Management organization in a large enterprise is looking to develop a new web-based application that will reuse several existing internal services. The development Team Lead establishes a Tenant account with the centralized infrastructure team. His development team begins local development on their workstations, and after a few weeks they have created a basic skeleton of the application.

Using a self-service web portal, the Team Lead indicates that his team requires a WebLogic 11g-based development environment with a 1GB heap and 50GB of database space for the next three months. That afternoon, he receives an e-mail that his environment is ready, with the appropriate URL and credentials for the Admin Server and database, and the file path for the server logs.

One of the developers uploads the skeleton application EAR to the cloud instance remotely via Eclipse and builds the required table structures. After another month of initial testing, the application is in good shape for some basic user testing.

The Team Lead returns to the self-service portal and requests that his development cloud be promoted to UAT status. That afternoon, he receives another e-mail with the corresponding credentials and URLs for an exact copy of his development cloud, now with a 3-node WebLogic cluster using 2GB heaps and 100GB of tablespace. After a quick round of unit testing with JUnit, the development team confirms the UAT cloud is ready for user testing. The Team Lead sends out an invitation to his QA group and several testers begin stress-testing the application.

As bugs are found, the development team adjusts the application using their development cloud. Updated EARs are remotely deployed to the UAT cloud via batch scripts from the source control system. After several weeks the application has passed enough tests to be ready for production usage.
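A redeployment step like this is commonly scripted around WebLogic's standard `weblogic.Deployer` command-line tool. The sketch below assembles such an invocation; the admin URL, credentials, application name, and EAR path are all hypothetical stand-ins for the values the provisioning e-mail would supply:

```python
# Sketch of a scripted redeployment to the UAT cloud. Host name, credentials,
# application name, and EAR path are invented for illustration.
import subprocess

def build_deploy_command(admin_url, user, password, app_name, ear_path, targets):
    """Assemble a weblogic.Deployer invocation that redeploys an updated EAR."""
    return [
        "java", "weblogic.Deployer",
        "-adminurl", admin_url,
        "-username", user,
        "-password", password,
        "-name", app_name,
        "-targets", targets,
        "-redeploy",           # replace the running version with the new EAR
        "-source", ear_path,
    ]

cmd = build_deploy_command(
    "t3://uat-admin.example.com:7001",   # hypothetical UAT Admin Server URL
    "deployer", "********",
    "riskApp", "dist/riskApp.ear", "uatCluster",
)
# The batch script would execute it, e.g.: subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

In practice this command would be generated and run by the source control system's build hooks, so every EAR that reaches UAT comes from a tracked revision.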

The Team Lead returns again to the self-service portal and requests that his UAT cloud be promoted to a Production cloud with a 4-node WebLogic cluster using 3GB heaps and 250GB of database space. After a 24-hour review period he receives an e-mail that the Production cloud has been created, with specific information about URLs and guest user credentials.

The development team is then refocused on another high-priority application. The original development and UAT clouds are purged, and the server capacity is returned to the available resource pool.

After a few weeks of production usage, it becomes clear there is a subtle logic error in one of the calculations inside a web page. The Team Lead reviews the bug notice and decides it warrants a code change. The development team makes the appropriate change on their workstations. The UAT cloud is re-provisioned for a few days so the QA team can verify the bug has been fixed. Once certified, the fixed EAR is deployed into the production cloud and the UAT cloud is de-provisioned yet again.

A cloud Administrator monitors the production cloud and notices that one of the managed nodes in the cluster is responding very slowly to user requests. This is determined by looking at the average response times of the “top servlets,” as identified by their request counts. The Administrator restarts the node remotely, and after several minutes it is serving requests normally again.
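The check the Administrator performs can be sketched as a small calculation: rank servlets by request count to find the “top” ones, then compare their average response times against a threshold. Everything here — the servlet names, the metric values, and the threshold — is invented for illustration:

```python
# Sketch of a "top servlets" health check: identify the busiest servlets by
# request count, then flag those with a high average response time.
def slow_servlets(stats, top_n=3, threshold_ms=500.0):
    """stats maps servlet name -> (request_count, total_response_time_ms)."""
    # "Top" servlets are selected by request count...
    top = sorted(stats, key=lambda s: stats[s][0], reverse=True)[:top_n]
    # ...then judged by their average response time per request.
    return [s for s in top if stats[s][1] / stats[s][0] > threshold_ms]

sample = {
    "QuoteServlet":   (12000, 1_800_000),  # avg 150 ms
    "ReportServlet":  (8000, 6_400_000),   # avg 800 ms -> slow
    "LoginServlet":   (500, 40_000),       # low traffic, not a "top" servlet
    "HistoryServlet": (9000, 900_000),     # avg 100 ms
}
print(slow_servlets(sample))  # → ['ReportServlet']
```

In a real deployment these counters would come from the server's monitoring interface rather than a hard-coded dictionary; the point is the two-step logic of selecting by volume and flagging by latency.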

Several months later, business conditions change such that the application is handling hundreds more users than expected. The cloud Administrators notice that the production cloud instance is approaching the limits of its allocated resources. They decide to add additional WebLogic nodes to the production cloud cluster on another server that has spare memory and CPU cores. This happens without impact to the cloud Customers, and the Team Lead is notified of the change.

Later that week it becomes apparent that the spike in usage was only temporary. The cloud Administrators then remove the fourth cluster node from the production cloud and return the second server to the resource pool.
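The scale-out and scale-in decisions in the two paragraphs above follow a simple policy: add a node when utilization nears the allocated limit, remove one when the spike subsides. A minimal sketch of that policy, with invented thresholds and node limits:

```python
# Sketch of the Administrators' capacity policy: grow the cluster under
# pressure, shrink it when demand falls. Thresholds and bounds are invented.
def resize_cluster(nodes, utilization, min_nodes=3, max_nodes=6,
                   high=0.85, low=0.40):
    """Return the new node count for a given cluster-wide utilization (0..1)."""
    if utilization > high and nodes < max_nodes:
        return nodes + 1   # scale out onto spare pooled capacity
    if utilization < low and nodes > min_nodes:
        return nodes - 1   # return the extra server to the resource pool
    return nodes           # within the normal operating range

print(resize_cluster(3, 0.92))  # usage spike: 3 -> 4 nodes
print(resize_cluster(4, 0.30))  # spike over: 4 -> 3 nodes
```

In the scenario the decision is made by human Administrators rather than an automated policy, but the key point is the same either way: the cluster size changes without touching the application itself.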

After a few years of production usage, the business rolls out a packaged application that provides similar functionality to the custom application. Users are instructed to begin using the packaged solution, and over time the production cloud instance is scaled back until it is simply running a single-node WebLogic cluster on a modest server. The larger-capacity hardware it was using is returned to the common pool for other cloud instances. Eventually the application is no longer needed and the production cloud instance is purged.

Note that the scenario presented above highlights several key aspects of a Platform as a Service:

Cloud Promotion - the application moved through its traditional lifecycle stages (development, UAT, production) by being promoted from one to the next. The cloud itself took care of cloning the previous stage onto new server capacity. This reduced complexity for the Tenant and helped ensure consistent configuration across stages.

Application Decoupled from Capacity - the hardware capacity beneath the application could be adjusted easily as business needs shifted. This happened “behind the scenes” without Tenant intervention. More importantly, it didn’t require any re-architecting or involvement by the development team.

Better Resource Allocation - because of the self-service and decoupled nature of the cloud, the actual servers could be managed as pooled resources. Additional server capacity could be added when necessary, and spare capacity could be better utilized where needed.