Tuesday, July 21, 2009

A followup to an interesting question being asked in the Cloud Computing Forum.

Definition of Scalability

"Scalability" is the ability to meet an increasing workload by incrementally adding a proportional amount of resource capacity. Scalability is therefore quantified by a measure of "linearity": maintaining a constant resource-to-workload ratio. This linearity lets the enterprise plan its resource budget according to its foreseeable workload growth.

In reality, the range of linearity is not infinite. It is important to understand where the linearity reaches its limit and whether the workload will grow beyond that limit.
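The linear relationship above can be sketched as a toy capacity-planning calculation. The per-server capacity and the linearity limit below are made-up illustrative numbers, not measurements from any real system:

```python
# A minimal sketch of linear capacity planning with a linearity limit.
# REQUESTS_PER_SERVER and LINEARITY_LIMIT are illustrative assumptions.

REQUESTS_PER_SERVER = 1000   # assumed constant resource : workload ratio
LINEARITY_LIMIT = 200        # assumed point where linear scaling breaks down

def servers_needed(workload_rps: int) -> int:
    """Return the server count for a workload, while still in the linear range."""
    n = -(-workload_rps // REQUESTS_PER_SERVER)  # ceiling division
    if n > LINEARITY_LIMIT:
        raise ValueError("workload exceeds the linear scaling range; redesign needed")
    return n

print(servers_needed(25_000))  # → 25: budget grows in proportion to workload
```

Within the linear range, doubling the workload simply doubles the bill; past the limit, no amount of extra hardware keeps the ratio constant.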

A "scalable" design pushes the limit of that linearity very far, far enough that you don't need to worry about the workload ever getting beyond it. A scalable architecture is typically built on a repeatable basic infrastructure with minimal coupling, so that growth can be achieved simply by repeatedly buying the same set of basic hardware.

Traditionally, people design for scalability to make sure that their operation cost grows linearly with respect to the workload. They usually don't worry about "removing" resources, nor about whether the resources are fully utilized, because purchased resources are already a sunk cost. For now at least, "scalability" is not about managing fluctuating workload, and it has no time dimension. But I expect that mindset to change as people move to cloud computing, where operation cost depends on usage. There is a financial motivation to decommission resources once they are no longer used.

Many existing distributed applications have a very scalable architecture, but very few of them are designed with extreme elasticity in mind. They usually expect the underlying infrastructure to be relatively stable with a low churn rate (machine crashes, which are relatively infrequent, being the only cause of infrastructure changes), and are therefore willing to pay a higher degradation cost when churn does happen.

Definition of Elasticity

On the other hand, "elasticity" is the ability to instantly commission and decommission a large amount of resource capacity on the fly, and to be charged purely on actual resource usage. "Elasticity" is quantitatively measured by:

Speed of commissioning / decommissioning

Maximum amount of resources that can be brought in

Granularity of usage accounting
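The three measures above can be captured as a simple record. The field names and example values here are illustrative assumptions, not any provider's real figures:

```python
# A sketch of the three elasticity measures as a plain record.
# Field names and the example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ElasticityProfile:
    commission_seconds: float          # speed of commissioning new capacity
    decommission_seconds: float        # speed of releasing capacity
    max_instances: int                 # max amount of resources that can be brought in
    billing_granularity_seconds: int   # granularity of usage accounting

# e.g. a hypothetical provider: ~2-minute boot, per-hour billing
provider = ElasticityProfile(120.0, 30.0, 1000, 3600)
```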

It is important to understand that even though you get "elasticity" automatically when your application is deployed in the cloud, you DO NOT get "scalability" (which is a characteristic of your application architecture). A non-scalable in-house application deployed into the cloud won't magically become scalable. Instead, you will just receive a huge bill from your cloud provider at the end of the month. "Elasticity" is a double-edged sword: it lets bad things happen fast.

"Elasticity" is a pretty new feature in our industry, and we are still in the process of discovering how to fully utilize it. Most existing distributed applications are built with scalability, but not extreme elasticity, in mind. I think an ideal app should have both "scalability" and "elasticity" built in, but I haven't seen many yet. Therefore, although most existing applications can run in the cloud without modification, I don't think they have fully exploited the benefits of the cloud yet.

"Statefulness" and "elasticity" are fundamentally in conflict with each other. The data transfer rate puts a limit on the speed of commissioning/decommissioning, which is a measure of elasticity. That said, I believe a smart data distribution algorithm can reduce this friction by reducing the amount of data transferred during commissioning. This is an area of research that I'd love to see.
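One well-known technique in this direction is consistent hashing (my example here, not something discussed above): when a node is commissioned, only the keys falling on the new node's arc of the ring move, instead of nearly everything being rehashed. A minimal sketch:

```python
# A minimal consistent-hashing sketch (no virtual nodes): adding node "d"
# moves only the keys that "d" now owns; all other keys stay put, so the
# data transfer during commissioning is a fraction of the total data set.
import hashlib
from bisect import bisect

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self._points = sorted((_hash(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        hashes = [h for h, _ in self._points]
        i = bisect(hashes, _hash(key)) % len(self._points)  # clockwise successor
        return self._points[i][1]

keys = [f"key{i}" for i in range(1000)]
ring3 = Ring(["a", "b", "c"])
ring4 = Ring(["a", "b", "c", "d"])
before = {k: ring3.node_for(k) for k in keys}
after = {k: ring4.node_for(k) for k in keys}
moved = sum(before[k] != after[k] for k in keys)
print(f"{moved} of {len(keys)} keys moved")  # only keys now owned by "d" move
```

Compare this with naive modulo partitioning, where going from 3 to 4 machines remaps roughly three quarters of all keys.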

New Design Paradigm for Cloud Applications

In order to fully exploit the benefits of the cloud, a new set of design considerations is needed, with both scalability and elasticity in mind. Scalable design has been covered extensively elsewhere, so I won't repeat it here. Let's focus on "design for elasticity":

Make your application as stateless as possible (but don't just dump all the state somewhere else)

Use a smart data migration / balancing algorithm that minimizes data movement when the infrastructure grows and shrinks elastically. A typical way to do this is to use asynchronous data transfer so that your application can start serving immediately (even before the data transfer completes)

Use a smart scheduler that controls the growth and shrinkage of the environment. The scheduling algorithm takes the usage charge and the performance impact during the dust-settling period into consideration to decide whether a change to the environment should be initiated. Underneath, there is also a monitoring mechanism that collects the necessary metrics, performs the analytics, and feeds the results back to the scheduler so it can fine-tune its earlier policy decisions
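The scheduler in the last point can be sketched as a toy feedback loop. The thresholds, the cooldown (standing in for the dust-settling period), and the metric choice are all illustrative assumptions, not a real autoscaling API:

```python
# A toy feedback scheduler sketch: monitoring samples flow in, and the
# scheduler decides whether to commission or decommission a node.
# Thresholds and the cooldown length are illustrative assumptions.

class Scheduler:
    def __init__(self, min_nodes=1, max_nodes=20, cooldown_ticks=3):
        self.nodes = min_nodes
        self.min_nodes, self.max_nodes = min_nodes, max_nodes
        self.cooldown, self.cooldown_ticks = 0, cooldown_ticks

    def observe(self, cpu_utilization: float) -> int:
        """Feed back one monitoring sample; return the desired node count."""
        if self.cooldown > 0:            # let the dust settle after a change
            self.cooldown -= 1
            return self.nodes
        if cpu_utilization > 0.80 and self.nodes < self.max_nodes:
            self.nodes += 1              # commission: growth carries a usage charge
            self.cooldown = self.cooldown_ticks
        elif cpu_utilization < 0.30 and self.nodes > self.min_nodes:
            self.nodes -= 1              # decommission: stop paying for idle capacity
            self.cooldown = self.cooldown_ticks
        return self.nodes

s = Scheduler()
print([s.observe(u) for u in [0.9, 0.9, 0.9, 0.9, 0.9]])  # → [2, 2, 2, 2, 3]
```

Note how the cooldown prevents the scheduler from reacting to its own transient: it waits for the metrics to stabilize before acting on them again.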

Thursday, July 9, 2009

There has recently been a lot of discussion of the various NoSQL models, which relax data consistency for high scalability. There was a recent meetup in San Francisco about this.

Various models were discussed, from Memcached to distributed hashtables to column-oriented data stores, each with a corresponding data access model. Many of the NoSQL stores focus on scalability:

Partition data across lots of machines

Each machine operates independently (i.e. with minimal coordination between machines)
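The two points above can be sketched with the simplest possible partitioning scheme; the machine names are hypothetical:

```python
# A minimal hash-partitioning sketch: every key is owned by exactly one
# machine, so each machine can serve its own shard with no coordination.
import hashlib

MACHINES = ["node0", "node1", "node2"]  # hypothetical machine names

def owner(key: str) -> str:
    digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return MACHINES[digest % len(MACHINES)]

# routing is deterministic: the same key always lands on the same machine
print(owner("user:42") == owner("user:42"))  # → True
```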

In such a distributed environment, providing tight data integrity is a huge challenge, so some degree of consistency is lost (though exactly what is lost depends on the implementation). I seriously doubt that it is feasible to add data partitioning to a traditional RDBMS without changing its data integrity guarantees.

From a data access pattern perspective, Memcached represents one end of the spectrum, where access is ad hoc, touches a small subset of a large data set, and user-to-data collocation is completely unpredictable. This presents the biggest challenge to ACID guarantees. Of course, one could theoretically use 2PC or Paxos protocols, but no one does because they would be too slow. Whoever uses Memcached must therefore have pretty relaxed data consistency requirements.

At the other end of the spectrum, if we can "confine" the data access model, then we can build a highly specialized data layout, and with a careful execution plan we can even eliminate concurrency scenarios altogether. Under this approach, very high scalability can be achieved. Map/Reduce is a typical example of this approach. The challenge is that an extra algorithm translation step is required to rearrange the application into the confined MapReduce model.
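To make the confinement concrete, here is a minimal in-process sketch of the MapReduce shape using the usual word-count example (my illustration, not tied to any particular framework). Because access is confined to the map, shuffle, and reduce phases, a framework can parallelize each phase with no concurrent access to shared state:

```python
# A minimal in-process MapReduce sketch (word count). The confined access
# model means map tasks never share state, and each reduce group is
# processed independently.
from itertools import groupby

def map_phase(doc: str):
    for word in doc.split():
        yield (word, 1)

def reduce_phase(pairs):
    pairs = sorted(pairs)  # "shuffle": group identical keys together
    return {k: sum(v for _, v in g) for k, g in groupby(pairs, key=lambda p: p[0])}

docs = ["the cat", "the dog"]
pairs = [p for d in docs for p in map_phase(d)]
print(reduce_phase(pairs))  # → {'cat': 1, 'dog': 1, 'the': 2}
```

The translation step mentioned above is exactly the work of recasting an arbitrary algorithm into these two confined functions.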

Nevertheless, I don't think it is very meaningful to compare different DB models without looking at other dimensions, which this thread hasn't discussed.