Is there any fundamental difference in the way applications/services hosted on a cloud are to be tested vis-à-vis a traditional web-based application? It looks essentially the same to me, except for the way these applications manage large loads (elastic clouds) and high availability.

"Cloud" applications should be tested just as you would test any existing web application, with a few added test cases for the additional "cloud" features, if your application has them:

Dynamic scaling: You would want to test that the application can scale up and down dynamically, with no loss of data or end-user connectivity.
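As a rough sketch of that test pattern: the `ElasticService` class below is a hypothetical in-memory stand-in for your real deployment API, used only to show the shape of the assertion (write continuously, scale, then verify nothing was lost).

```python
class ElasticService:
    """Hypothetical in-memory stand-in for a scalable service.
    A real test would drive your actual scaling API instead."""
    def __init__(self):
        self.instances = 2
        self.data = {}

    def scale(self, n):
        # Real systems rebalance or migrate data here; that is
        # exactly the step this test is meant to exercise.
        self.instances = n

    def write(self, key, value):
        self.data[key] = value

    def read(self, key):
        return self.data.get(key)


def test_no_data_loss_across_scaling():
    svc = ElasticService()
    for i in range(100):
        svc.write(f"key{i}", i)
    svc.scale(8)   # scale up
    svc.scale(2)   # scale back down
    missing = [i for i in range(100) if svc.read(f"key{i}") != i]
    assert missing == [], f"lost writes during scaling: {missing}"


test_no_data_loss_across_scaling()
```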

Automated provisioning: For apps that provision new services automatically when a new user signs up, you would want to test this process, as well as the reverse when a user leaves the service.
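A minimal sketch of that pair of tests (the `Provisioner` class and its tenant resources are illustrative assumptions, not a real provisioning API): sign a user up, assert the resources exist, offboard them, and assert nothing is orphaned.

```python
class Provisioner:
    """Hypothetical stand-in for an automated provisioning system."""
    def __init__(self):
        self.tenants = {}

    def signup(self, user):
        # On signup, create the per-tenant resources.
        self.tenants[user] = {"db": f"db-{user}", "bucket": f"bucket-{user}"}

    def offboard(self, user):
        # On departure, tear everything down again.
        self.tenants.pop(user, None)


def test_provision_and_deprovision():
    p = Provisioner()
    p.signup("alice")
    assert "alice" in p.tenants            # resources created
    p.offboard("alice")
    assert "alice" not in p.tenants        # no orphaned resources


test_provision_and_deprovision()
```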

Device synchronisation: If the service is like Dropbox or iCloud, there may be device-to-device synchronisation issues that need to be tested, particularly recovery scenarios where a sync is disrupted, left incomplete, and needs to be restarted.
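The interrupted-sync case can be sketched like this (the `SyncSession` class is a hypothetical, checkpoint-based sync, not any real Dropbox/iCloud protocol): inject a failure mid-sync, restart, and assert the destination still converges to the source.

```python
class SyncSession:
    """Hypothetical resumable sync: keeps a cursor so a restarted
    session continues where the interrupted one stopped."""
    def __init__(self, source):
        self.source = source
        self.dest = {}
        self.cursor = 0

    def run(self, fail_after=None):
        items = sorted(self.source.items())
        for i, (key, value) in enumerate(items[self.cursor:], start=self.cursor):
            if fail_after is not None and i >= fail_after:
                raise ConnectionError("injected: sync interrupted")
            self.dest[key] = value
            self.cursor = i + 1


def test_interrupted_sync_resumes():
    src = {f"file{i}": i for i in range(10)}
    session = SyncSession(src)
    try:
        session.run(fail_after=4)          # fails partway through
    except ConnectionError:
        pass
    session.run()                          # restart: resumes, no skips/dupes
    assert session.dest == src


test_interrupted_sync_resumes()
```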

No, if you have created virtual machines on the cloud, moved your application onto those virtual machines, and placed them behind a load balancer. You might choose to check that the load balancer functions as expected, and to measure performance when you place your application under load, to ensure you have the correct settings in place on the load balancer.

Yes, if you have embraced cloud technologies and built your application to deliver high availability and scalability. Your application components will be distributed across machines, and you will find that testing a distributed system is best done by first testing each component in isolation through unit tests, and then running a smaller set of integration tests to prove that the deployed application works as intended.
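A small sketch of that split, with illustrative names (`Billing`, `Ledger` are assumptions, not from any real system): the component takes its dependency as a constructor argument, so the unit test can substitute an in-memory fake, while the integration test wires the real pieces together.

```python
class Ledger:
    """Real dependency: imagine this talks to a remote store."""
    def __init__(self):
        self.entries = []

    def record(self, entry):
        self.entries.append(entry)


class Billing:
    """Component under test; its dependency is injected."""
    def __init__(self, ledger):
        self.ledger = ledger

    def charge(self, user, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.ledger.record((user, amount))


class FakeLedger:
    """Unit-test double: records calls, does no I/O."""
    def __init__(self):
        self.entries = []

    def record(self, entry):
        self.entries.append(entry)


# Unit test: Billing in isolation, against the fake.
fake = FakeLedger()
Billing(fake).charge("alice", 10)
assert fake.entries == [("alice", 10)]

# Integration test: the real pieces wired together.
real = Ledger()
Billing(real).charge("bob", 5)
assert real.entries == [("bob", 5)]
```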

Assuming that a 'regular' web application is one that runs on bare metal or a virtualized infrastructure with tight SLAs (two nines and better) and doesn't require scale or elasticity, then yes, there is a big difference.

Cloud VMs have an MTBF measured in months, so all components that your application depends on, compute, network, and storage, are error-prone. Testing a cloud application thus becomes the problem of testing whether your application behaves properly when any of these components fails. Because responding intelligently to failures cuts so fundamentally across the application's functionality, cloud applications need to be tested differently.
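One common way to test that behaviour is fault injection: make a dependency fail deterministically and assert the application's recovery path works. A minimal sketch, with assumed names (`flaky`, `with_retries` are illustrations, not a library API):

```python
def flaky(fail_times):
    """Returns a callable that raises TimeoutError `fail_times` times
    before succeeding: a deterministic stand-in for an unreliable
    cloud dependency."""
    state = {"calls": 0}
    def call():
        if state["calls"] < fail_times:
            state["calls"] += 1
            raise TimeoutError("injected fault")
        return "ok"
    return call


def with_retries(fn, attempts=5):
    """The recovery behaviour under test: retry transient failures,
    re-raise once the budget is exhausted."""
    for i in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if i == attempts - 1:
                raise


# Survives transient faults:
assert with_retries(flaky(3)) == "ok"

# Gives up on persistent faults:
try:
    with_retries(flaky(10))
    assert False, "expected TimeoutError"
except TimeoutError:
    pass
```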

The second big difference introduced by the highly unreliable components (but highly reliable infrastructure: note that distinction) is that there needs to be automation to deploy these components. That automation is integral to the functionality of the application and can exhibit very complex 'convergence' behavior. You suddenly need to test for idempotency of configuration processes, or for forward progress toward convergence. Those are attributes that small fixed-asset applications never had to include in their operational QA.
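The idempotency test is simple to state: applying the same configuration twice must leave the system in the same state, with the second run a no-op. A toy sketch (the `converge` function is an illustrative stand-in for a real configuration-management step):

```python
def converge(state, desired):
    """One convergence step: change only what differs from the
    desired configuration, and report what was changed."""
    changed = []
    for key, value in desired.items():
        if state.get(key) != value:
            state[key] = value
            changed.append(key)
    return changed


def test_convergence_is_idempotent():
    state = {}
    desired = {"nginx": "installed", "port": 8080}
    first = converge(state, desired)
    second = converge(state, desired)
    assert first                 # first run did work
    assert second == []          # second run is a no-op
    assert state == desired      # and the system converged


test_convergence_is_idempotent()
```

The same pattern scales up: run your real provisioning tool twice against the same node and assert the second run reports zero changes.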

So, yes, there is a big difference in QA approach between cloud and non-cloud applications. And clouds can be private or public; it is the MTBF characteristics of the components, elasticity, and scale that make the difference. Scalability is different from scale, and scalability does introduce a new functional requirement, but testing for it is very different depending on whether you have 50 servers or 5000.