First, it indicates there's a fair amount of non-trivial work for your regular Java application developer. Second, it raises as many questions as it answers - in particular, if you have a lot of information, how do you distribute it across a grid, and how do you then integrate a transaction with backend databases and other stores?

Pat's statement of the problem is brilliant, but his solution would mean that application programmers end up doing lots of infrastructure work, which in my experience is a no-no. Surely the better answer is to productise this infrastructure functionality, so application developers have a simple sandbox and can quickly deliver business results.

- Memory and CPUs will become cheaper - more memory and more cores
- The bottleneck of access times to hard disks is going to get 10x worse, which will mean they are gradually phased out for live data
- Flash memory will take over mainstream applications for storage sizes > main memory. But how many writes can you get out of them...

4. Stanford's Case for RAMClouds. RAMClouds means 'all active data in memory rather than on disk'

By the time it was published, RAMClouds wasn't new ... but it does tie the previous paper into forward thinking about architecture, and gives theoretical reasoning as to why RAMClouds will be one of the new architectures.

Basically, applications will continue to get larger. A million on-line users isn't worth shouting about today. This is the case for thinking about application architectures that will survive the next 10 years - there are going to be loads of customers out there wanting information now.

The big thing developers have trouble getting their heads round is that in a scalable system, every failure event must be handled as part of the application. Most developers are used to letting ops worry about failure modes, and it's really hard to get this right in a large-scale distributed environment.
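To make that concrete, here is a minimal sketch of what "handling failure in the application" can look like: the caller retries an idempotent write itself, with exponential backoff, instead of assuming the infrastructure will recover for it. The `RemoteStore` interface and `TransientFailure` exception are illustrative names I've assumed, not any real grid API.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch only: application-level failure handling for a scalable system.
// RemoteStore and TransientFailure are assumed names, not a real library.
public class RetryingClient {

    interface RemoteStore {
        void put(String key, String value) throws TransientFailure;
    }

    static class TransientFailure extends Exception {}

    // Retry an idempotent write with exponential backoff plus jitter.
    // The write MUST be idempotent, otherwise retrying is unsafe.
    static void putWithRetry(RemoteStore store, String key, String value,
                             int maxAttempts) throws TransientFailure {
        long backoffMs = 50;
        for (int attempt = 1; ; attempt++) {
            try {
                store.put(key, value);
                return;                            // success
            } catch (TransientFailure e) {
                if (attempt == maxAttempts) throw e;   // surface to caller
                long jitter = ThreadLocalRandom.current().nextLong(backoffMs);
                try {
                    Thread.sleep(backoffMs + jitter);  // back off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
                backoffMs *= 2;
            }
        }
    }
}
```

The point is that the retry policy, and the idempotency it depends on, are application decisions - exactly the kind of code that doesn't exist when ops owns failure handling.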

8. How to distribute data for application programmers: partitioning and the entity group pattern. This answers the question, "how do I spread data across nodes for best performance but easy management?"
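A hedged sketch of the entity group idea: everything that must be updated in one transaction shares a root key, and the root key alone decides which node the group lives on, so an order and its order lines always land together. The class and key-format conventions below are my own illustration, not from the paper or any product.

```java
import java.util.List;

// Illustrative entity-group router: partition on the group's root key only,
// so a root entity and all its children co-locate on one node.
public class EntityGroupRouter {

    private final List<String> nodes;   // e.g. ["node-0", "node-1", "node-2"]

    public EntityGroupRouter(List<String> nodes) {
        this.nodes = nodes;
    }

    // Route by the entity-group root key, e.g. "order:42".
    public String route(String rootKey) {
        int bucket = Math.floorMod(rootKey.hashCode(), nodes.size());
        return nodes.get(bucket);
    }

    // A child key embeds its root, e.g. "order:42/line:7" (assumed format);
    // we partition on the root portion before the '/', so children follow
    // their root to the same node and can share a local transaction.
    public String routeChild(String childKey) {
        int slash = childKey.indexOf('/');
        String root = slash < 0 ? childKey : childKey.substring(0, slash);
        return route(root);
    }
}
```

Routing on the root key is what makes the management side easy: a group migrates or fails over as one unit, and cross-node transactions are only needed between groups, not within them.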

The thrust of NoSQL (or 'not only SQL') is: if you really want to get scalable data, you can't have SQL and ACID characteristics. And there are certainly beyond-SQL databases like BigTable that have highly specialised characteristics.

In CloudTran we provide transactionality for both SQL and NoSQL, coordinating in-memory data with eventual consistency at the data sources. Some SQL functionality, such as joins, has to be done by hand, but it's about 90% there.
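"Joins by hand" in practice means the application stitches two key-value collections together itself. This is a generic sketch under my own assumptions (plain `Map`s standing in for grid caches), not CloudTran's actual API:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of a hand-coded join over two key-value collections, roughly:
//   SELECT o.id, c.name FROM orders o JOIN customers c ON o.custId = c.id
// The Maps stand in for distributed caches; this is not a product API.
public class HandJoin {

    public static List<String> joinOrdersToCustomers(
            Map<String, String> customers,          // customerId -> name
            Map<String, String> orderToCustomer) {  // orderId -> customerId
        return orderToCustomer.entrySet().stream()
            .filter(e -> customers.containsKey(e.getValue()))  // inner join
            .map(e -> e.getKey() + ":" + customers.get(e.getValue()))
            .sorted()                                          // stable output
            .collect(Collectors.toList());
    }
}
```

It's more code than `JOIN ... ON`, but for the common equi-join on a foreign key the pattern is mechanical - which is roughly what "90% there" means in practice.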