In my previous post, I discussed the server-side implementation of lazy requests / Multi GET: the ability to submit several requests to the server in a single round trip. RavenDB has always supported performing multiple write operations in a single batch, and now we have the reverse: the ability to make several reads at once. (The natural progression, the ability to make several read/write operations in a single batch, will not be supported.)

As it turned out, this was actually pretty hard to do, and required some fairly deep refactoring of the way we execute our requests, but in the end it is here and it is working. Here are a few examples of how it looks from the client API point of view:
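A sketch of the two lazy calls discussed below, assuming the Lazily API this post describes; the document id, the User/Post classes, and the query filter are illustrative:

```csharp
// Register a lazy load and a lazy query.
// Neither call touches the server yet.
Lazy<User> lazyUser = session.Advanced.Lazily
    .Load<User>("users/1");

Lazy<IEnumerable<Post>> lazyPosts = session.Query<Post>()
    .Where(post => post.AuthorId == "users/1")
    .Lazily();
```

Note that loads go through `session.Advanced.Lazily`, while queries use the `.Lazily()` extension method on the query itself.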

Up until this point, nothing has actually been sent to the server. The results of those two calls are a Lazy&lt;User&gt; and a Lazy&lt;IEnumerable&lt;Post&gt;&gt;, respectively.

The reason that we are using Lazy&lt;T&gt; in this manner is that we want to make it very explicit when you are actually evaluating the lazy request. All the lazy requests will be sent to the server in a single round trip the first time that any of the lazy instances is evaluated, or you can force this to happen by calling:

session.Advanced.Eagerly.ExecuteAllPendingLazyOperations();
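To illustrate the first trigger, here is a hedged sketch of how evaluating one pending lazy value sends the entire batch; again, the identifiers are illustrative:

```csharp
var lazyUser = session.Advanced.Lazily.Load<User>("users/1");
var lazyPosts = session.Query<Post>().Lazily();

// Nothing has been sent yet. The first .Value evaluation
// sends both pending requests in a single round trip...
var user = lazyUser.Value;

// ...so this second evaluation is served from the results
// already fetched, with no additional remote call.
var posts = lazyPosts.Value;
```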

To increase the speed even more, on the server side we actually process all of those queries in parallel. So you get several factors that speed up your application:

Only a single remote call is made.

All of the queries are actually handled in parallel on the server side.

And the best part: you usually don't really have to think about it. If you use the Lazy API, it just works.

Comments

Is there a way from the client to enable/disable the parallel execution of batched lazy requests? I'm thinking about a classic "insert doc" -> "get latest 5 docs" flow: I would be able to use the lazy evaluation, eventually, but I would also like to maintain the order of the single requests.

I detest the fluent naming style very much. It does not even help with reading. Reminds me of the early attempts to make managers able to read code 30 years ago by using natural language syntax. Never works, never helps. Just goes against existing naming conventions.

Executing queries in parallel on the server... That might actually be slower than executing them in sequence, due to I/O. If, say, 4 queries are executed in parallel and they touch data in multiple places on disk, it will take the HDD several seeks to fulfill the parallel queries, as it has to step back and forth. This is slow. Doing the queries in sequence might actually be faster, as the HDD then doesn't have to seek that often.

I'm curious about why the two different techniques were chosen for the two scenarios in the examples given. Is it a technical limitation (inordinate difficulty of implementing Lazily for Load, maybe?) that prevents the .Lazily() from being the one API to rule them all? Would've aligned it better with PLINQ's .AsParallel(), AsOrdered(), etc. usage.

Walter,
The problem is that we tried, and it turns out that there aren't any queries that we can do this on using our architecture.
It would require us to pull together things that are currently separated.
You can look at what Raccoon Blog is doing to see how this works.