For the download side, you could get the response body as a stream using http_response::body(). Then you can read the bytes in whatever size chunks you like and compare against the Content-Length header, if it was included. Would this work?

The stream solution would be a temporary one, I think. What I would suggest is leveraging the PPL
asynchronous agents library by allowing you to get a progress data source from the request and a separate one from the response (you'd have to wait for the headers to come in
before hooking up the target).

Alternatively, to simplify it further (at the expense of generality), we could just allow you to hook a callback function object to the request/response.

Looking at this as a generic solution for all users of this library, I would vote for the alternative (callback) solution,
provided the library offers it as given below or, more precisely, the way other libraries provide it.

(I have used other HTTP client libraries, such as libcurl, in previous projects.)

So it should remain the library's job to provide WebDAV GET/PUT progress info. Most libraries provide a callback with parameters like the following:

Upload: Total bytes to upload, total bytes done so far.

The callback fires for each chunk and provides the above info.

Similarly for download.

Download: Total bytes to download, total bytes downloaded so far.

I am using the C++ REST SDK as well as BackgroundUploader and BackgroundDownloader (WinRT classes) in my metro app. I am able to get progress info from the WinRT classes, though I am unable to get it from Casablanca.

A single callback, attached to the request object (which means you can't use the convenience versions of http_client::request()), which takes a function object with the following signature:

void progress_handler(bool up, size_t processed_so_far)
{
...
}

Since there may not be a known content length, it would be the application's responsibility to understand how much of the overall body the 'so-far' amount represents. The callback, if set, would be called at least once, even for zero-sized bodies. The only
situation in which it would not get called is when an exception (not an HTTP error message, a C++ exception) is generated during processing -- a network error, for example.

1) Yes.
2) The library usually knows less than the application about the size of the content, which is why I think it makes sense not to have the library guess what percentage the number represents. I don't know about WebDAV, but in general HTTP terms, the Content-Length
header is not guaranteed to be there. If it is, then that is how you would get the total size. Streaming scenarios, for example, don't usually have a length. In that case, there's nothing either the application or the library can do.

It would definitely be the raw number of bytes loaded (up or down) so far, not the % (that was my point before -- the library won't know any better than the application).

Sure, the size could be uint64_t or something like that, instead. Of course, if you're uploading or downloading something bigger than 4 GB, you may want to consider using something a bit more sophisticated than a simple HTTP request (something that can restart where
it left off, for example).

No, unfortunately, I can't promise that. The next release is imminent; the bits have been cut and we're just waiting for a few formalities before posting it. This change is straightforward, though, and I already have it prototyped, so it should be in
the release after that.

Going forward, it is our intent (let's see if we can deliver on that intent) to release updates much more frequently. Lately, we have been working on getting the Casablanca functionality into Visual Studio, so we've been busy with that, but we should be able
to get bits out on CodePlex more often now.

And while downloading, how does it work:
does the response fetch the entire object, after which we can read from it in chunks?
Or is it also possible for the response itself to be just a small chunk of the object being downloaded?

The project I am working on right now needs to download very large objects (maybe even terabytes) in chunks of 16 MB. So what I am looking for is to have the response itself be 16 MB in size. Is that possible with Casablanca?

Yes, the progress handler feature is available in the latest release. Take a look at http_request::set_progress_handler. One thing to keep in mind: this feature only notifies you about the chunks that have been uploaded/downloaded. In your case you need
the actual data chunks of the response, which was already possible without this feature.

It sounds like you have two options. One is to make a basic GET request and read the response in 16 MB chunks as it is streamed from the server. This can be done using the http_response::body() method which returns a stream to the underlying HTTP response body.
A second option, instead of trying to get the whole object at once, is to request individual chunks of it at a time. This can be done using the
Range HTTP request header. You could make requests in 16 MB chunks as needed. If the data you are retrieving is truly large, using the Range header can be much more reliable, especially in the presence of connection issues.