Concurrent Requests with Futures:

You can also make concurrent requests easily:
(see "Advanced Concurrent HTTP Requests -- Embrace the Future" for details)

```ruby
a = [client.get('godfat'), client.get('cardinalblue')]
puts "It's not blocking... but doing concurrent requests underneath"
p a.map{ |r| r['name'] } # here we want the values, so it blocks here
puts "DONE"
```

Exception Handling for Futures:

Note that since the API call only blocks when you're looking at the
response, it won't raise any exception at the time the API is called.
So if you want to block and handle the exception at the time the API is
called, you can do something like this:

```ruby
begin
  response = client.get('bad-user').tap{} # .tap{} is the point
  do_the_work(response)
rescue => e
  puts "Got an exception: #{e}"
end
```

The trick here is forcing the future to immediately give you the exact
response, so that rest-core can see the response and raise the exception.
You can call any method on the future to force this behaviour, but since
tap{} is a method from Kernel (which is included in Object), it's always
available and returns the original value, so it's the easiest method to
remember and use.

If you know the response must be a string, then you can also use to_s.
Any method works; for example, here we call class:

```ruby
begin
  response = client.get('bad-user')
  response.class # simply force it to load
  do_the_work(response)
rescue => e
  puts "Got an exception: #{e}"
end
```

The point is simply making a method call to force the future to load;
whatever method should work.

Concurrent Requests with Callbacks:

On the other hand, callback mode is also available:

```ruby
client.get('godfat'){ |v| p v }
puts "It's not blocking... but doing concurrent requests underneath"
client.wait # we block here to wait for the request to finish
puts "DONE"
```

Exception Handling for Callbacks:

What about exception handling in callback mode? Since we cannot raise an
exception inside a callback, rest-core passes the exception object to your
callback instead. You can handle the exception like this:

```ruby
client.get('bad-user') do |response|
  if response.kind_of?(Exception)
    puts "Got an exception: #{response}"
  else
    do_the_work(response)
  end
end
puts "It's not blocking... but doing concurrent requests underneath"
client.wait # we block here to wait for the request to finish
puts "DONE"
```

Thread Pool / Connection Pool

Underneath, rest-core spawns a thread for each request, freeing you from
blocking. However, occasionally we might not want this behaviour, given
that resources are limited and we cannot always maximize concurrency.

For example, maybe we cannot afford so many threads running concurrently,
or the target server cannot accept so many concurrent connections. In those
cases, we would want to limit the number of concurrent threads or
connections.

```ruby
YourClient.pool_size      = 10
YourClient.pool_idle_time = 60
```

This sets the thread pool size to 10, meaning at most 10 threads run
together, growing on demand as requests come in. Each thread that has been
idle for more than 60 seconds is shut down automatically.

Note that pool_size should be at least larger than 4; otherwise deadlocks
become very likely if you're using nested callbacks with a large number of
concurrent calls.

Also, setting pool_size to -1 means we want to make blocking requests,
without spawning any threads. This might be useful for debugging.
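For instance, a debugging session could disable the pool entirely. This is only a sketch: YourClient stands for any client class built with RC::Builder as shown elsewhere in this document.

```ruby
# Hypothetical debugging setup: -1 disables thread spawning entirely,
# so every request blocks the calling thread until it completes.
YourClient.pool_size = -1

client = YourClient.new
client.get('godfat') # blocks; no background thread is involved
```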

Graceful Shutdown

To shut down gracefully, consider shutting down the thread pool (if we're
using one) and waiting for all requests for a given client. For example,
suppose we're using RC::Universal; we'll do this when shutting down:

RC::Universal.shutdown

We could put this in an at_exit callback like this:

```ruby
at_exit do
  RC::Universal.shutdown
end
```

If you're using unicorn, you probably want to put that in the config.
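As a sketch of what that could look like (assuming a standard unicorn configuration file; the hook below uses unicorn's after_fork so each worker registers its own shutdown handler, but treat the exact placement as an assumption):

```ruby
# config/unicorn.rb (hypothetical path)
after_fork do |server, worker|
  # register the graceful shutdown in every worker process
  at_exit{ RC::Universal.shutdown }
end
```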

Random Asynchronous Tasks

Occasionally we might want to run some asynchronous tasks that take
advantage of the concurrency facilities inside rest-core, for example,
wait and shutdown. You can do this with defer on a particular client.
Again, take RC::Universal as an example:
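The defer example itself appears to be missing here; the following is a minimal sketch, assuming defer takes a block and runs it on the client's thread pool:

```ruby
require 'rest-core'

client = RC::Universal.new
client.defer do
  # any non-HTTP work we want to run concurrently
  sleep(1)
  puts "Slept for 1 second"
end
puts "It's not blocking..."
client.wait # block until the deferred task is done
puts "DONE"
```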

Note that since the socket is put inside RC::RESPONSE_SOCKET instead of
RC::RESPONSE_BODY, not all middleware can handle the socket. In the case
of hijacking, RC::RESPONSE_BODY is always mapped to an empty string, as it
does not make sense to store the response in that case.

SSE (Server-Sent Events)

Not only JavaScript can receive server-sent events; any language can.
Doing so establishes a keep-alive connection to the server, over which
data is received periodically. We'll take Firebase as an example:

```ruby
require 'rest-core'

# Streaming over 'users/tom.json'
cl = RC::Universal.new(:site => 'https://SampleChat.firebaseIO-demo.com/')
es = cl.event_source('users/tom.json', {}, # this is the query, none here
                     :headers => {'Accept' => 'text/event-stream'})

@reconnect = true

es.onopen{ |sock| p sock }                        # Called when connected
es.onmessage{ |event, data, sock| p event, data } # Called for each message
es.onerror{ |error, sock| p error }               # Called whenever there's an error

# Extra: If we return true in the onreconnect callback, it would
# automatically reconnect for us if disconnected.
es.onreconnect{ |error, sock| p error; @reconnect }

# Start making the request
es.start

# Try to close the connection and see it reconnect automatically
es.close

# Update users/tom.json
p cl.put('users/tom.json', RC::Json.encode(:some => 'data'))
p cl.post('users/tom.json', RC::Json.encode(:some => 'other'))
p cl.get('users/tom.json')
p cl.delete('users/tom.json')

# Tell onreconnect to stop reconnecting; otherwise, even if we close
# the connection manually, it would still try to reconnect.
@reconnect = false

# Close the connection to gracefully shut it down
es.close
```

Those callbacks are called in a separate background thread, so we don't
have to worry about them blocking. If we want to wait for the connection
to be closed, we can call wait:

```ruby
es.wait # This would block until the connection is closed
```

More Control with request_full:

You can also use request_full to retrieve everything, including the
response status, the response headers, and other rest-core options. But
since using this interface is like using Rack directly, you have to build
the env yourself. To help you build the env, everything has a default,
including the path.

```ruby
client.request_full({})[RC::RESPONSE_BODY] # {"message"=>"Not Found"}
# This would print something like this:
# RestCore: spent 1.135713 Requested GET https://api.github.com/users/

client.request_full(RC::REQUEST_PATH => 'godfat')[RC::RESPONSE_STATUS]

# Headers are normalized with all upper cases and
# dashes are replaced by underscores.
client.request_full(RC::REQUEST_PATH => 'godfat')[RC::RESPONSE_HEADERS]

# To make a POST (or any other request method) request:
client.request_full(RC::REQUEST_PATH   => 'godfat',
                    RC::REQUEST_METHOD => :post)[RC::RESPONSE_STATUS] # 404
```

List of built-in Middleware:

where cache is the cache store the cached data would be stored to.
expires_in would be passed as
cache.store(key, value, :expires_in => expires_in) if the store method is
available and its arity is at least 3. The interface to the cache could be
referenced from moneta, namely: [](key), []=(key, value), and optionally
store(key, value, options).

Note that {:expires_in => seconds} would be passed as the options in
store(key, value, options). A plain old Ruby Hash {} satisfies the
mandatory requirements, [](key) and []=(key, value), but not the last one
for :expires_in, because Hash#store does not take a third argument. That
means we could use {} as the cache, but it would simply ignore
:expires_in.
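To illustrate the interface described above, here's a minimal, hypothetical in-memory store. SimpleCache is not part of rest-core or moneta; it's only a sketch of the three methods the cache is expected to respond to.

```ruby
# A minimal cache store: [](key) and []=(key, value) are the mandatory
# requirements; store(key, value, options) with arity 3 additionally
# honours {:expires_in => seconds}.
class SimpleCache
  def initialize
    @data   = {}
    @expiry = {}
  end

  def [](key)
    # Drop the entry lazily if its expiry time has passed
    if (deadline = @expiry[key]) && Time.now > deadline
      @data.delete(key)
      @expiry.delete(key)
    end
    @data[key]
  end

  def []=(key, value)
    @data[key] = value
  end

  def store(key, value, options = {})
    @expiry[key] = Time.now + options[:expires_in] if options[:expires_in]
    @data[key] = value
  end
end
```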

Advanced Concurrent HTTP Requests -- Embrace the Future

The Interface

There are a number of different ways to make concurrent requests in
rest-core. They can be roughly categorized into two forms. One uses the
well-known callbacks, while the other uses a technique called futures.
Basically, a future is a promise which would eventually become the real
value (here, the response) you were asking for, whenever you actually
want it. Until the value is evaluated, the program keeps running, and it
blocks only if the computation (the response) hasn't been done yet. If the
computation is already done, it simply returns the result.

Here's a very simple example for using futures:

```ruby
require 'rest-core'

YourClient = RC::Builder.client do
  use RC::DefaultSite , 'https://api.github.com/users/'
  use RC::JsonResponse, true
  use RC::CommonLogger, method(:puts)
end

client = YourClient.new
puts "httpclient with threads doing concurrent requests"
a = [client.get('godfat'), client.get('cardinalblue')]
puts "It's not blocking... but doing concurrent requests underneath"
p a.map{ |r| r['name'] } # here we want the values, so it blocks here
puts "DONE"
```

And here's the corresponding version using callbacks:

```ruby
require 'rest-core'

YourClient = RC::Builder.client do
  use RC::DefaultSite , 'https://api.github.com/users/'
  use RC::JsonResponse, true
  use RC::CommonLogger, method(:puts)
end

client = YourClient.new
puts "httpclient with threads doing concurrent requests"
client.get('godfat'){ |v| p v['name'] }.
       get('cardinalblue'){ |v| p v['name'] }
puts "It's not blocking... but doing concurrent requests underneath"
client.wait # until all requests are done
puts "DONE"
```

You can pick whatever works for you.

A full runnable example is at: example/simple.rb. If you want to know
all the possible use cases, you can also see: example/use-cases.rb. It
also serves as a test for each possible combination, so it's quite
complex and complete.

Configure the underlying HTTP engine

Occasionally we might want to configure the underlying HTTP engine, which
for now is httpclient. For example, we might not want to decompress gzip
automatically (rest-core configures httpclient to request and decompress
gzip automatically), or we might want to skip verifying SSL in some
situations (e.g. making requests against a self-signed testing server).

In such cases, we can use the config_engine option to configure the
underlying engine. This can be set per request, per client instance, or
per client class. Please refer to:
How We Pick the Default Value,
except that there's no middleware for config_engine.
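As a sketch of the two scenarios mentioned above, assuming config_engine accepts a lambda which receives the underlying HTTPClient instance (the two engine setters below follow httpclient's API, but treat the exact shape of the lambda as an assumption):

```ruby
require 'rest-core'
require 'openssl'

client = RC::Universal.new(:config_engine => lambda{ |engine|
  # keep the gzip response body as-is instead of decompressing it
  engine.transparent_gzip_decompression = false
  # skip SSL verification -- only for self-signed testing servers!
  engine.ssl_config.verify_mode = OpenSSL::SSL::VERIFY_NONE
})
```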

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.