
So far, all the RPC methods implemented here have been "fast": their execution has not depended on access to slow services or external resources. In real life, however, it is quite likely that some services cannot respond immediately, but have to wait for an external service to complete before the method call can finish.

The GLib wrappers provide an asynchronous way of making method calls: the call is launched (almost) immediately, and a callback is executed when the method call completes (either with a return value or an error).

Using the asynchronous wrappers is important when the program needs to update some kind of status or be reactive to the user (via a GUI or other interface). Otherwise, the program would block waiting for the RPC method to return, and would not be able to refresh the GUI or screen when required. An alternative solution is to use separate threads that run the synchronous methods. However, if this solution is used, synchronization between threads can become an issue, and debugging threaded programs is much harder than debugging single-threaded ones. Furthermore, the implementation of threads might be suboptimal in some environments. For these reasons, the thread scenario is not covered in this guide.

Slow-running RPC methods are simulated here by adding a delay into the server method implementations, to make it clear why asynchronous RPC mechanisms are important. Since signals are by nature asynchronous, they would not add anything to this example. To simplify the code listings, signal support is dropped from the asynchronous clients, even though the server still contains and emits the signals.

The only change on the server side is the addition of delays into each of the RPC methods (setvalue1, setvalue2, getvalue1 and getvalue2). This delay is added to the start of each function as follows: glib-dbus-async/server.c

/* How many microseconds to delay between each client operation. */
#define SERVER_DELAY_USEC (5*1000000UL)

/* ... Listing cut for brevity ... */

gboolean value_object_setvalue1(ValueObject* obj, gint valueIn,
                                GError** error) {

  dbg("Called (valueIn=%d)", valueIn);

  g_assert(obj != NULL);

  dbg("Delaying operation");
  g_usleep(SERVER_DELAY_USEC);

  /* Compare the current value against the old one. If they are the
     same, we do not need to do anything (except return success). */
  if (obj->value1 != valueIn) {
    /* ... Listing cut for brevity ... */

Building the server is done as before, but the delay becomes noticeable when an RPC method is called.

To measure the delay, the time shell built-in command can be used. It runs the given command while measuring the wall-clock time (the "real" time) as well as the CPU time spent in user code and system calls. In this case, only the real time is of interest: the method call is delayed for about 5 seconds, as it should be. The delay (even though specified with microsecond resolution) is always approximate and longer than the requested amount; the exact delay depends on many factors, most of which cannot be directly influenced.
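For reference, time can be tried on any command. The snippet below uses sleep as a stand-in for the delayed client, since the actual client binary name depends on the build; this is only an illustration of the measuring technique:

```shell
# 'sleep 2' stands in for a client whose RPC call blocks for a while.
# Only the "real" (wall clock) figure is of interest here; it comes out
# slightly longer than the 2 seconds requested.
time sleep 2
```

Run against the real client, the real figure should come out at a little over 5 seconds, matching SERVER_DELAY_USEC.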

The next experiment deals with a likely scenario, where another method call arrives while the first one is still being executed. This is easiest to test by repeating the sending command twice, but running the first one in the background (so that the shell does not wait for it to complete first). The server is still running in the background from the previous test.

The result is that the first client is delayed for about 5 seconds, while the second client (launched shortly after the first) is delayed for considerably longer. This is to be expected, as the server can only process one request at a time and delays each request by 5 seconds.

The clients must be able to continue their "normal work" while they wait for the response from the server. Since this is just example code, the clients' "normal work" is waiting for the response while blocking on incoming events (converted into callbacks). If the example programs were graphical, however, the asynchronous approach would make it possible for them to react to user input while a call is in progress. D-Bus itself does not support cancelling a method call once processing has started on the server side, so adding cancellation support would require a separate method call to the server. Since the server only handles one operation at a time, the current server cannot support method call cancellation at all.

When dbus-binding-tool is run, it already generates the necessary wrapper stubs to support launching asynchronous method calls. What is left to do is to implement the callback functions correctly, process the returned errors and launch the method calls. glib-dbus-async/client-stubs.c

/* Pull in the client stubs that were generated with
   dbus-binding-tool */
#include "value-client-stub.h"

The client has been simplified, so that it now only operates on value1. The callback that is called from the stub code is presented next: glib-dbus-async/client-stubs.c

/**
 * This function is called when the async setvalue1 either
 * completes, times out or fails (our server, however, does not signal
 * errors, but the client D-Bus library can do that). When this example
 * program is left running for a while, you can see all three cases.
 *
 * The prototype must match the one generated by the dbus-binding-tool
 * (org_maemo_Value_setvalue1_reply).
 *
 * Because there is no return value from the RPC, the only useful
 * parameter that we get is the error object, which we check.
 * If error is NULL, that means no error. Otherwise the RPC call
 * failed and we should check what the cause was.
 */
static void setValue1Completed(DBusGProxy* proxy, GError *error,
                               gpointer userData) {

  g_print(PROGNAME ":%s:setValue1Completed\n", timestamp());

  if (error != NULL) {
    g_printerr(PROGNAME " ERROR: %s\n", error->message);
    /* We need to release the error object because the stub code does
       not do it automatically. */
    g_error_free(error);
  } else {
    g_print(PROGNAME " SUCCESS\n");
  }
}

Because the method call does not return any data, the callback takes only the minimum set of parameters (these three are always passed). Errors must be handled within the callback, because they may arrive late from the server and are not visible at launch time.

Note that the callback does not terminate the program on errors. This is intentional, in order to demonstrate some common asynchronous problems below. The timestamp function is a small utility that returns a pointer to a string representing the number of seconds since the program started (useful for visualizing the order of the asynchronous events below). glib-dbus-async/client-stubs.c

/**
 * This function is called repeatedly from within the mainloop
 * timer launch code.
 *
 * It launches an asynchronous RPC method to set value1 with an ever
 * increasing argument.
 */
static gboolean timerCallback(DBusGProxy* remoteobj) {

  /* Local value that we will keep sending to the remote object. */
  static gint localValue1 = -80;

  /* Start the RPC.
     This is done by calling the stub function that takes the new
     value and the callback function to call when the reply gets back.
     The stub returns a DBusGProxyCall object, but we do not need it,
     so we ignore the return value. The return value could be used
     to cancel a pending request (from the client side) with
     dbus_g_proxy_cancel_call. We could also pass a pointer to
     user-data (last parameter), but we do not need one in this
     example. It is normally used to "carry around" the application
     state. */
  g_print(PROGNAME ":%s:timerCallback launching setvalue1\n",
          timestamp());
  org_maemo_Value_setvalue1_async(remoteobj, localValue1,
                                  setValue1Completed, NULL);
  g_print(PROGNAME ":%s:timerCallback setvalue1 launched\n",
          timestamp());

  /* Step the local value forward. */
  localValue1 += 10;

  /* Repeat the timer later. */
  return TRUE;
}

Using the stub code is rather simple. For each generated synchronous method wrapper, there is also an _async version of the call. The main differences in the parameters are the removal of the GError pointer (because errors are handled in the callback) and the addition of the callback function to invoke when the method completes, times out or fails.

The main function remains the same from the previous client examples (a once-per-second timer is created and run from the mainloop until the program is terminated).

What happens here is rather subtle. The timer callback in the client fires once per second and launches the RPC method. The server, however, still delays each method call by 5 seconds. The successive launches go on without any responses for a while; the first response comes back about 6 seconds after the client starts, at which point the server already has four other outstanding method calls that it has not handled. The method calls slowly accumulate at the server end, and it cannot deal with them quickly enough to satisfy the client.

After about 30 seconds, the setValue1Completed callback is invoked with a failed method call: the method call timeout mechanism has been triggered. From this point on, all the method calls that have accumulated at the server (in its message queue) fail in the client, since they all now return late, even though the server does eventually handle them.

Once the client is terminated, the server still happily continues serving the requests, oblivious to the fact that there is no longer a client to process the responses.

The above test demonstrates quite brutally that services need to be designed properly, with a clearly defined protocol for what to do when a method call is delayed. It is also advisable to design a notification protocol to tell clients that something has completed, instead of forcing them to time out. Using D-Bus signals is one way: for example, a signal can be emitted when a long operation finishes (assuming this has been documented as part of the service description). Signals must not, however, be generated when nothing is listening to them.

One partial fix is for the client to ensure that only one method call to a given service is outstanding at any time. Instead of blindly launching RPC methods, the client should refrain from launching a new one as long as it has not received a response to the previous call (and the call has not timed out).

However, this fix is not complete, because the same problem manifests itself once there are multiple clients running in parallel and requesting the same methods. The proper fix is to make the server capable of serving multiple requests in parallel.

Sometimes the interface XML is missing, so dbus-binding-tool cannot be run to generate the stub code. The GLib wrappers are generic enough that the method calls can be built by hand when necessary.

Starting from some known generated stub code, to see which parts can be reused with modifications, is usually the easiest approach. This is done in the last step of this example, in order to produce a version of the asynchronous client that works without the stub generator.

What is notable in the generated stub code is that the _async method creates a small temporary structure holding a pointer to the callback function and a copy of the user-data pointer. This small structure is then passed to dbus_g_proxy_begin_call, along with the address of the generated callback wrapper function (org_maemo_Value_setvalue1_async_callback). The GLib async launcher also takes a pointer to a function to be used when the supplied "user-data" (in this case, the small structure) needs to be disposed of after the call. Because the stub allocates the structure with g_new, it passes g_free as the freeing function. Next comes the argument specification for the method call, which obeys the same rules as the LibOSSO ones.

On RPC completion, the generated callback is invoked, receiving the real callback function pointer and the user-data via its "user-data" parameter. First it collects the result of the call with dbus_g_proxy_end_call, then unpacks the data and invokes the real callback. After the callback returns, the GLib wrapper code (which called the generated callback) calls g_free to release the small structure, and the whole RPC launch ends.

The next step is to re-implement pretty much the same logic, but without the small structure, because the callback is implemented directly rather than through a wrapper callback (this also saves one memory allocation and one free).

/**
 * This function is called when the async setvalue1 either
 * completes, times out or fails (same as before). The main difference
 * when using the GLib/D-Bus wrappers directly is that we need to
 * "collect" the return value (or error). This is done with the
 * _end_call function.
 *
 * Note that all callbacks that are registered for RPC async
 * notifications using dbus_g_proxy_begin_call must follow the
 * DBusGProxyCallNotify prototype.
 */
static void setValue1Completed(DBusGProxy* proxy,
                               DBusGProxyCall* call,
                               gpointer userData) {

  /* This will hold the GError object (if any). */
  GError* error = NULL;

  g_print(PROGNAME ":setValue1Completed\n");

  /* We next need to collect the results from the RPC call.
     The function returns FALSE on errors (which we check), although
     we could also check whether the error pointer is still NULL. */
  if (!dbus_g_proxy_end_call(proxy,
                             /* The call that we are collecting. */
                             call,
                             /* Where to store the error (if any). */
                             &error,
                             /* Next we list the GType codes for all
                                the arguments we expect back. In our
                                case there are none, so set to
                                invalid. */
                             G_TYPE_INVALID)) {
    /* Some error occurred while collecting the result. */
    g_printerr(PROGNAME " ERROR: %s\n", error->message);
    g_error_free(error);
  } else {
    g_print(PROGNAME " SUCCESS\n");
  }
}

The generated stub code is no longer needed, so the dependency rules for the stubless GLib version are different: glib-dbus-async/Makefile
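The Makefile listing itself is omitted here, but the relevant dependency change might look roughly like the following (target and file names are assumptions; the Makefile in the example sources is authoritative):

```makefile
# client-glib uses the GLib/D-Bus wrappers directly and does not depend
# on the generated value-client-stub.h, unlike client-stubs.
client-glib: client-glib.o
	$(CC) $(LDFLAGS) -o $@ $^ $(shell pkg-config --libs dbus-glib-1)

client-glib.o: client-glib.c
	$(CC) $(CFLAGS) $(shell pkg-config --cflags dbus-glib-1) -c -o $@ $<
```

The key point is simply that the stub-generation rule and the stub header dependency drop out of the stubless target.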

Since the program logic has not changed from the previous version, testing client-glib is not presented here (it can of course be tested if so desired, as the source code contains the fully working program). Like the stub version, this client launches method calls without waiting for previous ones to complete.