Threading Models

Methods marked as oneway do not block. For methods not marked as
oneway, a client's method call will block until the server has
completed execution or called a synchronous callback (whichever comes first).
Server method implementations may call at most one synchronous callback; extra
callback calls are discarded and logged as errors. If a method is supposed to
return values via callback and does not call its callback, this is logged as an
error and reported as a transport error to the client.
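The at-most-one-callback rule can be modeled with a small guard object. This is an illustrative sketch only; CallbackOnce is a hypothetical name, not part of libhidl, and a real transport would log rather than merely count discarded calls:

```cpp
#include <functional>
#include <utility>

// Hypothetical guard: forwards only the first invocation of a synchronous
// callback; extra invocations are discarded and counted, mirroring the
// "extra callback calls are discarded and logged as errors" rule.
class CallbackOnce {
  public:
    explicit CallbackOnce(std::function<void(int)> cb) : cb_(std::move(cb)) {}

    void operator()(int value) {
        if (called_) {
            ++dropped_;  // a real transport would log an error here
            return;
        }
        called_ = true;
        cb_(value);
    }

    bool wasCalled() const { return called_; }
    int droppedCalls() const { return dropped_; }

  private:
    std::function<void(int)> cb_;
    bool called_ = false;
    int dropped_ = 0;
};
```

A server method implementation wrapped this way delivers its result exactly once; a second call through the wrapper is silently dropped.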

Threads in passthrough mode

In passthrough mode, most calls are synchronous. However, to preserve the
intended behavior that oneway calls do not block the client, a
thread is created for each process. For details, see the
HIDL overview.

Threads in binderized HALs

To serve incoming RPC calls (including asynchronous callbacks from HALs to
HAL users) and death notifications, a threadpool is associated with each process
that uses HIDL. If a single process implements multiple HIDL interfaces and/or
death notification handlers, its threadpool is shared between all of them. When
a process receives an incoming method call from a client, it picks a free thread
from the threadpool and executes the call on that thread. If no free thread is
available, the call blocks until one becomes free.

If the server has only one thread, then calls into the server are completed
in order. A server with more than one thread may complete calls out of order
even if the client has only one thread. As oneway calls do not
block the client, multiple oneway calls may be processed
simultaneously or out of order by a server with more than one thread, and
oneway calls may be processed concurrently with a subsequent
blocking call.

Server threading model

Except for passthrough mode, server implementations of HIDL interfaces live
in a different process than the client and need one or more threads waiting for
incoming method calls. These threads are the server's threadpool; the server may
decide how many threads it wants running in its threadpool, and can use a
threadpool size of one to serialize all calls on its interfaces. If the server
has more than one thread in the threadpool, it can receive concurrent incoming
calls on any of its interfaces (in C++, this means that shared data must be
carefully locked).
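The locking caveat can be sketched in standard C++. ServerState and handleCall below are hypothetical stand-ins for real HAL state and method handlers; the point is that with more than one pool thread, every access to shared data must hold a lock:

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Stand-in for server state shared across threadpool threads. With more
// than one thread in the pool, two incoming calls can touch this state
// simultaneously, so every access must hold the mutex.
struct ServerState {
    std::mutex lock;
    long counter = 0;
};

// Simulates one incoming method call executing on a pool thread.
void handleCall(ServerState& state, int iterations) {
    for (int i = 0; i < iterations; ++i) {
        std::lock_guard<std::mutex> guard(state.lock);
        ++state.counter;  // safe: mutated only under the lock
    }
}

// Runs `threads` concurrent "calls" and returns the final counter value.
long runConcurrentCalls(int threads, int iterations) {
    ServerState state;
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back(handleCall, std::ref(state), iterations);
    for (auto& th : pool) th.join();
    return state.counter;
}
```

Without the lock_guard, the increments would race and the final count would be unpredictable.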

Oneway calls into the same interface are serialized. If a multi-threaded
client calls method1 and method2 on interface
IFoo, and method3 on interface IBar,
method1 and method2 will always be serialized, but
method3 may run in parallel with method1 and
method2.
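The per-interface serialization guarantee can be modeled in plain C++ (this is not the actual transport code) as one queue and one worker thread per interface: calls posted to the same interface run in submission order, while a different interface's worker proceeds independently:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Toy model of per-interface serialization: each interface owns a queue and
// a single worker thread, so its oneway calls execute in submission order,
// while calls on a different interface proceed in parallel.
class InterfaceExecutor {
  public:
    InterfaceExecutor() : worker_([this] { run(); }) {}
    ~InterfaceExecutor() {
        { std::lock_guard<std::mutex> g(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();  // drains any queued calls before exiting
    }
    void post(std::function<void()> task) {
        { std::lock_guard<std::mutex> g(m_); q_.push(std::move(task)); }
        cv_.notify_one();
    }

  private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> g(m_);
                cv_.wait(g, [this] { return done_ || !q_.empty(); });
                if (q_.empty()) return;  // done and fully drained
                task = std::move(q_.front());
                q_.pop();
            }
            task();  // runs outside the lock, but strictly in queue order
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool done_ = false;
    std::thread worker_;
};
```

In this model, method1 and method2 posted to an IFoo executor always complete in order, while an IBar executor may run its calls at the same time.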

A single client thread of execution can cause concurrent execution on a
server with multiple threads in two ways:

oneway calls do not block. If a oneway call is
executed and then a non-oneway method is called, the server may
execute the oneway call and the non-oneway call
simultaneously.

Server methods that pass data back with synchronous callbacks can unblock
the client as soon as the callback is called from the server.

For the second way, any code in the server function that executes after the
callback is called may execute concurrently with the server handling subsequent
calls from the client. This includes code in the server function and automatic
destructors that execute at the end of the function. If the server has more than
one thread in its threadpool, concurrency issues arise even if calls are coming
in from only one single client thread. (If any HAL served by a process needs
multiple threads, all HALs will have multiple threads because the threadpool is
shared per-process.)

As soon as the server calls the provided callback, the transport can call the
implemented callback on the client and unblock the client. The client proceeds
in parallel with whatever the server implementation does after it calls the
callback (which may include running destructors). Code in the server function
after the callback is no longer blocking the client (as long as the server
threadpool has enough threads to handle incoming calls), but may be executed
concurrently with future calls from the client (unless the server threadpool has
only one thread).
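The unblocking behavior can be modeled with a condition variable. This is a toy sketch in standard C++, not the transport implementation; CallState, serverMethod, and clientCall are hypothetical names:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>

// Toy model of a blocking call with a synchronous callback: the client is
// unblocked the moment the server fires the callback, while the server
// function keeps running (cleanup, destructors) on its own pool thread.
struct CallState {
    std::mutex m;
    std::condition_variable cv;
    bool callbackFired = false;
    int result = 0;
    std::atomic<bool> serverFinished{false};
};

// Server-side handler: produce the result, fire the callback, then keep
// doing work after the client has already resumed.
void serverMethod(CallState& s) {
    {
        std::lock_guard<std::mutex> g(s.m);
        s.result = 42;
        s.callbackFired = true;  // "calls" the synchronous callback
    }
    s.cv.notify_one();           // the client can resume from here on
    // Post-callback work runs concurrently with the client.
    s.serverFinished = true;
}

// Client-side blocking call: waits only until the callback fires.
int clientCall(CallState& s) {
    std::thread server(serverMethod, std::ref(s));
    {
        std::unique_lock<std::mutex> g(s.m);
        s.cv.wait(g, [&s] { return s.callbackFired; });
    }
    int value = s.result;  // client resumes with the callback data
    server.join();         // only so this sketch cleans up its thread
    return value;
}
```

The client's wait ends at the notify, not at the end of serverMethod; anything after the notify is the concurrent tail the text describes.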

In addition to synchronous callbacks, oneway calls from a
single-threaded client may be handled concurrently by a server with multiple
threads in its threadpool, but only if those oneway calls are
executed on different interfaces. oneway calls on the same
interface are always serialized.

Note: We strongly encourage server functions to
return as soon as they have called the callback function.

For example (in C++):

Return<void> someMethod(someMethod_cb _cb) {
    // Do some processing, then call the callback with the return data.
    hidl_vec<uint32_t> vec = ...
    _cb(vec);
    // At this point, the client's callback has been called,
    // and the client resumes execution.
    ...
    return Void(); // is basically a no-op
}

Client threading model

The threading model on the client differs between non-blocking calls
(functions that are marked with the oneway keyword) and blocking
calls (functions that do not have the oneway keyword specified).

Blocking calls

For blocking calls, the client blocks until one of the following happens:

Transport error occurs; the Return object contains an error
state that can be retrieved with Return::isOk().

Server implementation calls the callback (if there was one).

Server implementation returns a value (if there was no callback parameter).

In case of success, the callback function the client passes as an argument is
always called by the server before the function itself returns. The callback is
executed on the same thread that the function call is made on, so implementers
must be careful with holding locks during function calls (and avoid them
altogether when possible). A function without a generates statement
or a oneway keyword is still blocking; the client blocks until the
server returns a Return<void> object.
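The client-side pattern of checking for transport errors can be sketched with a simplified stand-in for Return<T> (the real class lives in libhidl and has a richer interface; value() and valueOr below are illustrative, not the actual API):

```cpp
#include <string>
#include <utility>

// Simplified stand-in for HIDL's Return<T>: holds either a value or a
// transport-error description, queried with isOk() as in the real API.
template <typename T>
class Return {
  public:
    explicit Return(T value) : value_(std::move(value)), ok_(true) {}
    explicit Return(std::string error) : error_(std::move(error)), ok_(false) {}

    bool isOk() const { return ok_; }
    const T& value() const { return value_; }            // valid only when isOk()
    const std::string& description() const { return error_; }

  private:
    T value_{};
    std::string error_;
    bool ok_;
};

// Hypothetical client helper: always check isOk() before using the result,
// falling back to a default when the transport failed.
int valueOr(const Return<int>& ret, int fallback) {
    return ret.isOk() ? ret.value() : fallback;
}
```

The important part is the shape of the check: the client inspects isOk() on every blocking call before trusting the returned data.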

Oneway calls

When a function is marked oneway, the client returns immediately
and does not wait for the server to complete the call.
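This fire-and-forget behavior can be modeled in standard C++ (a toy sketch, not the transport itself): the "client" hands work to a server thread and resumes immediately, while the server is still running:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Toy model of a oneway call: the "transport" hands the request to a
// server thread and the client returns immediately, without waiting for
// the server to finish. The returned thread handle exists only so the
// caller of this sketch can clean up.
std::thread onewayCall(std::atomic<bool>& serverDone) {
    std::thread server([&serverDone] {
        // Simulated server-side work.
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
        serverDone = true;
    });
    return server;  // client resumes here; the server is still running
}
```

Right after onewayCall returns, the server's work is typically still in flight; the client neither observes its completion nor receives a result.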