The PlaidML API provides an interface through which a frontend like Keras or ONNX can request the construction and execution of Tile functions. The PlaidML API is used both to generate Tile code and to request compilation and execution of that Tile code; the latter also requires information about where the input and output data are located and about the hardware to be used.

Typically, we call PlaidML code through a frontend like Keras or ONNX, but we can also call the PlaidML API functions directly. We'll do that here to illustrate how the API works, wrapping the Tile code in Python code that defines some fake data and tells PlaidML the necessary information about the data and hardware:
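The original walkthrough code is not reproduced here, but a sketch of such a wrapper might look like the following. The helper names used below (`plaidml.Context`, `plaidml.open_first_device`, `plaidml.Shape`, `plaidml.Tensor`, `plaidml.run`, and the view methods) are assumptions about the PlaidML Python API; check them against your installed version before relying on this.

```python
# A trivial Tile function: elementwise sum of two inputs.
TILE_ADD = """function (T, O) -> (R) {
    R = T + O;
}"""


def main():
    import plaidml  # requires the plaidml package and a configured device

    ctx = plaidml.Context()
    with plaidml.open_first_device(ctx) as dev:
        shape = plaidml.Shape(ctx, plaidml.DType.FLOAT32, 4)
        t = plaidml.Tensor(dev, shape)
        o = plaidml.Tensor(dev, shape)
        r = plaidml.Tensor(dev, shape)

        # Fill the freshly allocated inputs through writable views.
        for tensor, values in ((t, [0.0, 1.0, 2.0, 3.0]),
                               (o, [4.0, 3.0, 2.0, 1.0])):
            with tensor.mmap_discard(ctx) as view:
                for i, v in enumerate(values):
                    view[i] = v
                view.writeback()

        # Compile and run the Tile function against the bound tensors.
        plaidml.run(ctx, TILE_ADD, inputs={"T": t, "O": o}, outputs={"R": r})

        # Read the result back out of a view of the current contents.
        with r.mmap_current(ctx) as view:
            print([view[i] for i in range(4)])
```

Running `main()` on a machine with PlaidML installed and a device configured would print the elementwise sums.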

The execution model is based on the idea of a data dependency graph. In this trivial example, our graph is correspondingly trivial, but we can see all of the components in action.
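The idea can be illustrated with a toy dependency graph in plain Python (this is an analogy, not PlaidML code): each step may run only after the steps whose data it consumes, which is the same kind of ordering PlaidML's scheduler enforces for tensor reads and writes.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# step -> the steps it depends on
deps = {
    "populate_t": set(),
    "populate_o": set(),
    "run_function": {"populate_t", "populate_o"},
    "read_result": {"run_function"},
}

# Any valid execution order runs a step only after its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)  # e.g. ['populate_t', 'populate_o', 'run_function', 'read_result']
```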

A Tensor represents a multidimensional array, combining a memory buffer and a Shape describing the number and extent of its dimensions. The contents of the tensor will be moved wherever they are needed, so the buffer may be located in system memory or on the GPU.

In order to read or write the contents of a tensor, you must mmap it into system memory. You can mmap_discard if you don’t care about the existing contents of the buffer, and simply want a writable view you can populate, or you can mmap_current to preserve the contents of the tensor, as would be appropriate when you want to use the function’s output. In either case, the mmap function provides a plaidml._View object.
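The difference between the two mapping modes can be sketched with a plain-Python analogy (this is not the real plaidml._View): mmap_discard hands back a writable view whose prior contents are undefined, while mmap_current copies in the tensor's current contents first.

```python
class Tensor:
    def __init__(self, n):
        self.buf = [0.0] * n

    def mmap_discard(self):
        # Existing contents are not carried over (zeros here for simplicity).
        return View(self, [0.0] * len(self.buf))

    def mmap_current(self):
        # The view starts out reflecting the tensor's current contents.
        return View(self, list(self.buf))


class View:
    def __init__(self, tensor, data):
        self.tensor, self.data = tensor, data

    def writeback(self):
        # Commit the view's modifications back to the tensor's buffer.
        self.tensor.buf = list(self.data)


t = Tensor(3)
view = t.mmap_discard()
view.data[:] = [1.0, 2.0, 3.0]   # populate the writable view
view.writeback()                 # commit before releasing the view
print(t.mmap_current().data)     # -> [1.0, 2.0, 3.0]
```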

When populating our input tensors, t and o, we use mmap_discard because these freshly allocated buffers have no data. After we've finished writing values into the view, we indicate that the modified data is complete by calling writeback.

The view object serves as a lock, and it is this lock which allows asynchronous, possibly multithreaded function execution to be synchronized. When a function runs, it must first mmap in its input and output buffers, and the mmap operation will block until the current view has been released. In this example we are using the view as a context manager, to ensure that it will be closed after we are done populating each tensor.
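The view-as-lock behaviour can be sketched in plain Python (a toy analogy, not the real implementation): mapping a tensor blocks until any outstanding view has been released, and using the view as a context manager guarantees the release.

```python
import threading


class LockedTensor:
    def __init__(self, n):
        self._buf = [0.0] * n
        self._lock = threading.Lock()

    def mmap_discard(self):
        self._lock.acquire()          # blocks while another view is open
        return _View(self, [0.0] * len(self._buf))


class _View:
    def __init__(self, tensor, data):
        self._tensor, self._data = tensor, data

    def __setitem__(self, i, v):
        self._data[i] = v

    def writeback(self):
        self._tensor._buf = list(self._data)

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self._tensor._lock.release()  # closing the view releases the lock


t = LockedTensor(2)
with t.mmap_discard() as view:
    view[0] = 1.5
    view.writeback()
# The lock is released here; a running function could now map the tensor.
print(t._buf)  # -> [1.5, 0.0]
```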

The call to plaidml.run begins by creating a Function for the given source code. Next, it creates an Invoker which will execute that function in the given context. Each input and output tensor is bound to the Invoker instance. Finally, plaidml.run calls the invoker's invoke() method, returning an Invocation, and this invocation is what actually schedules execution of the bound function.
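The flow of objects can be mirrored in a pure-Python analogy (not the real API): source code is compiled into a Function, tensors are bound through an Invoker, and invoke() yields an Invocation that stands for the scheduled execution.

```python
class Function:
    """Stands in for a compiled Tile function."""
    def __init__(self, code):
        self.code = code


class Invoker:
    """Holds a function plus its bound input and output tensors."""
    def __init__(self, func):
        self.func, self.inputs, self.outputs = func, {}, {}

    def set_input(self, name, tensor):
        self.inputs[name] = tensor

    def set_output(self, name, tensor):
        self.outputs[name] = tensor

    def invoke(self):
        # In PlaidML this schedules asynchronous execution; here we just
        # record that all bindings were in place when invoke() was called.
        return Invocation(bound=bool(self.inputs) and bool(self.outputs))


class Invocation:
    def __init__(self, bound):
        self.bound = bound


invoker = Invoker(Function("function (T, O) -> (R) { R = T + O; }"))
invoker.set_input("T", [1.0, 2.0])
invoker.set_input("O", [3.0, 4.0])
invoker.set_output("R", [0.0, 0.0])
invocation = invoker.invoke()
print(invocation.bound)  # -> True
```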

Inside the PlaidML runtime, the scheduler passes the Invocation instance off to a hardware support module, which compiles the Tile function appropriately for the target device. The PlaidML runtime then launches the executable code and returns, leaving the function to run asynchronously whenever its dependencies become available.

This process works reasonably well provided you can write your entire program as a single Tile function. It is possible to read out the result of one Tile function and pass it to another via the mmap operations, but this will hurt performance. The best way to compose multiple Tile functions is by using the Operation and Value objects provided in plaidml.tile; see Building a Frontend for details. This results in an Invocation and output(s) that can be read via mmap_current in the same way as a single Tile function.