Public types

TfLiteDelegatePtr

Public static attributes

kTensorsCapacityHeadroom

The capacity headroom of the tensors_ vector before calling ops' prepare and invoke functions.

Within these functions, it is guaranteed that allocating up to kTensorsCapacityHeadroom more tensors won't invalidate pointers to existing tensors.

kTensorsReservedCapacity

constexpr int kTensorsReservedCapacity = 128

Public functions

AllocateTensors

TfLiteStatus AllocateTensors()

Update allocations for all tensors.

This will redim dependent tensors using the input tensor dimensionality as given. This is a relatively expensive operation. If you know that your sizes are not changing, you need not call this. Returns the status of success or failure.

EnsureTensorDataIsReadable

TfLiteStatus EnsureTensorDataIsReadable(
int tensor_index
)

Ensure the data in tensor.data is readable.

If a delegate is used, it might be necessary to copy the data from the delegate buffer to raw memory. WARNING: This is an experimental API and subject to change.

Interpreter

All errors associated with reading and processing this model will be forwarded to the error_reporter object. Note, if error_reporter is nullptr, then a default StderrReporter is used. Ownership of 'error_reporter' remains with the caller.

OpProfilingString

Retrieve an operator's description of its work, for profiling purposes.

ResetVariableTensors

TfLiteStatus ResetVariableTensors()

Reset all variable tensors to the default value.

If a variable tensor doesn't have a buffer, it is reset to zero. TODO(b/115961645): If a variable tensor has a buffer, reset it to the value of the buffer. WARNING: This is an experimental API and subject to change.

ResizeInputTensor

Note: this is only acceptable for tensor indices that are inputs or variables. Returns the status of success or failure. TODO(aselle): Consider implementing ArraySlice equivalent to make this more adept at accepting data without an extra copy. Use absl::ArraySlice if our partners determine that dependency is acceptable.

SetAllowBufferHandleOutput

void SetAllowBufferHandleOutput(
bool allow_buffer_handle_output
)

Set if buffer handle output is allowed.

When using hardware delegation, the Interpreter makes the data of output tensors available in tensor->data by default. If the application can consume the buffer handle directly (e.g. reading output from an OpenGL texture), it can set this flag to true, so the Interpreter won't copy the data from the buffer handle to CPU memory. WARNING: This is an experimental API and subject to change.

SetAllowFp16PrecisionForFp32

void SetAllowFp16PrecisionForFp32(
bool allow
)

Allow float16 precision for FP32 calculation when possible.

Default: not allowed. WARNING: This is an experimental API and subject to change.

SetBufferHandle

Set the buffer handle to a tensor that's not being written by a delegate. For example, feeding an OpenGL texture as the input of the inference graph.

Set the buffer handle to a tensor that uses the same delegate. For example, set an OpenGL texture as the output of inference, while the node that produces the output is an OpenGL delegate node. WARNING: This is an experimental API and subject to change.

SetCancellationFunction

Sets the cancellation function pointer in order to cancel a request in the middle of a call to Invoke().

The interpreter queries this function during inference, between op invocations; when it returns true, the interpreter will abort execution and return kTfLiteError. The data parameter contains any data used by the cancellation function, and if non-null, remains owned by the caller. WARNING: This is an experimental API and subject to change.