Session Summary

To accelerate deep learning inference with the mobile GPU on device-side Arm platforms, OpenCL support looks like a proper and promising fit. NNVM is an open compiler for AI frameworks with a graph IR implementation, and TVM is an open-source end-to-end tensor IR/DSL stack. Together, NNVM and TVM provide a flexible architecture that supports different frameworks and backends. OpenCL is now one of the backends supported by NNVM and TVM; this session will discuss the latest status and some how-tos.