DLR is a compact, common runtime for deep learning models and decision tree models compiled by AWS SageMaker Neo, TVM, or Treelite. DLR uses the TVM runtime, the Treelite runtime, and NVIDIA TensorRT™, and can include other hardware-specific runtimes. It provides unified Python/C++ APIs for loading and running compiled models on a variety of devices. DLR currently supports platforms from Intel, NVIDIA, and ARM, with support for Xilinx, Cadence, and Qualcomm coming soon.
On x86_64 targets running Linux, you can install the latest release of the DLR package via:
```
pip install dlr
```
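After installation, a short Python snippet can confirm that the package imports correctly; printing `dlr.__version__` assumes the package exposes that attribute, and a plain import is enough to verify the install:

```python
# Verify that the DLR Python package is importable.
import dlr

# Print the installed version (assumes the package exposes __version__).
print(dlr.__version__)
```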
For installing DLR on non-x86 edge devices, or for building DLR from source, please refer to Installing DLR.
For instructions on using DLR, please refer to Amazon SageMaker Neo – Train Your Machine Learning Models Once, Run Them Anywhere.
Also check out the API documentation.
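As a quick illustration of the Python API, the sketch below loads a compiled model from a local directory and runs a single inference on random data. The model directory, input name (`data`), and input shape are placeholders; substitute the values your model was compiled with.

```python
import numpy as np
from dlr import DLRModel

# Load a compiled model from a local directory (placeholder path);
# the second argument selects the device type, e.g. 'cpu' or 'gpu'.
model = DLRModel('/path/to/compiled/model', 'cpu')

# Prepare a dummy input; the input name and shape must match what the
# model was compiled with (placeholders shown here).
input_data = {'data': np.random.rand(1, 3, 224, 224).astype('float32')}

# Run inference; the result is a list of output arrays.
outputs = model.run(input_data)
print(outputs[0].shape)
```

Here `run()` is given a dict mapping input names to NumPy arrays and returns a list of output arrays, one per model output.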
We have prepared several examples demonstrating how to use the DLR API on different platforms:
- Neo AI DLR image classification Android example application
- DL Model compiler for Android
- DL Model compiler for AWS EC2 instances
This library is licensed under the Apache License Version 2.0.