- CUDA: Leverage NVIDIA GPUs for accelerated inference.
- TensorRT: Optimize inference on NVIDIA GPUs using TensorRT.
- Android-Qualcomm-QNN: Utilize the Qualcomm AI Engine Direct SDK (QNN) on Android ...