Hi,
I am currently working with an ONNX model and would like to gather performance insights across different execution providers.
I checked the ONNX Runtime build that ships with my CentOS 7 setup; it is the default build, which contains only the CPU execution provider (MLAS).
Is there a way to extend it to other providers such as Intel OpenVINO, Intel oneDNN, CUDA, and TensorRT?
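For context, what I have tried so far is building from source. My understanding (from the ONNX Runtime build scripts, so please correct me if these flags are wrong or incomplete) is that additional execution providers are enabled at build time, roughly like this:

```shell
# Clone the ONNX Runtime sources with submodules
git clone --recursive https://github.com/microsoft/onnxruntime.git
cd onnxruntime

# Build with extra execution providers enabled.
# Each --use_* flag compiles in one provider; the paths below
# (CUDA, cuDNN, TensorRT install locations) are examples for my machine.
./build.sh --config Release --build_shared_lib --parallel \
    --use_dnnl \
    --use_openvino CPU_FP32 \
    --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/local/cuda \
    --use_tensorrt --tensorrt_home /opt/tensorrt
```

Is this the recommended route, or are there prebuilt packages for CentOS 7 that already include these providers?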
Thanks.