
[Build] ORT, DML, OpenVINO Python wheel build - "OpenVINOExecutionProvider doesn't support memcpy" #23824


Closed
virajwad opened this issue Feb 26, 2025 · 3 comments
Labels
build (build issues; typically submitted using template) · ep:DML (issues related to the DirectML execution provider) · ep:OpenVINO (issues related to OpenVINO execution provider) · stale (issues that have not been addressed in a while; categorized by a bot)

Comments

@virajwad
Contributor

virajwad commented Feb 26, 2025

Describe the issue

I built ONNX RT 1.19.0 with DirectML and the OpenVINO 2024.3 release together using this command:

.\build.bat --config RelWithDebInfo --parallel --use_openvino --use_dml --use_winml --enable_wcos --build_shared_lib --skip_tests --build_wheel

The build completes successfully. I take the wheel, pip install it into my virtual environment, and then run a Python script I use to run inference sessions for some ONNX models. Running with DirectML (GPU) looks like it is working perfectly fine. The OpenVINO EP is also registered; it shows up when I print get_available_providers():

print("Printing Available ONNX RT providers (EPs):") print(onnxruntime.get_available_providers())

But I am getting an error when trying to run OpenVINO (OV) EP.

[screenshot: traceback ending with the memcpy error]

"Execution type OpenVINOExecutionProvider doesn't support memcpy"

The traceback shows it failing when I set up the InferenceSession, like so:

openvino_options = [{'device_type': 'GPU', 'cache_dir': 'cachedir'}, {}]
exec_provider = ['OpenVINOExecutionProvider', 'DmlExecutionProvider']
session0 = onnxruntime.InferenceSession("model_0.onnx", sess_options=options,
                                        providers=exec_provider,
                                        provider_options=openvino_options)
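
For context, a self-contained version of the failing setup would look roughly like the sketch below; the import, the default SessionOptions, and the model path are assumptions added for illustration (the reporter's actual "options" object and script are not shown):

import onnxruntime

# Assumption: a default SessionOptions; the original "options" object is not shown in the report.
options = onnxruntime.SessionOptions()

print("Printing Available ONNX RT providers (EPs):")
print(onnxruntime.get_available_providers())

# Register the OV EP first and the DML EP second (priority order), matching the failing setup.
openvino_options = [{'device_type': 'GPU', 'cache_dir': 'cachedir'}, {}]
exec_provider = ['OpenVINOExecutionProvider', 'DmlExecutionProvider']

# Per the report, session creation is where the
# "Execution type OpenVINOExecutionProvider doesn't support memcpy" error is raised.
session0 = onnxruntime.InferenceSession("model_0.onnx",  # placeholder model path
                                        sess_options=options,
                                        providers=exec_provider,
                                        provider_options=openvino_options)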

I also tried building a newer ONNX RT with a newer OV EP: ONNX RT 1.20.0 (release), DML 1.15.2, and OV 2024.5. The same issue occurs: both the DML and OV EPs show as registered, DML works, but the OV EP hits the memcpy error.

Any ideas where the issue lies?

Also FYI I am using Python 3.11.9

These are the installed libraries related to ONNX RT or OV. The wheel I installed shows up as the "onnxruntime-openvino" package:

[screenshot: pip list output of the installed ONNX RT / OpenVINO related packages]

Urgency

I would like to test with different EPs

Target platform

Windows

Build script

.\build.bat --config RelWithDebInfo --parallel --use_openvino --use_dml --use_winml --enable_wcos --build_shared_lib --skip_tests --build_wheel

Error / output

"Execution type OpenVINOExecutionProvider doesn't support memcpy"

[screenshot: traceback ending with the memcpy error]

Visual Studio Version

Visual Studio 2022

GCC / Compiler Version

No response

@virajwad virajwad added the build label Feb 26, 2025
@github-actions github-actions bot added the ep:DML and ep:OpenVINO labels Feb 26, 2025
@virajwad
Contributor Author

virajwad commented Feb 26, 2025

I was able to solve the issue. In ONNX RT, we can usually create InferenceSession objects and specify multiple EPs together in priority order, for example:

[screenshot: an InferenceSession created with multiple EPs listed in priority order]

However, it looks like this doesn't work with the OpenVINO EP and DML EP combination specifically; we need to create separate InferenceSession objects.

So I changed my code to something like so:

# Put the OV EP and DML EP options separately... and it will work!
openvino_options = [{'device_type': 'GPU', 'cache_dir': 'cachedir'}]
exec_provider = ['OpenVINOExecutionProvider']
session0 = onnxruntime.InferenceSession("model_0.onnx", sess_options=options,
                                        providers=exec_provider,
                                        provider_options=openvino_options)

OR

openvino_options = [{}]
exec_provider = ['DmlExecutionProvider']
session0 = onnxruntime.InferenceSession("model_0.onnx", sess_options=options,
                                        providers=exec_provider,
                                        provider_options=openvino_options)

Trying each option separately, I saw GPU utilization with both the OV EP and DML. So it works if I do it this way... I'm still not quite sure why registering both EPs together causes a memcpy issue, though!
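
To make the "separate sessions" workaround concrete, here is a rough sketch of creating one InferenceSession per EP instead of registering both on a single session; the helper name, the default SessionOptions, and the model path are placeholders added for illustration:

import onnxruntime

options = onnxruntime.SessionOptions()  # assumption: default session options

def make_session(model_path, use_openvino=True):
    # Bind exactly one EP per session: OV EP and DML EP are kept in separate
    # sessions, which avoids the memcpy error seen when both are registered together.
    if use_openvino:
        providers = ['OpenVINOExecutionProvider']
        provider_options = [{'device_type': 'GPU', 'cache_dir': 'cachedir'}]
    else:
        providers = ['DmlExecutionProvider']
        provider_options = [{}]
    return onnxruntime.InferenceSession(model_path,
                                        sess_options=options,
                                        providers=providers,
                                        provider_options=provider_options)

# One session per EP; "model_0.onnx" is a placeholder path.
ov_session = make_session("model_0.onnx", use_openvino=True)
dml_session = make_session("model_0.onnx", use_openvino=False)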

@skottmckay
Contributor

May have been fixed by #22413

I believe we were overly restrictive in one of the checks.

Contributor

github-actions bot commented Apr 3, 2025

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions github-actions bot added the stale label Apr 3, 2025
@virajwad virajwad closed this as completed Apr 3, 2025