
[QNN EP] Reverting a recent logging change for QNN GPU only #24444


Conversation

johnpaultaken
Contributor

Description

Mapping ORT verbose logging back to QnnGpu Debug logging.

Motivation and Context

Why is this change required? What problem does it solve?
As of now, this change is required for the QnnGpu backend to run models correctly.
Its necessity is mentioned in commit b4b5a79.
It temporarily reverts commit 9d45b9a, for the GPU case only, due to loss of functionality: that change broke QNN GPU execution.
@HectorSVC HectorSVC added the ep:QNN (issues related to QNN execution provider) label Apr 16, 2025
@HectorSVC
Contributor

/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI

@HectorSVC
Contributor

/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline,Linux OpenVINO CI Pipeline


Azure Pipelines successfully started running 3 pipeline(s).


Azure Pipelines successfully started running 6 pipeline(s).

@HectorSVC HectorSVC merged commit ef832b9 into microsoft:main Apr 17, 2025
68 of 76 checks passed
ashrit-ms pushed a commit that referenced this pull request Apr 24, 2025
Labels
ep:QNN issues related to QNN execution provider