ORT aborts when loading the attached model #24473
Comments
Check ValueInfoProto name?
Hi @skottmckay! I see that you are the last author for
Fix optimizer tests by turning on onnx shape inference. This is needed to eliminate an If node that is causing ORT to crash (microsoft/onnxruntime#24473). I removed the argument because shape inference is the default.
I found the issue. onnxruntime/onnxruntime/core/optimizer/conv_activation_fusion.cc Lines 28 to 34 in 8602d04
Working on a fix.
@edgchen1 Thanks! Scott helped with the analysis and pointed to the same issue. I wasn't able to make any code changes yet, so please feel free to work on it! For completeness, I am reposting from Scott here:
### Description
An additional check for non-constant inputs was added to ConvActivationFusion in #20282. This was to avoid fusing an Add in a Conv+Add+Relu that has another non-constant input.
https://github.com/microsoft/onnxruntime/blob/6c8cb6a6d1993f84fcf4008f468a071c0b73aad3/onnxruntime/core/optimizer/conv_activation_fusion.cc#L26-L39
However, this check fails to account for implicit inputs and will read past the end of a node's explicit input defs if any implicit inputs are present. Moreover, this check is no longer necessary after #19470 removed Conv+Add+Relu fusion from ConvActivationFusion. This change removes the check and some other unused code.

### Motivation and Context
Fix #24473.
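For context on the implicit-input distinction the fix relies on: an ONNX control-flow node such as If carries its branches as subgraph attributes, and nodes inside those subgraphs may reference values from the enclosing graph. Those outer-scope references are the node's implicit inputs and never appear in its explicit input list, which is why a check that walks past the explicit input defs can go out of bounds. The sketch below is illustrative only (it is not the attached model) and uses the onnx Python helpers to build such a node:

```python
import onnx
from onnx import TensorProto, helper

# Outer-graph value "X" is used by both branches but is not an explicit If input.
cond = helper.make_tensor_value_info("cond", TensorProto.BOOL, [])
x = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1])
y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1])

then_out = helper.make_tensor_value_info("then_out", TensorProto.FLOAT, [1])
else_out = helper.make_tensor_value_info("else_out", TensorProto.FLOAT, [1])
then_branch = helper.make_graph(
    [helper.make_node("Relu", ["X"], ["then_out"])], "then_branch", [], [then_out])
else_branch = helper.make_graph(
    [helper.make_node("Neg", ["X"], ["else_out"])], "else_branch", [], [else_out])

# The only explicit input of the If node is the condition; "X" is consumed
# by the subgraphs as an implicit (outer-scope) input.
if_node = helper.make_node("If", ["cond"], ["Y"],
                           then_branch=then_branch, else_branch=else_branch)

graph = helper.make_graph([if_node], "main", [cond, x], [y])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])
onnx.checker.check_model(model)

print(list(if_node.input))  # ['cond'] -- "X" appears nowhere in the explicit inputs
```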
Describe the issue
resnet18.zip
When loading the model, inference session aborts without an informative error message.
To reproduce
Creating an InferenceSession for the attached model yields a process abort (see the sketch below).
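The original reproduction snippet and its output were not preserved above; a minimal sketch of the reported scenario, assuming the attached resnet18.zip has been extracted to resnet18.onnx (file name assumed), would be:

```python
import onnxruntime as ort

# Creating the inference session on the attached model aborts the whole
# process (no Python exception, no informative error message), as reported.
sess = ort.InferenceSession("resnet18.onnx", providers=["CPUExecutionProvider"])
```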
Urgency
No response
Platform
Linux
OS Version
Ubuntu
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.21.0
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response