
onnxruntime InferenceSession error #24441


Open
ruajgmeiogaui opened this issue Apr 16, 2025 · 1 comment

@ruajgmeiogaui

Describe the issue

I loaded a model.pt checkpoint and exported it to ONNX. But when I try to create an ort.InferenceSession, the following error occurs:

onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for EyeLike(9) node with name '/blocks.0/intra_mossformer/EyeLike'.
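For context (not part of the original report): an EyeLike node typically comes from torch.eye being called on a runtime-dependent size inside forward(), and ONNX Runtime implements EyeLike only for some type/provider combinations. A minimal sketch, with hypothetical module names, of the common workaround of precomputing the identity as a buffer so it is baked into the graph as a constant instead of an EyeLike node:

```python
# Sketch (not from the original thread): torch.eye evaluated inside forward()
# can export as an ONNX EyeLike node; storing it as a buffer bakes the
# identity matrix into the exported graph as a constant initializer.
import torch


class EyeAtRuntime(torch.nn.Module):
    # Builds the identity from a traced shape; may export as EyeLike.
    def forward(self, x):
        return x + torch.eye(x.shape[-1])


class EyeAsBuffer(torch.nn.Module):
    # Equivalent module with the identity precomputed at construction time.
    def __init__(self, n):
        super().__init__()
        self.register_buffer("eye", torch.eye(n))

    def forward(self, x):
        return x + self.eye
```

Both modules compute the same result; only the second keeps the identity out of the exported graph.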

To reproduce

My code is as follows:
def inference(args):
    device = torch.device('cuda') if args.use_cuda == 1 else torch.device('cpu')
    print(device)
    print('creating model...')
    model = network_wrapper(args).se_network
    model.to(device)

    print('loading model ...')
    reload_for_eval(model, args.checkpoint_dir, args.use_cuda, args.debug)
    model.eval()
    example_input = torch.randn((1, 2, 161, 201)).to(device)
    onnx_path = os.path.join(args.checkpoint_dir, 'model.onnx')
    torch.onnx.export(
        model,
        example_input,
        onnx_path,
        export_params=True,        # export model parameters
        opset_version=13,          # ONNX opset version
        do_constant_folding=True,  # apply constant-folding optimization
        input_names=["input"],     # input node name
        output_names=["output"],   # output node name
        dynamic_axes={             # dynamic-axis support
            "input": {0: "batch_size"},  # allow the input batch_size to vary
            "output": {0: "batch_size"}
        }
    )

The original code can be found at: https://github.com/modelscope/ClearerVoice-Studio/tree/main/train/speech_enhancement
The model.pt can be found at: https://huggingface.co/alibabasglab/MossFormerGAN_SE_16K
Thanks for your reply.

Urgency

No response

Platform

Linux

OS Version

VERSION="20.04.4 LTS (Focal Fossa)"

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

onnx 1.13.0, onnxruntime 1.13.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA Version: 12.2

@justinchuby
Contributor

Please consider doing two things:

  1. Update onnx, onnxruntime and PyTorch to the latest versions
  2. For model export, consider testing with torch.onnx.export(..., dynamo=True, report=True) using the latest torch-nightly. Attach the generated report if there is an error. Thanks!
