I built onnxruntime from source with the following arguments: `./build.sh --use_cuda --cudnn_home /usr/local/cuda --cuda_home /usr/local/cuda --enable_nvtx_profile --enable_memory_profile --allow_running_as_root --build_wheel --cmake_extra_defines CMAKE_POLICY_VERSION_MINIMUM=3.5`
The build completed without errors, but a segmentation fault occurred in some PyTorch tests, and there is no .whl file in the build directory. How can I install the Python interface of onnxruntime after building?
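For reference, a build with `--build_wheel` normally emits the wheel under a `dist/` directory inside the build tree (on Linux typically `build/Linux/<config>/dist/`, where `<config>` depends on the `--config` flag you used). A small sketch that searches for the wheel rather than hardcoding the path, since the exact location is an assumption here:

```shell
# Sketch: locate and install the wheel produced by --build_wheel.
# The search below avoids assuming a specific <platform>/<config> layout.
WHEEL=$(find build -name 'onnxruntime*.whl' 2>/dev/null | head -n 1)
if [ -n "$WHEEL" ]; then
    echo "Installing $WHEEL"
    pip install --force-reinstall "$WHEEL"
else
    echo "No wheel found under build/ - re-run build.sh with --build_wheel"
fi
```

If no wheel appears even with `--build_wheel`, check the tail of the build log: the wheel is produced by a late build step, so an earlier failure (or the segfaulting tests, if run as part of the build) can prevent it from being generated.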
Describe the issue
I want to analyze the memory cost during model inference, but when I set `enable_profiling=True` there are no memory details in the profile JSON. Do I need to build onnxruntime from source with `onnxruntime_ENABLE_MEMORY_PROFILE`?

To reproduce
Urgency
No response
Platform
Linux
OS Version
ubuntu 20.04
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.18.1
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
cuda 12.4