Bump vite from 6.2.4 to 6.2.5 in /js/web/test/e2e/exports/testcases/vite-default #322
linux_cuda_ci.yml
on: pull_request
Build Linux CUDA x64 Release / build_test_pipeline (35m 11s)
Test Linux CUDA x64 Release (33m 10s)
Annotations (7 warnings)
Build Linux CUDA x64 Release / build_test_pipeline
Error trying to execute nvidia-smi: Unable to locate executable file: nvidia-smi. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.. Assuming no GPU.
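This warning is emitted while the runner probes for a GPU before the job starts: nvidia-smi is not on the agent's PATH, so the tooling assumes a CPU-only machine. That is apparently fine for this job, which only compiles the CUDA code; the CUDA tests run in the separate Test Linux CUDA x64 Release job. The same probe can be reproduced inside the build container with a plain PATH lookup (a minimal sketch, not part of the pipeline itself):

```sh
# Check whether the NVIDIA driver tooling is reachable via PATH.
# On this build agent it is not, so CUDA code is compiled here
# but never executed in this job.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi
else
  echo "nvidia-smi not found on PATH; assuming no GPU"
fi
```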
Build Linux CUDA x64 Release / build_test_pipeline
stderr: + PATH=/opt/python/cp310-cp310/bin:/usr/local/dotnet:/usr/lib/jvm/msopenjdk-17/bin:/opt/rh/gcc-toolset-12/root/usr/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ python3 tools/ci_build/build.py --build_dir build/Release --config Release --cmake_generator Ninja --skip_submodule_sync --build_shared_lib --parallel --use_vcpkg --use_vcpkg_ms_internal_asset_cache --enable_onnx_tests --use_cuda --use_binskim_compliant_compile_flags --build_wheel --cuda_version=12.2 --cuda_home=/usr/local/cuda-12.2 --cudnn_home=/usr/local/cuda-12.2 --enable_cuda_profiling --build_java --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=80 onnxruntime_BUILD_UNIT_TESTS=ON onnxruntime_ENABLE_CUDA_EP_INTERNAL_TESTS=ON --build
2025-04-04 16:07:52,336 build [DEBUG] - Command line arguments:
--build_dir build/Release --config Release --cmake_generator Ninja --skip_submodule_sync --build_shared_lib --parallel --use_vcpkg --use_vcpkg_ms_internal_asset_cache --enable_onnx_tests --use_cuda --use_binskim_compliant_compile_flags --build_wheel --cuda_version=12.2 --cuda_home=/usr/local/cuda-12.2 --cudnn_home=/usr/local/cuda-12.2 --enable_cuda_profiling --build_java --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=80 onnxruntime_BUILD_UNIT_TESTS=ON onnxruntime_ENABLE_CUDA_EP_INTERNAL_TESTS=ON --build
2025-04-04 16:07:52,341 build [INFO] - Build started
2025-04-04 16:07:52,341 build [INFO] - Building targets for Release configuration
2025-04-04 16:07:52,341 build [INFO] - /usr/bin/cmake --build build/Release/Release --config Release -- -j16
2025-04-04 16:29:27,117 build [INFO] - /opt/python/cp310-cp310/bin/python3 /onnxruntime_src/setup.py bdist_wheel --nightly_build --wheel_name_suffix=gpu --cuda_version=12.2
/opt/python/cp310-cp310/lib/python3.10/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!
********************************************************************************
Please avoid running ``setup.py`` directly.
Instead, use pypa/build, pypa/installer or other
standards-based tools.
See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
********************************************************************************
!!
self.initialize_options()
DEBUG:auditwheel.musllinux:musl libc not detected
DEBUG:auditwheel.libc:Falling back to GNU libc
INFO:auditwheel.main_repair:Repairing onnxruntime_gpu-1.22.0.dev20250404-cp310-cp310-linux_x86_64.whl
DEBUG:auditwheel.wheel_abi:processing: onnxruntime/capi/libonnxruntime.so.1.22.0
DEBUG:auditwheel.musllinux:musl libc not detected
DEBUG:auditwheel.libc:Falling back to GNU libc
DEBUG:auditwheel.lddtree:parse_ld_so_conf(//etc/ld.so.conf)
DEBUG:auditwheel.lddtree: glob: //etc/ld.so.conf.d/*.conf
DEBUG:auditwheel.lddtree: parse_ld_so_conf(//etc/ld.so.conf.d/00-manylinux.conf)
DEBUG:auditwheel.lddtree: parse_ld_so_conf(//etc/ld.so.conf.d/gds-12-2.conf)
DEBUG:auditwheel.lddtree: parse_ld_so_conf(//etc/ld.so.conf.d/nvidia.conf)
DEBUG:auditwheel.lddtree: parse_ld_so_conf(//etc/ld.so.conf.d/000_cuda.conf)
DEBUG:auditwheel.lddtree: parse_ld_so_conf(//etc/ld.so.conf.d/988_cuda-12.conf)
DEBUG:auditwheel.lddtree:linker ldpaths: {'conf': ['/usr/local/lib', '/usr/local/cuda-12.2/targets/x86_64-linux/lib', '/usr/local/cuda/targets/x86_64-linux/lib', '/usr/local/cuda-12/targets/x86_64-linux/lib', '/lib', '/lib64/', '/usr/lib', '/usr/lib64'], 'env': ['/opt/rh/gcc-toolset-12/root/usr/lib64', '/opt/rh/gcc-toolset-12/root/usr/lib', '/usr/local/lib64'], 'interp': []}
DEBUG:auditwheel.lddtree:lddtree(onnxruntime/capi/libonnxruntime.so.1.22.0)
DEBUG:auditwheel.lddtree: ldpaths[rpath] = []
DEBUG:auditwheel.lddtree: ldpaths[runpath] = ['/tmp/tmp8_tbse4w/onnxruntime/capi']
DEBUG:auditwheel.lddtree:lddtree(/lib64/libdl-2.28.so)
DEBUG:auditwheel.lddtree:lddtree(/lib64/libc-2.28.so)
DEBUG:auditwheel.lddtree:lddtree(/lib64/ld-2.28.so)
DEBUG:auditwheel.lddtree:lddtree(/lib64/librt-2.28.so)
DEBUG:auditwheel.lddtree:lddtree(/lib64/lib
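The tail of this annotation shows auditwheel repairing the freshly built wheel (re-tagging it for manylinux and vendoring external shared libraries). If that step ever needs to be re-run by hand, the invocation is roughly the following sketch; only the wheel filename appears verbatim in the log, while the directory layout is an assumption:

```sh
# Repair the plain linux_x86_64 wheel into a manylinux-tagged wheel,
# bundling the external shared libraries it links against.
# NOTE: the dist/ paths below are assumptions; only the wheel filename
# is taken from the log above.
auditwheel repair \
  dist/onnxruntime_gpu-1.22.0.dev20250404-cp310-cp310-linux_x86_64.whl \
  --wheel-dir dist/repaired/
```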
Build Linux CUDA x64 Release / build_test_pipeline
stderr: WARNING! Your password will be stored unencrypted in /home/cloudtest/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores
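This login warning is harmless on a disposable CI agent, but it can be silenced by configuring a credential helper so registry passwords are not written to config.json in clear text. A minimal sketch, assuming a docker-credential-pass helper is installed (nothing in this log says it is):

```sh
# Point Docker at a credential helper instead of storing credentials
# base64-encoded in ~/.docker/config.json.
# NOTE: "pass" is an assumption; substitute any installed
# docker-credential-* helper. This overwrites any existing config.json.
mkdir -p ~/.docker
cat > ~/.docker/config.json <<'EOF'
{
  "credsStore": "pass"
}
EOF
```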
Build Linux CUDA x64 Release / build_test_pipeline
Error trying to execute nvidia-smi: Unable to locate executable file: nvidia-smi. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.. Assuming no GPU.
Build Linux CUDA x64 Release / build_test_pipeline
stderr: + PATH=/opt/python/cp310-cp310/bin:/usr/local/dotnet:/usr/lib/jvm/msopenjdk-17/bin:/opt/rh/gcc-toolset-12/root/usr/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ python3 tools/ci_build/build.py --build_dir build/Release --config Release --cmake_generator Ninja --skip_submodule_sync --build_shared_lib --parallel --use_vcpkg --use_vcpkg_ms_internal_asset_cache --enable_onnx_tests --use_cuda --use_binskim_compliant_compile_flags --build_wheel --cuda_version=12.2 --cuda_home=/usr/local/cuda-12.2 --cudnn_home=/usr/local/cuda-12.2 --enable_cuda_profiling --build_java --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=80 onnxruntime_BUILD_UNIT_TESTS=ON onnxruntime_ENABLE_CUDA_EP_INTERNAL_TESTS=ON --update
2025-04-04 16:06:52,186 build [DEBUG] - Command line arguments:
--build_dir build/Release --config Release --cmake_generator Ninja --skip_submodule_sync --build_shared_lib --parallel --use_vcpkg --use_vcpkg_ms_internal_asset_cache --enable_onnx_tests --use_cuda --use_binskim_compliant_compile_flags --build_wheel --cuda_version=12.2 --cuda_home=/usr/local/cuda-12.2 --cudnn_home=/usr/local/cuda-12.2 --enable_cuda_profiling --build_java --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=80 onnxruntime_BUILD_UNIT_TESTS=ON onnxruntime_ENABLE_CUDA_EP_INTERNAL_TESTS=ON --update
2025-04-04 16:06:52,191 build [INFO] - Build started
2025-04-04 16:06:52,191 build [INFO] - Generating CMake build tree
2025-04-04 16:06:52,205 build [INFO] - /usr/bin/cmake /onnxruntime_src/cmake -Donnxruntime_RUN_ONNX_TESTS=ON -Donnxruntime_GENERATE_TEST_REPORTS=ON -DPython_EXECUTABLE=/opt/python/cp310-cp310/bin/python3 -Donnxruntime_USE_VCPKG=ON -Donnxruntime_USE_MIMALLOC=OFF -Donnxruntime_ENABLE_PYTHON=ON -Donnxruntime_BUILD_CSHARP=OFF -Donnxruntime_BUILD_JAVA=ON -Donnxruntime_BUILD_NODEJS=OFF -Donnxruntime_BUILD_OBJC=OFF -Donnxruntime_BUILD_SHARED_LIB=ON -Donnxruntime_BUILD_APPLE_FRAMEWORK=OFF -Donnxruntime_USE_DNNL=OFF -Donnxruntime_USE_NNAPI_BUILTIN=OFF -Donnxruntime_USE_VSINPU=OFF -Donnxruntime_USE_RKNPU=OFF -Donnxruntime_ENABLE_MICROSOFT_INTERNAL=OFF -Donnxruntime_USE_VITISAI=OFF -Donnxruntime_USE_TENSORRT=OFF -Donnxruntime_USE_TENSORRT_BUILTIN_PARSER=ON -Donnxruntime_USE_TENSORRT_INTERFACE=OFF -Donnxruntime_USE_CUDA_INTERFACE=OFF -Donnxruntime_USE_OPENVINO_INTERFACE=OFF -Donnxruntime_USE_VITISAI_INTERFACE=OFF -Donnxruntime_USE_QNN_INTERFACE=OFF -Donnxruntime_USE_MIGRAPHX=OFF -Donnxruntime_DISABLE_CONTRIB_OPS=OFF -Donnxruntime_DISABLE_ML_OPS=OFF -Donnxruntime_DISABLE_RTTI=OFF -Donnxruntime_DISABLE_EXCEPTIONS=OFF -Donnxruntime_MINIMAL_BUILD=OFF -Donnxruntime_EXTENDED_MINIMAL_BUILD=OFF -Donnxruntime_MINIMAL_BUILD_CUSTOM_OPS=OFF -Donnxruntime_REDUCED_OPS_BUILD=OFF -Donnxruntime_USE_DML=OFF -Donnxruntime_USE_WINML=OFF -Donnxruntime_BUILD_MS_EXPERIMENTAL_OPS=OFF -Donnxruntime_USE_TELEMETRY=OFF -Donnxruntime_ENABLE_LTO=OFF -Donnxruntime_USE_ACL=OFF -Donnxruntime_USE_ARMNN=OFF -Donnxruntime_ARMNN_RELU_USE_CPU=ON -Donnxruntime_ARMNN_BN_USE_CPU=ON -Donnxruntime_USE_JSEP=OFF -Donnxruntime_USE_WEBGPU=OFF -Donnxruntime_ENABLE_PIX_FOR_WEBGPU_EP=OFF -Donnxruntime_USE_EXTERNAL_DAWN=OFF -Donnxruntime_ENABLE_NVTX_PROFILE=OFF -Donnxruntime_ENABLE_TRAINING=OFF -Donnxruntime_ENABLE_TRAINING_OPS=OFF -Donnxruntime_ENABLE_TRAINING_APIS=OFF -Donnxruntime_ENABLE_CPU_FP16_OPS=OFF -Donnxruntime_USE_NCCL=OFF -Donnxruntime_BUILD_BENCHMARKS=OFF -Donnxruntime_USE_ROCM=OFF -Donnxruntime_GCOV_COVERAGE=OFF -Donnxruntime_USE_MPI=OFF -Donnxruntime_ENABLE_MEMORY_PROFILE=OFF -Donnxruntime_ENABLE_CUDA_LINE_NUMBER_INFO=OFF -Donnxruntime_USE_CUDA_NHWC_OPS=ON -Donnxruntime_BUILD_WEBASSEMBLY_STATIC_LIB=OFF -Donnxruntime_ENABLE_WEBASSEMBLY_EXCEPTION_CATCHING=ON -Donnxruntime_ENABLE_WEBASSEMBLY_API_EXCEPTION_CATCHING=OFF -Donnxruntime_ENABLE_WEBASSEMBLY_EXCEPTION_THROWING=ON -Donnxruntime_WEBASSEMBLY_RUN_TESTS_IN_BROWSER=OFF -Donnxruntime_ENABLE_WEBASSEMBLY_THREADS=OFF -Donnxruntime_ENABLE_WEBASSEMBLY_MEMORY64=OFF -Donnxruntime_ENABLE_WEBASSEMBLY_DEBUG_INFO=OFF -Donnxruntime_ENABLE_WEBASSEMBLY_PROFILING=OFF -Donnxruntime_ENABLE_LAZY
Test Linux CUDA x64 Release
stderr: WARNING! Your password will be stored unencrypted in /home/cloudtest/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores
Test Linux CUDA x64 Release
stderr: + PATH=/opt/python/cp310-cp310/bin:/usr/local/dotnet:/usr/lib/jvm/msopenjdk-17/bin:/opt/rh/gcc-toolset-12/root/usr/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ python3 tools/ci_build/build.py --build_dir build/Release --config Release --cmake_generator Ninja --skip_submodule_sync --build_shared_lib --parallel --use_vcpkg --use_vcpkg_ms_internal_asset_cache --enable_onnx_tests --use_cuda --use_binskim_compliant_compile_flags --cuda_version=12.2 --cuda_home=/usr/local/cuda-12.2 --cudnn_home=/usr/local/cuda-12.2 --enable_cuda_profiling --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=80 onnxruntime_BUILD_UNIT_TESTS=ON onnxruntime_ENABLE_CUDA_EP_INTERNAL_TESTS=ON --test
2025-04-04 16:47:44,098 build [DEBUG] - Command line arguments:
--build_dir build/Release --config Release --cmake_generator Ninja --skip_submodule_sync --build_shared_lib --parallel --use_vcpkg --use_vcpkg_ms_internal_asset_cache --enable_onnx_tests --use_cuda --use_binskim_compliant_compile_flags --cuda_version=12.2 --cuda_home=/usr/local/cuda-12.2 --cudnn_home=/usr/local/cuda-12.2 --enable_cuda_profiling --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=80 onnxruntime_BUILD_UNIT_TESTS=ON onnxruntime_ENABLE_CUDA_EP_INTERNAL_TESTS=ON --test
2025-04-04 16:47:44,105 build [INFO] - Build started
2025-04-04 16:47:44,105 build [DEBUG] - create symlink /data/models -> build/Release/models
2025-04-04 16:47:44,105 build [INFO] - Running tests for Release configuration
2025-04-04 16:47:44,105 build [INFO] - /usr/bin/ctest --build-config Release --verbose --timeout 10800
2025-04-04 17:07:56,839 build [INFO] - Build complete
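Taken together, the three stderr annotations show the same driver script run in three phases against one build tree: --update generates the CMake configuration, --build compiles and packages the Python wheel, and --test runs ctest. A condensed sketch of that sequence, abridged from the flag lists recorded above (the CI additionally passes --build_wheel and --build_java to the update and build phases, plus several onnxruntime_* cmake defines):

```sh
# Shared arguments, abridged from the annotations above.
COMMON=(
  --build_dir build/Release --config Release --cmake_generator Ninja
  --build_shared_lib --parallel --use_cuda --cuda_version=12.2
  --cuda_home=/usr/local/cuda-12.2 --cudnn_home=/usr/local/cuda-12.2
  --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=80
)

python3 tools/ci_build/build.py "${COMMON[@]}" --update   # generate the CMake build tree
python3 tools/ci_build/build.py "${COMMON[@]}" --build    # compile, then build the wheel
python3 tools/ci_build/build.py "${COMMON[@]}" --test     # run ctest (10800 s timeout)
```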
Artifacts
Produced during runtime
| Name | Size | Digest |
|---|---|---|
| build-output-x64-Release (Expired) | 1.74 GB | sha256:d429a3b665f4d4d56743544b12ba13da5d3ea7663a5f536a9b10af97adde0622 |
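The build-output-x64-Release artifact is presumably the hand-off from the build job to the test job. It has since expired, but while a workflow artifact is retained it can be fetched with the GitHub CLI; a sketch, where the run id is a placeholder since it does not appear in this excerpt:

```sh
# Download the named artifact from a specific workflow run.
# <run-id> is a placeholder; it is not shown on this page.
gh run download <run-id> \
  --name build-output-x64-Release \
  --dir ./build-output-x64-Release
```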