
In cmake/CMakeLists.txt all AVX-related options are set OFF; do we need to do anything to use AVX features? #11833


Open
mmxuan18 opened this issue Jun 13, 2022 · 9 comments
Labels
core runtime: issues related to core runtime

Comments

@mmxuan18

mmxuan18 commented Jun 13, 2022

In cmake/CMakeLists.txt all AVX-related options are set OFF, as shown below, and I can't find any place where they are set to ON. Do we need to care about these options? My production environment has machines with very different CPU architectures; does onnxruntime automatically select the right AVX level to use for acceleration?

(screenshot: the AVX-related options in cmake/CMakeLists.txt, all set to OFF)

I added a message() command at the end of CMakeLists.txt to display CMAKE_CXX_FLAGS, and the only output is "-ffunction-sections -fdata-sections -Wno-error=attributes -DCPUINFO_SUPPORTED", which doesn't contain any information about the AVX arch.

@yuslepukhin yuslepukhin added the core runtime label Jun 13, 2022
@yuslepukhin
Member

@chenfucn Mind taking a look?

@yufenglee
Member

For the heaviest operators, such as MatMul and Conv, ORT automatically selects the code path that best fits the architecture.
Those options are for other operators. If you know the exact architecture of your target hardware, you can try turning them on and building from source. Usually there is little impact.
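Turning the options on and building from source, as suggested above, might look like the sketch below. The option names mirror the ones shown in the issue's screenshot of cmake/CMakeLists.txt, and `--cmake_extra_defines` is the build script's mechanism for overriding CMake cache variables; verify both against your checkout before building.

```shell
# Sketch (Linux): rebuild ONNX Runtime with the AVX-related CMake options
# enabled. Check the exact option names in cmake/CMakeLists.txt first.
./build.sh --config Release --parallel \
  --cmake_extra_defines onnxruntime_USE_AVX=ON onnxruntime_USE_AVX2=ON
```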

@SuperLee188

For the heaviest operators, such as MatMul and Conv, ORT automatically selects the code path that best fits the architecture. Those options are for other operators. If you know the exact architecture of your target hardware, you can try turning them on and building from source. Usually there is little impact.

Can this be understood to mean that, for CPUs that support the AVX instruction set, ONNX Runtime will use AVX acceleration by default?

@chenfucn
Contributor

Yes

@LY000001

LY000001 commented Oct 7, 2023

For the heaviest operators, such as MatMul and Conv, ORT automatically selects the code path that best fits the architecture. Those options are for other operators. If you know the exact architecture of your target hardware, you can try turning them on and building from source. Usually there is little impact.

I turned AVX, AVX2, and AVX512 off and then compiled onnxruntime, but I still find AVX instructions in the binary. Why?
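One way to check this for yourself is to scan the built library's disassembly for AVX-style register references (ymm for AVX/AVX2, zmm for AVX-512). The helper below is a hypothetical sketch; the demo runs it on a two-line sample so the idea is self-contained, and in practice you would pipe in real `objdump` output.

```shell
# count_avx_lines: count lines of disassembly that reference AVX (ymm) or
# AVX-512 (zmm) registers. Real usage (path is an example):
#   objdump -d libonnxruntime.so | count_avx_lines
count_avx_lines() { grep -cE 'ymm|zmm'; }

# Demo on a two-line sample: only the first line uses an AVX register,
# so this prints 1.
printf 'vmovaps %%ymm0,%%ymm1\nmov %%eax,%%ebx\n' | count_avx_lines
```

A nonzero count only proves AVX code is present in the binary, not that it executes on your machine; the runtime dispatch decides that.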

@ogencoglu

Can this be understood to mean that, for CPUs that support the AVX instruction set, ONNX Runtime will use AVX acceleration by default?

Yes

@chenfucn is it the case that onnxruntime must be compiled specifically for AVX2 etc. to benefit from them, or is a simple pip install enough?

@sikandermukaram

Is onnxruntime using AVX512 by default? If yes, how can I confirm this? OpenVINO has a benchmark that shows which instruction set is used for different layers/ops; is there something similar for onnxruntime? Also, how can I make sure onnxruntime doesn't use AVX512, or uses AVX2 instead?

@chenfucn
Contributor

#11833 (comment)
At startup, MLAS detects the underlying platform and, based on the instruction sets available, hooks up the best available kernels.

Hi @jywu-msft, would you help with future MLAS questions? I have moved my focus to other areas. Thanks!
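The startup detection described above keys off the CPU features the hardware advertises. A quick way to see which AVX-family features your machine reports (on Linux, via /proc/cpuinfo) is sketched below; the demo filters a sample flag string so the block is self-contained.

```shell
# avx_flags: pick the avx* entries out of a space-separated CPU flags string.
# Real usage (Linux): grep -m1 '^flags' /proc/cpuinfo | avx_flags
avx_flags() { tr ' ' '\n' | grep '^avx' | sort; }

# Demo on a sample flag string: prints avx, avx2, avx512f, one per line.
printf 'fpu sse2 avx avx2 avx512f' | avx_flags
```

If `avx512f` appears in the real output, the AVX-512 kernels are eligible for selection on that machine.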

@sikandermukaram

#11833 (comment) At startup, MLAS detects the underlying platform and, based on the instruction sets available, hooks up the best available kernels.

Hi @jywu-msft, would you help with future MLAS questions? I have moved my focus to other areas. Thanks!

So I tested a quantized model on an E2 instance, which doesn't support AVX512, versus an N1 instance that does. The model's scores change a lot. I don't want to move to an E2 instance, since I develop on N1 but deploy on E2. Is there a way to make MLAS not use AVX512 on a CPU that supports it? And I still couldn't find a way to know which kernels it used (as the OpenVINO benchmark does).

8 participants