
[Performance]: Strange cpu utilization and high latency #26449

Open · 3 tasks done
LinGeLin opened this issue Sep 5, 2024 · 12 comments

Labels: category: CPU (OpenVINO CPU plugin), performance (Performance related topics), support_request

Comments

LinGeLin commented Sep 5, 2024

OpenVINO Version

2024.0.0

Operating System

Ubuntu 20.04 (LTS)

Device used for inference

None

OpenVINO installation

PyPi

Programming Language

Python

Hardware Architecture

x86 (64 bits)

Model used

Rec mode

Model quantization

Yes

Target Platform

No response

Performance issue description

taskset -c 0-23 benchmark_app -m /xxx/model.xml -report_type detailed_counters -report_folder /xxxbenchmark_app_report/ -dump_config /xxxdump_config.json -hint none -nthreads 24 -nstreams 4

[screenshot: benchmark_app results]

Latency is twice as high as with nstreams=2, but there is still only one CPU at 100% utilization per stream. What is that 100%-utilized CPU doing? What logic does the OpenVINO engine run on it?

In addition, there are a lot of Reorder nodes in the stress-test report file, and they take a lot of time. They look like f32-to-bf16 conversions? Can you confirm that for me? How can this be optimized?

[screenshots: detailed performance counters showing the time spent in Reorder nodes]

Is there a way to optimize it?
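
For reference, a minimal Python sketch of how the same streams/threads setup can be reproduced through the OpenVINO Python API (the model path, static shapes, and f32 inputs below are assumptions, not the actual model); running it under htop/top makes it easier to see which cores each stream keeps busy:

import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder path; use the real IR

# Mirror the benchmark_app settings "-nstreams 4 -nthreads 24".
# Explicit values take precedence over any performance hint.
compiled = core.compile_model(
    model,
    "CPU",
    {"NUM_STREAMS": 4, "INFERENCE_NUM_THREADS": 24},
)

# One infer request per stream; AsyncInferQueue schedules jobs across them.
queue = ov.AsyncInferQueue(compiled, 4)

# Random dummy inputs (assumes static shapes and f32 inputs; adjust for the real model).
feed = {
    inp.get_any_name(): np.random.rand(*inp.shape).astype(np.float32)
    for inp in compiled.inputs
}

for _ in range(1000):  # observe per-core load with htop/top while this runs
    queue.start_async(feed)
queue.wait_all()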

Step-by-step reproduction

No response

Issue submission checklist

  • I'm reporting a performance issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
LinGeLin added the performance (Performance related topics) and support_request labels on Sep 5, 2024
LinGeLin (Author) commented Sep 5, 2024

Give this a shot https://bit.ly/4eaAnWN Passcode: changeme (you must have a gcc compiler)

What's this?

LinGeLin (Author) commented Sep 6, 2024

It appears that this Reorder is indeed performing a conversion from fp32 to bf16. This is quite embarrassing. I originally intended to use bf16 for acceleration, but the conversion to bf16 has now become the new bottleneck.

LinGeLin (Author) commented Sep 6, 2024

In terms of resource scheduling, one of the CPU cores has excessively high utilization, which seems likely to become a bottleneck.

wenjiew added the category: CPU (OpenVINO CPU plugin) label on Sep 6, 2024
wenjiew commented Sep 6, 2024

@wangleis Is this related to core binding? Thanks!

wangleis (Contributor) commented Sep 6, 2024

@LinGeLin For the low CPU utilization issue, it looks like the performance is limited by the memory bandwidth. Could you try -nthreads 4 -nstreams 1 or -nthreads 2 -nstreams 1 to check whether the low CPU utilization goes away?
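
If it helps, here is a quick sketch of trying those configurations from the Python API as well (the model path below is a placeholder); reading the properties back shows what the CPU plugin actually selected:

import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder path

for n_threads in (4, 2):
    compiled = core.compile_model(
        model, "CPU",
        {"NUM_STREAMS": 1, "INFERENCE_NUM_THREADS": n_threads},
    )
    # Query the effective values chosen by the plugin for this configuration.
    print(
        "requested threads:", n_threads,
        "| streams:", compiled.get_property("NUM_STREAMS"),
        "| threads:", compiled.get_property("INFERENCE_NUM_THREADS"),
    )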

LinGeLin (Author) commented Sep 6, 2024

@LinGeLin For the low CPU utilization issue, it looks like the performance is limited by the memory bandwidth. Could you try -nthreads 4 -nstreams 1 or -nthreads 2 -nstreams 1 to check whether the low CPU utilization goes away?

You might be right. I tried setting nthreads=12 and nstreams=2, and the CPU utilization increased from 17% to 30%. Another issue is how to address the time-consuming conversion of the input from FP32 to BF16. Should I modify the model? benchmark_app's -ip parameter currently does not support BF16; is there a plan to add support for it?
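
One direction that might avoid the f32-to-bf16 Reorder at the input, sketched below under the assumption that the application can already produce bf16 host data (NumPy has no native bfloat16, so filling such a tensor is left to the application), is to declare the input tensor element type as bf16 via the pre/post-processing API; whether this actually removes the Reorder should be confirmed with the detailed counters report:

import openvino as ov
from openvino.preprocess import PrePostProcessor

core = ov.Core()
model = core.read_model("model.xml")  # placeholder path

ppp = PrePostProcessor(model)
# For a single-input model; pass the input name for multi-input models.
# Declares that the application will feed bf16 data directly; the type
# conversion then becomes explicit in the graph, where the plugin may be
# able to fuse or drop it when running with bf16 inference precision.
ppp.input().tensor().set_element_type(ov.Type.bf16)
model = ppp.build()

compiled = core.compile_model(model, "CPU", {"INFERENCE_PRECISION_HINT": "bf16"})

# Input tensors must then be created with the matching element type,
# e.g. ov.Tensor(ov.Type.bf16, input_shape), and filled by the application.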

yuxu42 (Contributor) commented Sep 10, 2024

@LinGeLin Can you share the details of the CPU you're using with lscpu?

LinGeLin (Author) commented:

@LinGeLin Can you share the details of the CPU you're using with lscpu?

Yes.

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Gold 6442Y
Stepping: 8
CPU MHz: 3299.756
CPU max MHz: 2601.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 2048K
L3 cache: 61440K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr avx512_fp16 amx_tile flush_l1d arch_capabilities

yuxu42 (Contributor) commented Sep 10, 2024

Okay. I suppose you've already tried benchmark_app with -inference_precision f32, right? Can it meet your performance target?
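
A rough sketch of making that comparison from Python as well (placeholder model path, static f32 inputs assumed; benchmark_app remains the more rigorous measurement):

import time
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder path

def measure(precision, iters=200):
    compiled = core.compile_model(
        model, "CPU", {"INFERENCE_PRECISION_HINT": precision}
    )
    request = compiled.create_infer_request()
    feed = {
        inp.get_any_name(): np.random.rand(*inp.shape).astype(np.float32)
        for inp in compiled.inputs
    }
    request.infer(feed)  # warm-up
    start = time.perf_counter()
    for _ in range(iters):
        request.infer(feed)
    return (time.perf_counter() - start) / iters * 1e3  # ms per inference

for precision in ("f32", "bf16"):
    print(precision, "->", round(measure(precision), 2), "ms / infer")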

LinGeLin (Author) commented:

Okay. I suppose you've already tried benchmark_app with -inference_precision f32, right? Can it meet your performance target?

I have tested it. The boss wants us to migrate from TensorFlow to OpenVINO because we can use BF16, but the current test results do not show much of an advantage.

yuxu42 (Contributor) commented Sep 10, 2024

Can you share the model with us? We can take a look.

LinGeLin (Author) commented:

Can you share the model with us? We can take a look.

Due to certain confidentiality requirements, I need to hide the input and output information, but modifying the saved_model is too painful. Modifying...
