Performance regression for search query in single node cluster #31877
Pinging @elastic/es-search-aggs
@ctrlaltdel I see this as a feature rather than a bug. With that many shards on a single node, having this protection is important to keep things reasonable with many concurrent users. To me the problem is more the number of shards per node here.
@jpountz thanks for the comment. My typical use-case here is an ELK stack running on a single machine and serving as a search engine for syslog messages. There's usually only a single query running at a given time, coming from a kibana dashboard. In this case, running on a 24-core machine, typical query time is now about 5 times slower with the latest version of elasticsearch than it was with 5.5.3. Could you please elaborate on how changing the number of shards here would improve performance? AFAIK, since #25632 was merged, a single query can only run on 5 threads (so use at most 5 cores) per node by default. The only way to improve this would be to actually increase the number of shards per index. Unfortunately there's no way to tell kibana to set the `max_concurrent_shard_requests` parameter.
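For context, on a plain `_search` request the cap can already be raised per request via a query-string parameter. A minimal sketch, assuming a single-node cluster on `localhost:9200` and a hypothetical `logstash-*` index pattern; the request is printed rather than sent so the snippet runs without a cluster:

```shell
# Dry run: the curl command is printed, not executed.
# Remove the leading 'echo' to actually issue the request.
ES_HOST="http://localhost:9200"    # assumed single-node cluster
INDEX="logstash-*"                 # hypothetical syslog index pattern
QUERY='{"query":{"match_all":{}}}'

# Raise the per-request cap from the 6.x single-node default of 5.
SEARCH_URL="$ES_HOST/$INDEX/_search?max_concurrent_shard_requests=16"
echo curl -s -H 'Content-Type: application/json' "$SEARCH_URL" -d "$QUERY"
```

The problem described in this thread is that Kibana issues its searches through `_msearch`, which had no way to pass this parameter.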
I agree it's a shame you can't use all your resources here. This use-case isn't typical in that you don't gain concurrency via multiple requests. Think of 5 requests hitting your server at the same time: you will have at most 25 concurrent shard requests hitting your node, which is a good protection. Yet your case wants to maximize resource utilization per node, and it's a shame that kibana can't trigger it. I wonder if we should allow multi search to override this and then make kibana expose it? @jpountz WDYT?
+1 |
Today `_msearch` doesn't allow modifying the `max_concurrent_shard_requests` per sub search request. This change adds support for setting this parameter on all sub-search requests in an `_msearch`. Relates to elastic#31877
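With that change, the parameter can be supplied on the `_msearch` URL and applies to every sub-search. A minimal sketch (host, index pattern, and the example queries are assumptions), again printed as a dry run:

```shell
# An _msearch body is NDJSON: one header line plus one body line per
# sub-search, terminated by a newline.
ES_HOST="http://localhost:9200"    # assumed single-node cluster
MSEARCH_URL="$ES_HOST/_msearch?max_concurrent_shard_requests=16"
BODY='{"index":"logstash-*"}
{"query":{"match_all":{}}}
{"index":"logstash-*"}
{"query":{"term":{"syslog_severity":"error"}}}
'
# Dry run: remove the leading 'echo' to actually issue the request.
echo curl -s -H 'Content-Type: application/x-ndjson' "$MSEARCH_URL" --data-binary "$BODY"
```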
I created a PR to expose this setting on `_msearch`.
@s1monw looks good, thanks a lot for taking care of this issue :)
this has been integrated in kibana. I am closing this. |
Elasticsearch version (`bin/elasticsearch --version`): 6.3.1

Plugins installed: []

JVM version (`java -version`): OpenJDK Runtime Environment (build 10.0.1+10-Ubuntu-3ubuntu1)

OS version (`uname -a` if on a Unix-like system): Ubuntu 18.04, Linux es-perf 4.13.0-25-generic #29-Ubuntu SMP Mon Jan 8 21:14:41 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Description of the problem including expected versus actual behavior:
A single search query running on a single-node elasticsearch cluster is much slower with elasticsearch 6.3.1 than it was with 5.5.3, whereas it would be expected to take about the same time.
An inefficient use of the CPU due to the limit of concurrent shard requests per search request (implemented in #25632) is the likely culprit.
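A rough model of why the cap hurts a lone query: with the limit at the 6.x single-node default of 5, the shards of an index are searched in sequential batches of 5, and raising the limit to the shard count collapses this to a single batch. A sketch, assuming a hypothetical 16-shard index on one node:

```shell
# ceil(shards / cap) = number of sequential batches a single query needs.
batches() { echo $(( ($1 + $2 - 1) / $2 )); }

shards=16        # hypothetical index with 16 shards on one node
cap_default=5    # per-node default introduced by #25632
cap_raised=16

echo "cap=5:  $(batches $shards $cap_default) sequential batches"
echo "cap=16: $(batches $shards $cap_raised) sequential batch"
```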
Steps to reproduce:

The following steps were tested in a virtual machine with 8 cores and 16 GB of RAM.

1. Populate the cluster using `populate.sh` (see below).
2. Run a `match_all` search query while showing CPU usage.
3. Repeat the query with `max_concurrent_shard_requests` set to 16.

Summary:

Test scripts:

`populate.sh`:

`query.sh`:
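The bodies of `populate.sh` and `query.sh` did not survive in this copy of the thread. As a purely hypothetical sketch of what such scripts could look like (index names, shard counts, and settings are all assumptions), printed as dry runs so they execute without a cluster:

```shell
# populate (sketch): create a few multi-shard daily indices.
ES_HOST="http://localhost:9200"    # assumed single-node cluster
SETTINGS='{"settings":{"number_of_shards":4,"number_of_replicas":0}}'
created=0
for day in 01 02 03 04; do         # hypothetical daily syslog indices
  echo curl -s -XPUT "$ES_HOST/logstash-2018.07.$day" \
    -H 'Content-Type: application/json' -d "$SETTINGS"
  created=$((created + 1))
done

# query (sketch): time a match_all across all indices while watching CPU.
echo time curl -s "$ES_HOST/logstash-*/_search" \
  -H 'Content-Type: application/json' \
  -d '{"query":{"match_all":{}}}'
```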