
Expose max_concurrent_shard_requests in _msearch #33016

Merged
s1monw merged 3 commits into elastic:master on Aug 22, 2018

Conversation

s1monw (Contributor) commented on Aug 21, 2018

Today `_msearch` doesn't allow modifying the `max_concurrent_shard_requests`
per sub search request. This change adds support for setting this parameter on
all sub-search requests in an `_msearch`.

Relates to #31877

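As a rough illustration of what this change enables, here is a minimal sketch of an `_msearch` call that sets the limit once for the whole batch, assuming `max_concurrent_shard_requests` is accepted as a query-string parameter on the `_msearch` endpoint and applied to every sub-search request in it (the URL, index names, queries, and value below are placeholders):

```python
import json
import requests

# Two sub-searches in NDJSON form: a header line followed by a body line each.
# Index names and queries are placeholders.
lines = [
    {"index": "logs-a"},
    {"query": {"match": {"message": "error"}}},
    {"index": "logs-b"},
    {"query": {"match_all": {}}},
]
body = "\n".join(json.dumps(line) for line in lines) + "\n"

# Assumption: max_concurrent_shard_requests is passed as a query parameter on
# _msearch and applies to all sub-search requests in this batch.
resp = requests.post(
    "http://localhost:9200/_msearch",
    params={"max_concurrent_shard_requests": 64},
    data=body,
    headers={"Content-Type": "application/x-ndjson"},
)
print(resp.json())
```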
@s1monw added the >enhancement, :Search/Search (Search-related issues that do not fall into other categories), v7.0.0, and v6.5.0 labels on Aug 21, 2018
@s1monw requested a review from jpountz on Aug 21, 2018 at 07:45
@elasticmachine (Collaborator) commented:

Pinging @elastic/es-search-aggs

jpountz (Contributor) left a comment:


LGTM

data nodes in the cluster but at most `256`. In certain scenarios parallelism isn't achieved
through concurrent requests, so this protection can result in poor performance, for
instance in an environment where the cluster is formed by a single node holding a large number of
shards that are queried at the same time. In such a scenario it makes sense to increase the number to a higher value.
jpountz (Contributor):

maybe be explicit and mention that there are never two requests running at the same time too

s1monw (Contributor, Author):

Not sure I follow. You mean being explicit that this makes sense only in a single-request environment? I think the Kibana use case in general makes sense, where concurrently executing requests are low but potentially higher than one.

jpountz (Contributor):

Yes this is what I meant. I'm fine with relaxing it to say low numbers of concurrent requests if that works better for you?

s1monw (Contributor, Author):

++

@s1monw merged commit ffb1a5d into elastic:master on Aug 22, 2018
s1monw added a commit that referenced this pull request Aug 22, 2018
@@ -86,6 +86,16 @@ The msearch's `max_concurrent_searches` request parameter can be used to control
the maximum number of concurrent searches the multi search api will execute.
This default is based on the number of data nodes and the default search thread pool size.

The request parameter `max_concurrent_shard_requests` can be used to control the
maximum number of concurrent shard requests the each sub search request will execute.
A Member commented:

s/the/that ? sorry for being late to the party :)
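To make the wording in the docs hunk above concrete: `max_concurrent_searches` caps how many of the sub-searches the multi search API runs at the same time, while `max_concurrent_shard_requests` caps how many shard-level requests each of those sub-searches fans out to concurrently. A hedged sketch follows, again assuming the latter is a query-string parameter on `_msearch`; the index name and values are placeholders, picked for the single-node, many-shards scenario discussed earlier in the review:

```python
import json
import requests

# Placeholder scenario: one index with many shards on a single-node cluster.
searches = [
    ({"index": "many-shards"}, {"query": {"term": {"status": "open"}}}),
    ({"index": "many-shards"}, {"query": {"term": {"status": "closed"}}}),
]
body = "".join(json.dumps(part) + "\n" for pair in searches for part in pair)

resp = requests.post(
    "http://localhost:9200/_msearch",
    params={
        # How many sub-searches the multi search API executes concurrently.
        "max_concurrent_searches": 2,
        # How many shard requests each sub-search issues at once (assumed query param).
        "max_concurrent_shard_requests": 128,
    },
    data=body,
    headers={"Content-Type": "application/x-ndjson"},
)
print(resp.json())
```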

Labels
>enhancement, :Search/Search (Search-related issues that do not fall into other categories), v6.5.0, v7.0.0-beta1
5 participants