
All primary shards on the same node for every index in 6.x #29437

Closed
tdoman opened this issue Apr 9, 2018 · 3 comments
Labels
:Distributed Coordination/Allocation All issues relating to the decision making around placing a shard (both master logic & on the nodes)

Comments


tdoman commented Apr 9, 2018

ES 6.1.1
NEST 6.0.1
3 node cluster, 2 replicas for each index

In looking for the cause of a "hot" node, I read a lot about how "updates" can cause this, since they need to be coordinated via the primary shard. In my cluster I have 34 indexes, most with 5 shards, and some "user" indexes with 20 shards. Every primary shard for every index is on the same node, which means that every update request I make has to be handled by that node.
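For anyone diagnosing the same symptom, one way to confirm the skew is to count primaries per node from `GET _cat/shards` output. A minimal sketch (assuming the column order `index,shard,prirep,node` via the `h=` parameter; the index and node names below are made up):

```python
from collections import Counter

def primaries_per_node(cat_shards_text: str) -> Counter:
    """Count primary shards per node from `GET _cat/shards?h=index,shard,prirep,node`.

    Each line looks like "users 0 p node-1"; replica rows ("r") are ignored.
    """
    counts = Counter()
    for line in cat_shards_text.strip().splitlines():
        parts = line.split()
        if len(parts) == 4 and parts[2] == "p":
            counts[parts[3]] += 1
    return counts

sample = """\
users 0 p node-1
users 0 r node-2
users 1 p node-1
users 1 r node-3
logs  0 p node-1
logs  0 r node-2
"""
print(primaries_per_node(sample))  # every primary on node-1: Counter({'node-1': 3})
```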

As I perused similar Q&As found via web searches and reviewed Cluster Level Shard Allocation, I couldn't see any way to redistribute the primary shards when we're "fully replicated" as we are. So I tried an experiment: I set one of my indexes to 1 replica. Sure enough, a couple of the primary shards moved to another node. Then I set it back to 2 replicas, and the primaries stayed where they had been moved. So that's a cheat. Is there a way for me to more explicitly distribute the "primary" designation for shards across the nodes in my cluster?
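For reference, the "bounce" described above is just two updates to the index's `number_of_replicas` setting (`PUT /<index>/_settings`). A minimal sketch that builds the request tuples (the index name `users` is an example; against a live cluster these would be issued as HTTP calls):

```python
import json

def set_replicas(index: str, n: int):
    """Build a (method, path, body) tuple for PUT /<index>/_settings."""
    body = json.dumps({"index": {"number_of_replicas": n}})
    return ("PUT", f"/{index}/_settings", body)

# The "bounce": drop to 1 replica (the balancer relocates some primaries),
# then restore 2 replicas; the relocated primaries stay where they moved.
bounce = [set_replicas("users", 1), set_replicas("users", 2)]
```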

Some of the most heavily updated indexes are ones where we use terms lookup, where ES best practices dictate we have a replica so that the file system cache can be utilized and ES doesn't have to request the terms from another replica. OK, so we have 2 replicas, but I also need to balance out the update traffic, and I don't know if my little "back and forth" trick is even persistent (i.e. our Windows VMs auto-update in a staggered way). FWIW, I've been referred to Elasticsearch Versioning Support, which we can consider and potentially implement (assuming the C# NEST library supports it), but I'm definitely not sure we'll even want to do that. Regardless, of course, we also have a running production cluster we need to keep performant in the meantime.
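On the versioning suggestion: the idea is optimistic concurrency control — send the document's current version with the write and retry on a conflict. A toy in-memory illustration of that read-modify-write loop (not NEST code; `Store` is a hypothetical stand-in for an index that rejects stale writes, like ES's `version_conflict_engine_exception`):

```python
class VersionConflict(Exception):
    pass

class Store:
    """Toy stand-in for an index using internal versioning (_version)."""
    def __init__(self):
        self.doc, self.version = {}, 0

    def get(self):
        return dict(self.doc), self.version

    def index(self, doc, expected_version):
        # Reject the write if another writer bumped the version since our read.
        if expected_version != self.version:
            raise VersionConflict()
        self.doc, self.version = doc, self.version + 1
        return self.version

def update_with_retry(store, mutate, retries=3):
    """Read the doc, apply a change, and write back; re-read and retry on conflict."""
    for _ in range(retries):
        doc, version = store.get()
        mutate(doc)
        try:
            return store.index(doc, version)
        except VersionConflict:
            continue
    raise VersionConflict()

store = Store()
update_with_retry(store, lambda d: d.update(count=1))
print(store.get())  # ({'count': 1}, 1)
```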

@tdoman tdoman changed the title All primary shards on one node for every index in 6.x All primary shards on the same node for every index in 6.x Apr 9, 2018
@jasontedor
Member

Elastic provides a forum for asking general questions and instead prefers to use GitHub only for verified bug reports and feature requests. There's an active community there that should be able to help get an answer to your question. As such, I hope you don't mind that I close this.


tdoman commented Apr 10, 2018

@jasontedor I don't mind if this isn't, in fact, a bug. As I described above, the primaries move around if I change the number of replicas, so it feels like a bug to me. In addition, if the primaries can, in fact, move as I've demonstrated, it'd be great to be able to assign them via a command. The "exception" case for updates makes this a very desirable feature if it's not a bug, and, as I said, I've discovered I'm far from the only one struggling with this issue (e.g. #12279, #29436, and many others in the discussion forums). Feature or bug, this seemed quite the appropriate place for this report and/or discussion of a workaround.

@DaveCTurner DaveCTurner added the :Distributed Coordination/Allocation All issues relating to the decision making around placing a shard (both master logic & on the nodes) label Apr 10, 2018
@elasticmachine
Collaborator

Pinging @elastic/es-distributed
