cluster.initial_master_nodes #746
```ruby
{
  discovery: {
    seed_hosts: es7_unicast_masters
  },
  cluster: {
    # Cluster needs at least 1 initial master when bootstrapping for
    # the first time
    initial_master_nodes: es7_unicast_masters(join: false).first
  }
}
```

You can leave the config in place; in case of a complete cluster shutdown you will need it again. The trick here is to discover the masters using the Chef search(:) API, then sort the hosts and select the first one for cluster.initial_master_nodes.
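For reference, a minimal sketch of what such a helper might look like in a cookbook's libraries/ directory. Everything here is an assumption, not the commenter's actual code: the role name elasticsearch_master, the environment scoping, and the meaning of the join: option are guesses at the setup behind the snippet above.

```ruby
# libraries/es7_masters.rb -- a hypothetical helper, not the cookbook's API.
# Finds master-eligible nodes with Chef search and returns a deterministically
# sorted list of hostnames, so every node computes the same ordering.
module Es7Masters
  def es7_unicast_masters(join: true)
    masters = search(:node,
                     "role:elasticsearch_master AND chef_environment:#{node.chef_environment}")
              .map { |n| n['fqdn'] }
    # join: false is assumed here to mean "exclude the node currently
    # converging", matching the (join: false).first call quoted above.
    masters.delete(node['fqdn']) unless join
    masters.sort
  end
end

Chef::Recipe.send(:include, Es7Masters)
```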
Thank you
Thank you. However, while I think I understand what you are getting at, I seem to have a bit of a gap in understanding what you are doing. Here is a snippet of my recipe to deploy the config: I tried to narrow this down by doing some node searches but did not get anything back. Not trying to get free Chef help or anything, but I would appreciate a bit more information on how to do this :)
- use the Chef search API to find all masters
- map the found masters into an array and sort them
- assign the above array to discovery.seed_hosts

(See the sketch below for how these steps fit together.)
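Put together, the steps above might look something like this in a recipe. This is a sketch, not the poster's actual code: the attribute path ['elasticsearch']['config'] and the way elasticsearch.yml gets rendered are assumptions.

```ruby
# Hypothetical recipe snippet tying the steps together; adjust the
# attribute layout to however your recipe renders elasticsearch.yml.
masters = es7_unicast_masters # search + map + sort, via the helper above

node.default['elasticsearch']['config']['discovery.seed_hosts'] = masters
# Every node deterministically names the same single bootstrap master:
node.default['elasticsearch']['config']['cluster.initial_master_nodes'] =
  [es7_unicast_masters(join: false).first]
```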
Just updated our stack to 7.x, and after going through the deprecations I think I have a cluster all set up.
However, after a little back and forth with support, it seems the cluster setup has changed quite a bit.
https://www.elastic.co/blog/a-new-era-for-cluster-coordination-in-elasticsearch
I figured this out and already have cluster.initial_master_nodes set via a configuration item. However, I was told this should only be set for the initial deployment of a cluster and that you should take it out after that.
I have poked around but am not sure whether I have missed something about the best way to initially bootstrap a cluster and then have this setting removed.
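As for bootstrapping once and then dropping the setting, one possible pattern (a sketch under stated assumptions, not an established cookbook feature) is to gate cluster.initial_master_nodes on a local marker file created after the first successful start. The marker path and the service[elasticsearch] resource name are assumptions about the surrounding cookbook.

```ruby
# Hypothetical sketch: emit cluster.initial_master_nodes only until this
# node has started Elasticsearch once. The marker file is an assumed
# signal; any reliable "cluster already formed" check would do.
marker = '/var/lib/elasticsearch/.bootstrapped'

config = { 'discovery.seed_hosts' => es7_unicast_masters }
unless ::File.exist?(marker)
  config['cluster.initial_master_nodes'] = [es7_unicast_masters(join: false).first]
end

# Hand the hash to whatever renders elasticsearch.yml (assumed layout).
node.default['elasticsearch']['config'] = config

# Drop the marker once the service has started, so the next converge
# renders elasticsearch.yml without the bootstrap setting.
file marker do
  content "bootstrapped\n"
  action :nothing
  subscribes :create, 'service[elasticsearch]', :delayed
end
```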