
Addition of Optional Parameter allow_red_cluster to Allow ISM to Run on a Red OpenSearch Cluster #1189

Open

wants to merge 6 commits into base: main

Conversation


@skumarp7 commented Jun 13, 2024

Issue #, if available:

#1127

Description of changes:

This PR adds a provision for ISM to run on a red OpenSearch cluster.

Behaviour before:

ISM on a Green/Yellow OpenSearch Cluster:

[Screenshot: ISM]

  • When disk usage reaches 95% of the available volume, the cluster goes into a red state.

ISM on a Red OpenSearch Cluster:

[Screenshot: ISM-Red]

Behaviour after the changes:

  • The user can set "allow_red_cluster" on each state; when it is set to true, all operations in that state are allowed to run on a red cluster. This parameter is optional (see the example policy below).

  • If the user sets the parameter to "true", the ISM job bypasses the red-cluster check for that state, and any transitions and actions under that state will be executed.

  • By default, "allow_red_cluster" is false, so the current behaviour of the API is unchanged if the user does not want ISM to run on a red cluster.
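
For illustration, a sketch of what a policy using the proposed parameter might look like. The surrounding structure is a standard ISM policy; the placement of "allow_red_cluster" inside a state follows the description above, and the policy ID and action are made up for the example:

PUT _plugins/_ism/policies/delete_on_red_policy
{
  "policy": {
    "description": "Delete old indices even while the cluster is red",
    "default_state": "delete",
    "states": [
      {
        "name": "delete",
        "allow_red_cluster": true,
        "actions": [
          { "delete": {} }
        ],
        "transitions": []
      }
    ]
  }
}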

ISM on a Green/Yellow OpenSearch Cluster:

[Screenshot: ISM-GY-WITH PARAM]


ISM on a Red OpenSearch Cluster with the optional parameter:

[Screenshot: ISM - 1]

  • If the cluster goes into a red state because more than 95% of the volume has been consumed, ISM will still execute the delete action (since allow_red_cluster is set to true), deleting the index of size 1 GB. This helps bring the cluster back to a yellow/green state, and ISM will continue to run successfully through the remaining states.

Checklist:

  • Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

Signed-off-by: Sanjay Kumar <[email protected]>
@skumarp7
Author

Hi,

Requesting feedback from the maintainers :)

@skumarp7
Author

Hi All,

I was trying to write tests for the above feature and am facing issues simulating a red cluster. I tried to bring down a data node to make the cluster go red, but since the integTest clusters are immutable, that was not possible.

Is there any other approach that I can use to simulate a red cluster?

@Hailong-am
Collaborator

> Hi All,
>
> I was trying to write tests for the above feature and am facing issues simulating a red cluster. I tried to bring down a data node to make the cluster go red, but since the integTest clusters are immutable, that was not possible.
>
> Is there any other approach that I can use to simulate a red cluster?

You can specify an allocation policy for an index. An example:

PUT test
{
  "settings": {
    "index.routing.allocation.require.size": "big"
  }
}

This will create the index test and try to allocate its shards on nodes whose size attribute is big. As we don't have such a node, the primary shards will not be able to allocate, which will cause the cluster health to go red.
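
As a quick sanity check (not part of the suggestion above, but a standard API), the cluster health endpoint should then report red:

GET _cluster/health

The response's "status" field should be "red", and "unassigned_shards" should be non-zero.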

Signed-off-by: Sanjay Kumar <[email protected]>
Signed-off-by: Sanjay Kumar <[email protected]>
@@ -250,11 +250,6 @@ object ManagedIndexRunner :
@Suppress("ReturnCount", "ComplexMethod", "LongMethod", "ComplexCondition", "NestedBlockDepth")
private suspend fun runManagedIndexConfig(managedIndexConfig: ManagedIndexConfig, jobContext: JobExecutionContext) {
logger.debug("Run job for index ${managedIndexConfig.index}")
// doing a check of local cluster health as we do not want to overload cluster manager node with potentially a lot of calls
Member

We shouldn't move this red-cluster check down, because it helps avoid making one or more calls below that could potentially worsen a red cluster.

Instead, we would need to rely on the ManagedIndexConfig, which we have directly here; it holds a snapshot of the policy. Only when the policy has allow_red_cluster should we bypass this check.

The problem is that ManagedIndexConfig doesn't know which state or action is running; only the metadata knows. So we won't be able to bypass the check only when a specific state or action is running. To do that, more complex logic would be needed to update ManagedIndexConfig with the required info...

Author
@skumarp7 commented Jul 17, 2024

Hi @bowenlan-amzn,

The calls we're making to the cluster are mostly GET operations (to fetch state, etc.), so I'm not sure how they would worsen a red cluster. I can still try to avoid them, but I'm not sure of the right approach.

This is my understanding of your suggestion; please correct it where required:

  • Move the check back to its original location.
  • Parse the policy to figure out whether "allow_red_cluster" is present anywhere in it. If it is not, return as the existing code does (sketched below).
  • Otherwise, allow the job to run further.
    (At this stage, we don't know which state is going to be executed, as we cannot make any GET calls.)
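
A minimal sketch of that top-level check, purely for illustration; allowRedCluster as a Boolean on the state model and the clusterIsRed() helper are hypothetical stand-ins for the proposed parameter and the existing health check:

// Inside runManagedIndexConfig, before making any further cluster calls.
val policy = managedIndexConfig.policy
// `allowRedCluster` mirrors the proposed state-level "allow_red_cluster" flag.
val anyStateAllowsRedCluster = policy?.states?.any { it.allowRedCluster } == true
if (clusterIsRed() && !anyStateAllowsRedCluster) {
    // Preserve the existing behaviour: do nothing while the cluster is red.
    return
}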

Challenges:

  • The allow_red_cluster flag has been added at the state level, i.e. some states in a policy can have it set to true while others might not.
  • After the job bypasses the initial policy-level check, we again need a check at the state/action level, based on whether the parameter is true or false, to run or skip that state/action.

That means, for a state that doesn't have allow_red_cluster set to true, we would still execute those 1-2 GET calls before skipping it, which again could potentially worsen the cluster.

Questions:

  1. How would we be able to get the value of the "allow_red_cluster" parameter for each state before we even get the metadata? Can you shed some light to help me implement this logic in ManagedIndexConfig?
  2. If we still need to place the check at the top level only, we could move the parameter to the policy level itself instead of the state level (as that can be fetched from the ManagedIndexConfig).
    That would mean running all states/actions (if the flag is true), or running nothing (if the flag is false).
    But then the concern arises that any ISM action that doesn't support a red cluster, or could harm one, would also be allowed to run on it.

Member

> The calls we're making to the cluster are mostly GET operations (to fetch state, etc.), so I'm not sure how they would worsen a red cluster.

Right, these are GET calls, but considering that this small change will affect all the ISM jobs, which can number 10k+ in a cluster, we should still keep the original logic: don't do anything if the cluster is red.

> That would mean running all states/actions (if the flag is true), or running nothing (if the flag is false).
> But then the concern arises that any ISM action that doesn't support a red cluster, or could harm one, would also be allowed to run on it.

We don't need new logic in ManagedIndexConfig; ManagedIndexConfig already has the policy in it.

I'm thinking of a slightly hacky solution: maybe have something like allow_red_cluster_after: 3d, then compare the current time with jobEnabledTime, which seems to be the time when the job is created. From that we can calculate when to start allowing a red cluster; before then, we still block execution on a red cluster.
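
A rough sketch of that comparison, assuming a hypothetical allowRedClusterAfter duration parsed from "allow_red_cluster_after" and the same hypothetical clusterIsRed() helper as above:

import java.time.Duration
import java.time.Instant

// Hypothetical: "allow_red_cluster_after" (e.g. "3d") parsed into a Duration.
val allowRedClusterAfter: Duration? = managedIndexConfig.policy?.allowRedClusterAfter
val jobEnabledTime: Instant? = managedIndexConfig.jobEnabledTime

// Allow a red cluster only once enough time has passed since the job was enabled.
val redClusterAllowed = allowRedClusterAfter != null && jobEnabledTime != null &&
    Instant.now().isAfter(jobEnabledTime.plus(allowRedClusterAfter))

if (clusterIsRed() && !redClusterAllowed) {
    return // keep blocking execution while the cluster is red
}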

Author

Hi @bowenlan-amzn,

I am guessing the idea being suggested here is that we give the user a time window, such that if the cluster continues to remain in a red state for that long, then ISM would be allowed to run.

A few follow-up questions:

  • Let's say the policy triggered the job on day 1 and the cluster goes red after 10 days. The job would immediately be triggered on the red cluster, as the difference between jobEnabledTime and the current time would already have exceeded 3 days. In this case, the red cluster would be bypassed regardless of the 3-day duration set in the parameter, and the job would not wait in the red state for 3d. Is this understanding correct?

  • Is there any way to retrieve how long the cluster has been in a red state?
    Just thinking out loud on whether we could instead check the duration for which the cluster has been red and compare that with allow_red_cluster_after: 3d.


val managedIndexConfig = getExistingManagedIndexConfig(indexName)

val endpoint = "sim-red"
Member

Would it make sense to not use a different index, but just use val indexName = "test-index-2" to simulate the red cluster situation?
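
For example (just to illustrate the suggestion, reusing the allocation trick from the earlier comment on the existing test index rather than a separate sim-red index):

PUT test-index-2
{
  "settings": {
    "index.routing.allocation.require.size": "big"
  }
}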
