
[SPARK-23881][CORE][TEST] Fix flaky test JobCancellationSuite."interruptible iterator of shuffle reader" #20993

Closed
jiangxb1987 wants to merge 3 commits into apache:master from jiangxb1987:JobCancellationSuite

Conversation

@jiangxb1987 (Contributor) commented Apr 6, 2018

What changes were proposed in this pull request?

The test case JobCancellationSuite."interruptible iterator of shuffle reader" has been flaky because the `KillTask` event is handled asynchronously, so it can happen that the semaphore is released while the task is still running.
Actually, we only have to check that the total number of processed elements is less than the number of input elements; then we know the task was cancelled.
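
For illustration, here is a minimal sketch of the check this patch switches to; the identifiers below (e.g. `processed`) are assumptions, not the suite's actual code:

```scala
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical sketch of the new assertion strategy, not the suite's exact code.
val numElements = 10000
// Bumped once per element inside the reduce-stage task, e.g.
//   iter.map { x => processed.incrementAndGet(); x }
val processed = new AtomicInteger(0)

// ... run the two-stage job, cancel it, and wait for the job-end event ...

// Instead of waiting for the killed task to fully stop (which happens asynchronously and
// made the test flaky), only require that the task did not drain the whole shuffle input:
assert(processed.get() < numElements)
```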

How was this patch tested?

The new test case still fails without the proposed patch, and succeeds on current master.

@jiangxb1987 (Contributor, Author)

cc @cloud-fan @advancedxy @dongjoon-hyun PTAL

// 2. task in reduce stage is blocked as taskCancelledSemaphore is not released until
// JobCancelled event is posted.
// After job being cancelled, task in reduce stage will be cancelled asynchronously, thus
// partial of the inputs should not get processed.
Review comment (Contributor):

thus partial of the inputs should not get processed. ->
It's very unlikely that Spark can process 10000 elements between JobCancelled is posted and task is really killed.

@cloud-fan (Contributor)

LGTM

@SparkQA commented Apr 6, 2018

Test build #88973 has finished for PR 20993 at commit 965a80f.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@advancedxy (Contributor)

@jiangxb1987 can you post a Jenkins job link referencing the flaky test?

@jiangxb1987 (Contributor, Author) commented Apr 6, 2018 via email

@advancedxy (Contributor) left a comment

Sorry for this flaky test.

LGTM except one comment.

// JobCancelled event is posted.
// After job being cancelled, task in reduce stage will be cancelled asynchronously, thus
// partial of the inputs should not get processed (It's very unlikely that Spark can process
// 10000 elements between JobCancelled is posted and task is really killed).
Review comment (Contributor):

I re-checked the killTask code. I believe there is still a possibility (very unlikely) that the reduce task processes all the input elements before the task is really killed, in which case we cannot observe the reduce task being interrupted.

One way to reduce that possibility would be to increase the number of input elements. So I believe we should add a comment on `val numElements = 10000` to let later readers know that we chose 10000 for a reason.
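
For illustration, a hypothetical sketch of such a comment together with the semaphore/listener synchronization described in the diff above; apart from `taskCancelledSemaphore` and `numElements`, which mirror the diff, the wiring below is an assumption rather than the suite's exact code:

```scala
import java.util.concurrent.Semaphore
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd}

val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("sketch"))

// Chosen large enough that the reduce task is very unlikely to process every remaining
// element in the window between the JobCancelled event being posted and the task
// actually being killed.
val numElements = 10000

// Reduce-stage tasks acquire one permit per element, so they stay blocked here...
val taskCancelledSemaphore = new Semaphore(0)

// ...until the job has ended (i.e. after cancellation), when the listener releases the permits.
sc.addSparkListener(new SparkListener {
  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
    taskCancelledSemaphore.release(numElements)
  }
})
```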

@SparkQA commented Apr 6, 2018

Test build #88987 has finished for PR 20993 at commit a304f9b.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Apr 9, 2018

Test build #89060 has finished for PR 20993 at commit 2c7d1c7.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@gatorsmile (Member)

The latest commit just adds new comments to the code; the previous build already passed all tests.

Merged to master/2.3

asfgit pushed a commit that referenced this pull request Apr 9, 2018
[SPARK-23881][CORE][TEST] Fix flaky test JobCancellationSuite."interruptible iterator of shuffle reader"

## What changes were proposed in this pull request?

The test case JobCancellationSuite."interruptible iterator of shuffle reader" has been flaky because the `KillTask` event is handled asynchronously, so it can happen that the semaphore is released while the task is still running.
Actually, we only have to check that the total number of processed elements is less than the number of input elements; then we know the task was cancelled.

## How was this patch tested?

The new test case still fails without the proposed patch, and succeeds on current master.

Author: Xingbo Jiang <[email protected]>

Closes #20993 from jiangxb1987/JobCancellationSuite.

(cherry picked from commit d81f29e)
Signed-off-by: gatorsmile <[email protected]>
@asfgit closed this in d81f29e Apr 9, 2018
@jiangxb1987 deleted the JobCancellationSuite branch April 10, 2018 01:30
peter-toth pushed a commit to peter-toth/spark that referenced this pull request Oct 6, 2018
[SPARK-23881][CORE][TEST] Fix flaky test JobCancellationSuite."interruptible iterator of shuffle reader"

The test case JobCancellationSuite."interruptible iterator of shuffle reader" has been flaky because the `KillTask` event is handled asynchronously, so it can happen that the semaphore is released while the task is still running.
Actually, we only have to check that the total number of processed elements is less than the number of input elements; then we know the task was cancelled.

The new test case still fails without the proposed patch, and succeeds on current master.

Author: Xingbo Jiang <[email protected]>

Closes apache#20993 from jiangxb1987/JobCancellationSuite.

(cherry picked from commit d81f29e)
Signed-off-by: gatorsmile <[email protected]>

Change-Id: I595412f8b6e93223ebf780e30f986c7d0fa99c6e