
[SPARK-23408][SS] Synchronize successive AddData actions in Streaming*JoinSuite #20650

Closed · 3 commits

Conversation

@tdas (Contributor) commented Feb 21, 2018

The best way to review this PR is to ignore whitespace/indent changes. Use this link - https://github.com/apache/spark/pull/20650/files?w=1

What changes were proposed in this pull request?

The stream-stream join tests add data to multiple sources and expect it all to show up in the next batch. But there's a race condition; the new batch might trigger when only one of the AddData actions has been reached.

A prior attempt to solve this issue by @jose-torres in #20646 synchronized on all memory sources together whenever consecutive AddData actions were found. However, that approach carries the risk of deadlock as well as unintended modification of the stress tests (see that PR for a detailed explanation). Instead, this PR does the following.

  • A new action called StreamProgressLockedActions allows multiple actions to be executed while the streaming query is blocked from making progress. This lets data be added to multiple sources so that it all becomes visible in the next batch.
  • An alias of StreamProgressLockedActions called MultiAddData is used explicitly in the Streaming*JoinSuites to add data to two memory sources simultaneously.

This should avoid unintentional modification of the stress tests (or any other test for that matter) while making sure that the flaky tests are deterministic.
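
To make the intended usage concrete, here is a minimal sketch of how such a test could look. The stream, column, and value names below are illustrative placeholders, not code taken from this PR, and the snippet assumes the usual StreamTest / testImplicits setup of the streaming suites.

```scala
import org.apache.spark.sql.execution.streaming.MemoryStream
import testImplicits._

// Hypothetical test fragment; names are placeholders.
val leftInput = MemoryStream[Int]
val rightInput = MemoryStream[Int]
val left = leftInput.toDF().select($"value".as("leftKey"))
val right = rightInput.toDF().select($"value".as("rightKey"))
val joined = left.join(right, $"leftKey" === $"rightKey")

testStream(joined)(
  // Both sources receive their data while the query is blocked from making
  // progress, so the next batch is guaranteed to see data from both sides.
  MultiAddData(leftInput, 1, 2, 3)(rightInput, 3, 4, 5),
  CheckAnswer((3, 3))
)
```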

How was this patch tested?

Modified test cases in Streaming*JoinSuites where there are consecutive AddData actions.

@@ -543,6 +543,15 @@ abstract class StreamExecution(
Option(name).map(_ + "<br/>").getOrElse("") +
s"id = $id<br/>runId = $runId<br/>batch = $batchDescription"
}

private[sql] def withProgressLocked(f: => Unit): Unit = {
@tdas (Contributor, Author): TODO: Add docs.
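
Since the docs here are still marked TODO, a rough sketch of what this helper presumably does: run the given block while holding the query's progress lock so no new batch can start in between. The field name `awaitProgressLock` is an assumption about StreamExecution's internals, not something confirmed by this excerpt.

```scala
// Sketch only: `awaitProgressLock` is assumed to be StreamExecution's existing
// ReentrantLock that gates progress; the real field name may differ.
private[sql] def withProgressLocked(f: => Unit): Unit = {
  awaitProgressLock.lock()
  try {
    f  // e.g. several AddData actions executed as one atomic step
  } finally {
    awaitProgressLock.unlock()
  }
}
```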

@@ -102,6 +102,14 @@ trait StreamTest extends QueryTest with SharedSQLContext with TimeLimits with Be
AddDataMemory(source, data)
}

object MultiAddData {
@tdas (Contributor, Author): TODO: add docs.
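
As a rough idea of what this object likely looks like (a sketch; the exact signature may differ from the merged code), MultiAddData can simply bundle two AddDataMemory actions into one StreamProgressLockedActions:

```scala
// Sketch: add data to two memory sources as a single atomic StreamAction so
// that both become visible in the same batch.
object MultiAddData {
  def apply[A]
      (source1: MemoryStream[A], data1: A*)(source2: MemoryStream[A], data2: A*): StreamAction = {
    val actions = Seq(AddDataMemory(source1, data1), AddDataMemory(source2, data2))
    StreamProgressLockedActions(actions, desc = actions.mkString("[ ", " | ", " ]"))
  }
}
```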

@@ -217,6 +225,14 @@ trait StreamTest extends QueryTest with SharedSQLContext with TimeLimits with Be
s"ExpectFailure[${causeClass.getName}, isFatalError: $isFatalError]"
}

case class StreamProgressLockedActions(actions: Seq[StreamAction], desc: String = null)
@tdas (Contributor, Author): TODO: add docs.
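
On the consuming side, the test harness would roughly handle this action by wrapping the nested actions in withProgressLocked. This is a fragment of the action-dispatch match, not standalone code, and `executeAction` stands in for StreamTest's internal per-action handler, whose real name may differ.

```scala
// Sketch of the dispatch case in StreamTest's action-execution loop.
case StreamProgressLockedActions(actions, _) =>
  // Hold the query's progress lock so that no batch is planned until every
  // nested action (e.g. multiple AddData) has been executed.
  currentStream.withProgressLocked {
    actions.foreach(executeAction)
  }
```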

@SparkQA commented Feb 21, 2018

Test build #87585 has finished for PR 20650 at commit b4c3c55.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 21, 2018

Test build #4109 has finished for PR 20650 at commit b4c3c55.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 21, 2018

Test build #4111 has finished for PR 20650 at commit b4c3c55.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 21, 2018

Test build #4105 has finished for PR 20650 at commit b4c3c55.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 21, 2018

Test build #4108 has finished for PR 20650 at commit b4c3c55.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 21, 2018

Test build #4106 has finished for PR 20650 at commit b4c3c55.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 21, 2018

Test build #4110 has finished for PR 20650 at commit b4c3c55.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 21, 2018

Test build #4113 has finished for PR 20650 at commit b4c3c55.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 21, 2018

Test build #4107 has finished for PR 20650 at commit b4c3c55.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 21, 2018

Test build #4112 has finished for PR 20650 at commit b4c3c55.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 21, 2018

Test build #4104 has finished for PR 20650 at commit b4c3c55.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 21, 2018

Test build #4114 has finished for PR 20650 at commit b4c3c55.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@jose-torres (Contributor):

I'm not sure I agree with all the comments on the previous PR, but I agree that this also works.

As discussed, the downside to this approach is that people in the future can continue to write the same kind of flaky tests this PR fixes. Ideally I'd like to see some kind of story for how people will know they must use MultiAddData.

@tdas (Contributor, Author) commented Feb 21, 2018:

Yes, this is indeed a slight downside. The only time people should choose to use it is when they want to add data to multiple sources that must all be visible in the same batch. In our case, we need to add data to multiple sources in the same batch because we want to verify how the number of state rows changed.

@tdas (Contributor, Author) commented Feb 21, 2018:

@zsxwing can you also take a look?

@SparkQA commented Feb 22, 2018

Test build #87596 has finished for PR 20650 at commit 7b78fa1.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 22, 2018

Test build #87597 has finished for PR 20650 at commit fdcf716.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 22, 2018

Test build #4119 has finished for PR 20650 at commit fdcf716.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 22, 2018

Test build #4116 has finished for PR 20650 at commit fdcf716.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 22, 2018

Test build #4125 has finished for PR 20650 at commit fdcf716.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 22, 2018

Test build #4121 has finished for PR 20650 at commit fdcf716.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 22, 2018

Test build #4120 has finished for PR 20650 at commit fdcf716.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 22, 2018

Test build #4117 has finished for PR 20650 at commit fdcf716.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 22, 2018

Test build #4124 has finished for PR 20650 at commit fdcf716.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 22, 2018

Test build #4118 has finished for PR 20650 at commit fdcf716.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 22, 2018

Test build #4123 has finished for PR 20650 at commit fdcf716.

  • This patch fails PySpark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Feb 22, 2018

Test build #4122 has finished for PR 20650 at commit fdcf716.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@tdas (Contributor, Author) commented Feb 22, 2018:

All the failures above can be attributed to other flakiness unrelated to the flakiness this PR is trying to address.

@jose-torres (Contributor):

LGTM

@zsxwing (Member) commented Feb 23, 2018 via email

@asfgit closed this in 855ce13 on Feb 23, 2018
@gatorsmile (Member):

The StreamingJoinSuites in Spark 2.3 are pretty flaky. Do you think we can backport this to 2.3?

@dongjoon-hyun (Member):

+1 for @gatorsmile's opinion. It would be very helpful if we can.

@mgaido91 (Contributor):

+1 too; the tests are too flaky.

@mgaido91 (Contributor) commented Jan 4, 2019:

Any luck with this backport? Branch 2.3 is still very flaky (see #23450).

HeartSaVioR pushed a commit to HeartSaVioR/spark that referenced this pull request Feb 11, 2019
…*JoinSuite

**The best way to review this PR is to ignore whitespace/indent changes. Use this link - https://github.com/apache/spark/pull/20650/files?w=1**

The stream-stream join tests add data to multiple sources and expect it all to show up in the next batch. But there's a race condition; the new batch might trigger when only one of the AddData actions has been reached.

Prior attempt to solve this issue by jose-torres in apache#20646 attempted to simultaneously synchronize on all memory sources together when consecutive AddData was found in the actions. However, this carries the risk of deadlock as well as unintended modification of stress tests (see the above PR for a detailed explanation). Instead, this PR attempts the following.

- A new action called `StreamProgressBlockedActions` that allows multiple actions to be executed while the streaming query is blocked from making progress. This allows data to be added to multiple sources that are made visible simultaneously in the next batch.
- An alias of `StreamProgressBlockedActions` called `MultiAddData` is explicitly used in the `Streaming*JoinSuites` to add data to two memory sources simultaneously.

This should avoid unintentional modification of the stress tests (or any other test for that matter) while making sure that the flaky tests are deterministic.

Modified test cases in `Streaming*JoinSuites` where there are consecutive `AddData` actions.

Author: Tathagata Das <[email protected]>

Closes apache#20650 from tdas/SPARK-23408.

NOTE: Modified a bit to cover DSv2 incompatibility between Spark 2.3 and 2.4 by Jungtaek Lim <[email protected]>
 * StreamingDataSourceV2Relation is a class for 2.3, whereas it is a case class for 2.4
srowen pushed a commit that referenced this pull request Feb 12, 2019
…in Streaming*JoinSuite

## What changes were proposed in this pull request?

**The best way to review this PR is to ignore whitespace/indent changes. Use this link - https://github.com/apache/spark/pull/20650/files?w=1**

The stream-stream join tests add data to multiple sources and expect it all to show up in the next batch. But there's a race condition; the new batch might trigger when only one of the AddData actions has been reached.

Prior attempt to solve this issue by jose-torres in #20646 attempted to simultaneously synchronize on all memory sources together when consecutive AddData was found in the actions. However, this carries the risk of deadlock as well as unintended modification of stress tests (see the above PR for a detailed explanation). Instead, this PR attempts the following.

- A new action called `StreamProgressBlockedActions` that allows multiple actions to be executed while the streaming query is blocked from making progress. This allows data to be added to multiple sources that are made visible simultaneously in the next batch.
- An alias of `StreamProgressBlockedActions` called `MultiAddData` is explicitly used in the `Streaming*JoinSuites` to add data to two memory sources simultaneously.

This should avoid unintentional modification of the stress tests (or any other test for that matter) while making sure that the flaky tests are deterministic.

NOTE: This patch is modified a bit from origin PR (#20650) to cover DSv2 incompatibility between Spark 2.3 and 2.4: StreamingDataSourceV2Relation is a class for 2.3, whereas it is a case class for 2.4

## How was this patch tested?

Modified test cases in `Streaming*JoinSuites` where there are consecutive `AddData` actions.

Closes #23757 from HeartSaVioR/fix-streaming-join-test-flakiness-branch-2.3.

Lead-authored-by: Jungtaek Lim (HeartSaVioR) <[email protected]>
Co-authored-by: Tathagata Das <[email protected]>
Signed-off-by: Sean Owen <[email protected]>