
[SPARK-19659][Core]Disable spark.reducer.maxReqSizeShuffleToMem #18467

Closed
wants to merge 4 commits

Conversation

zsxwing (Member) commented Jun 29, 2017

What changes were proposed in this pull request?

Disable spark.reducer.maxReqSizeShuffleToMem because it breaks the old shuffle service.

Credits to @wangyum

Closes #18466

How was this patch tested?

Jenkins

zsxwing (Member, Author) commented Jun 29, 2017

cc @JoshRosen As it's impossible to safely revert #16989, I just changed the default value to Long.MaxValue.
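To make the effect of that change concrete, here is a minimal, self-contained Scala sketch (not Spark's actual internals; `maxReqSizeShuffleToMem` and `fetchToDisk` are hypothetical stand-ins) showing why a `Long.MaxValue` default disables the feature: a shuffle request is spilled to disk only when its size exceeds the threshold, and no request size can exceed `Long.MaxValue`.

```scala
// Sketch only: models the threshold decision, not Spark's real fetch path.
object MaxReqSizeSketch {
  // Hypothetical stand-in for reading the config; the new default is
  // Long.MaxValue, so the feature is off unless a user sets a value.
  def maxReqSizeShuffleToMem(userValue: Option[Long]): Long =
    userValue.getOrElse(Long.MaxValue)

  // A request is fetched to disk only when it exceeds the threshold.
  def fetchToDisk(requestSizeBytes: Long, threshold: Long): Boolean =
    requestSizeBytes > threshold

  def main(args: Array[String]): Unit = {
    val tenGiB = 10L * 1024 * 1024 * 1024
    // Under the new default, even a very large request stays in memory.
    println(fetchToDisk(tenGiB, maxReqSizeShuffleToMem(None)))               // false
    // Opting in with the previous default of 200m restores the behavior.
    println(fetchToDisk(tenGiB, maxReqSizeShuffleToMem(Some(200L * 1024 * 1024)))) // true
  }
}
```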

@@ -529,14 +529,6 @@ Apart from these, the following properties are also available, and may be useful
</td>
</tr>
<tr>
<td><code>spark.reducer.maxReqSizeShuffleToMem</code></td>
zsxwing (Member, Author) commented on the diff:

Don't document it, as it's not a safe feature now.

@SparkQA

SparkQA commented Jun 29, 2017

Test build #78924 has finished for PR 18467 at commit d49e3f1.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@@ -326,7 +326,7 @@ package object config {
.doc("The blocks of a shuffle request will be fetched to disk when size of the request is " +
"above this threshold. This is to avoid a giant request takes too much memory.")
.bytesConf(ByteUnit.BYTE)
.createWithDefaultString("200m")
A contributor commented on the diff:
Might want to put a .internal() here.
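Combining the diff context above with the two changes discussed in this thread, the entry might end up looking roughly like the fragment below. This is a hedged sketch, not the merged code: the constant name `REDUCER_MAX_REQ_SIZE_SHUFFLE_TO_MEM` is assumed, and `.createWithDefault(Long.MaxValue)` is inferred from the author's comment about the new default.

```scala
// Sketch of the config entry after this PR (names partly assumed).
private[spark] val REDUCER_MAX_REQ_SIZE_SHUFFLE_TO_MEM =
  ConfigBuilder("spark.reducer.maxReqSizeShuffleToMem")
    .internal()  // suggested above: keep it out of the public docs
    .doc("The blocks of a shuffle request will be fetched to disk when size of the " +
      "request is above this threshold. This is to avoid a giant request takes too " +
      "much memory.")
    .bytesConf(ByteUnit.BYTE)
    // Long.MaxValue effectively disables fetch-to-disk unless a user opts in.
    .createWithDefault(Long.MaxValue)
```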

JoshRosen (Contributor) left a review:

LGTM overall. This seems like the cleanest fix for now to unblock the RC.

I do think this feature is valuable so we should see if there's a way to support it without requiring any backend changes to the shuffle service.

Let's make sure to update the "Fix Version" on the JIRA which added this feature so that it's clear that we've not shipped it in 2.2.x.

@zsxwing zsxwing changed the title [SPARK-21253][Core]Disable spark.reducer.maxReqSizeShuffleToMem [SPARK-19659][Core]Disable spark.reducer.maxReqSizeShuffleToMem Jun 30, 2017
@asfgit asfgit closed this in 80f7ac3 Jun 30, 2017
cloud-fan (Contributor) commented:

thanks, merging to master/2.2!

asfgit pushed a commit that referenced this pull request Jun 30, 2017
What changes were proposed in this pull request?

Disable spark.reducer.maxReqSizeShuffleToMem because it breaks the old shuffle service.

Credits to wangyum

Closes #18466

How was this patch tested?

Jenkins

Author: Shixiong Zhu <[email protected]>
Author: Yuming Wang <[email protected]>

Closes #18467 from zsxwing/SPARK-21253.

(cherry picked from commit 80f7ac3)
Signed-off-by: Wenchen Fan <[email protected]>
@zsxwing zsxwing deleted the SPARK-21253 branch June 30, 2017 03:05
cloud-fan (Contributor) commented:

@JoshRosen I'd argue that this feature is shipped in 2.2.x, as users can change the config value to enable it. Though this is kind of an experimental feature that breaks the old shuffle service.
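For a user who does accept the old-shuffle-service incompatibility, opting back in is a matter of setting the config explicitly. A hedged example (the threshold value 200m mirrors the pre-PR default shown in the diff above; the `spark-submit` line in the comment is an equivalent way to pass it):

```scala
// Opting back in to fetch-to-disk by setting an explicit threshold.
// Equivalent CLI form: spark-submit --conf spark.reducer.maxReqSizeShuffleToMem=200m ...
// Only safe when every external shuffle service in the cluster supports it.
val conf = new SparkConf()
  .set("spark.reducer.maxReqSizeShuffleToMem", "200m")
```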

cloud-fan (Contributor) commented:

Oops, @wangyum sorry I forgot to set the author name... will be more careful next time :)

@SparkQA

SparkQA commented Jun 30, 2017

Test build #78943 has finished for PR 18467 at commit b8f022e.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.
