[SPARK-15322][SQL][FOLLOWUP] Use the new long accumulator for old int accumulators. #13434
Conversation
@srowen @andrewor14 I am sorry that I happened to open two PRs on the same subject. Could you please take a look?
Test build #59722 has finished for PR 13434 at commit
@@ -289,8 +288,8 @@ private[sql] case class InMemoryTableScanExec(
     sqlContext.getConf("spark.sql.inMemoryTableScanStatistics.enable", "false").toBoolean

     // Accumulators used for testing purposes
-    lazy val readPartitions: Accumulator[Int] = sparkContext.accumulator(0)
-    lazy val readBatches: Accumulator[Int] = sparkContext.accumulator(0)
+    lazy val readPartitions: LongAccumulator = sparkContext.longAccumulator
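For context, here is a minimal usage sketch (not part of the PR's diff; the object name and job are illustrative) contrasting the deprecated `Accumulator[Int]` API with the `LongAccumulator` that this PR migrates to. It assumes a local Spark 2.x session.

```scala
// Sketch contrasting the old and new accumulator APIs in Spark 2.0.
// Assumes a Spark 2.x dependency on the classpath; names are illustrative.
import org.apache.spark.sql.SparkSession

object AccumulatorSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("accumulator-sketch")
      .getOrCreate()
    val sc = spark.sparkContext

    // Old API (deprecated in 2.0): requires an implicit AccumulatorParam[Int]
    //   lazy val readPartitions: Accumulator[Int] = sc.accumulator(0)
    //   readPartitions += 1

    // New API: a built-in typed accumulator, no AccumulatorParam needed
    val readPartitions = sc.longAccumulator("readPartitions")
    sc.parallelize(1 to 100, 4).foreachPartition(_ => readPartitions.add(1))
    println(readPartitions.value) // one increment per partition

    spark.stop()
  }
}
```

Named accumulators like this one also show up in the Spark UI, which is one reason the new API passes a name string.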
I'm not even sure we need to declare the type of these, unless I'm really missing a reason for it
Are these all the other instances of making an
But LGTM, I don't mind even merging as-is.
LGTM2
@srowen @andrewor14 Please let me double-check whether there are more. Could you wait a bit, please?
There were some tests in
Test build #59799 has finished for PR 13434 at commit
Merged to master/2.0 |
… accumulators.
Author: hyukjinkwon <[email protected]>
Closes #13434 from HyukjinKwon/accum.
(cherry picked from commit 252417f)
Signed-off-by: Sean Owen <[email protected]>
What changes were proposed in this pull request?
This PR corrects the remaining cases for using old accumulators.
This does not change some old accumulator usages below:
- `ImplicitSuite.scala` - Tests dedicated to the old accumulator, for implicits with `AccumulatorParam`
- `AccumulatorSuite.scala` - Tests dedicated to the old accumulator
- `JavaSparkContext.scala` - For supporting old accumulators in the Java API
- `debug.package.scala` - Usage with `HashSet[String]`. Currently, there seems to be no implementation for this. An anonymous class could be written for it, but it does not seem worth the extra code just for this.
- `SQLMetricsSuite.scala` - This uses the old accumulator for checking type boxing. The new accumulator does not require type boxing in this case, whereas the old one does (due to its use of generics).
How was this patch tested?
Existing tests cover this.
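The boxing point in the `SQLMetricsSuite.scala` bullet can be illustrated with a small self-contained sketch. The class names below are hypothetical stand-ins, not Spark's real implementations: the old API funnels every update through a generic merge function, so `Int` values are boxed when called through the generic type (its type parameter erases to `Object` on the JVM), while a dedicated long accumulator keeps a primitive field and never boxes.

```scala
// Hypothetical stand-ins for the two accumulator styles; simplified
// sketches, not Spark's actual classes.

// Old style: a generic merge function. When invoked through the generic
// trait type, Int arguments are boxed to java.lang.Integer because the
// type parameter erases to Object.
trait AccumulatorParamLike[T] {
  def addInPlace(t1: T, t2: T): T
}

object IntAccumulatorParam extends AccumulatorParamLike[Int] {
  def addInPlace(t1: Int, t2: Int): Int = t1 + t2
}

// New style: a dedicated long accumulator backed by a primitive field,
// so updates never box.
class LongAccumulatorLike {
  private var _sum: Long = 0L
  def add(v: Long): Unit = { _sum += v }
  def value: Long = _sum
}
```

In Spark itself, `LongAccumulator` (an `AccumulatorV2[java.lang.Long, java.lang.Long]`) similarly accumulates into primitive internal state, which is why the new API avoids the per-update boxing the old generic `Accumulator[Int]` incurred.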