
[MINOR][BUILD] Fix javadoc8 break #17389

Closed · HyukjinKwon wants to merge 1 commit

Conversation

HyukjinKwon (Member) commented Mar 22, 2017

What changes were proposed in this pull request?

Several javadoc8 breaks have been introduced. This PR proposes to fix those instances so that we can build the Scala/Java API docs.

[error] .../spark/sql/core/target/java/org/apache/spark/sql/streaming/GroupState.java:6: error: reference not found
[error]  * <code>flatMapGroupsWithState</code> operations on {@link KeyValueGroupedDataset}.
[error]                                                             ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/streaming/GroupState.java:10: error: reference not found
[error]  * Both, <code>mapGroupsWithState</code> and <code>flatMapGroupsWithState</code> in {@link KeyValueGroupedDataset}
[error]                                                                                            ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/streaming/GroupState.java:51: error: reference not found
[error]  *    {@link GroupStateTimeout.ProcessingTimeTimeout}) or event time (i.e.
[error]              ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/streaming/GroupState.java:52: error: reference not found
[error]  *    {@link GroupStateTimeout.EventTimeTimeout}).
[error]              ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/streaming/GroupState.java:158: error: reference not found
[error]  *           Spark SQL types (see {@link Encoder} for more details).
[error]                                          ^
[error] .../spark/mllib/target/java/org/apache/spark/ml/fpm/FPGrowthParams.java:26: error: bad use of '>'
[error]    * Number of partitions (>=1) used by parallel FP-growth. By default the param is not set, and
[error]                            ^
[error] .../spark/sql/core/src/main/java/org/apache/spark/api/java/function/FlatMapGroupsWithStateFunction.java:30: error: reference not found
[error]  * {@link org.apache.spark.sql.KeyValueGroupedDataset#flatMapGroupsWithState(
[error]           ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyValueGroupedDataset.java:211: error: reference not found
[error]    * See {@link GroupState} for more details.
[error]                 ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyValueGroupedDataset.java:232: error: reference not found
[error]    * See {@link GroupState} for more details.
[error]                 ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyValueGroupedDataset.java:254: error: reference not found
[error]    * See {@link GroupState} for more details.
[error]                 ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyValueGroupedDataset.java:277: error: reference not found
[error]    * See {@link GroupState} for more details.
[error]                 ^
[error] .../spark/core/target/java/org/apache/spark/TaskContextImpl.java:10: error: reference not found
[error]  * {@link TaskMetrics} &amp; {@link MetricsSystem} objects are not thread safe.
[error]           ^
[error] .../spark/core/target/java/org/apache/spark/TaskContextImpl.java:10: error: reference not found
[error]  * {@link TaskMetrics} &amp; {@link MetricsSystem} objects are not thread safe.
[error]                                     ^
[info] 13 errors
jekyll 3.3.1 | Error:  Unidoc generation failed
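
Most of the breaks above fall into two patterns: a `{@link ...}` reference that javadoc 8 cannot resolve from the generated Java sources, and a raw `>` character in a doc comment. A minimal sketch of the fix pattern (the classes below are illustrative, not the actual Spark diff):

```java
// Illustrative sketch only, not from the Spark sources.

// Before: javadoc 8 reports "error: reference not found" for the
// unresolvable {@link ...}, and "error: bad use of '>'" for the raw '>'.
/**
 * See {@link GroupState} for more details.
 * Number of partitions (>=1) used by parallel FP-growth.
 */
class Before {}

// After: {@code ...} renders its contents in code font without trying to
// resolve a cross-reference, and '>' is escaped as the HTML entity &gt;.
/**
 * See {@code GroupState} for more details.
 * Number of partitions (&gt;=1) used by parallel FP-growth.
 */
class After {}
```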

How was this patch tested?

Manually, via `jekyll build`.

@@ -224,7 +225,7 @@ trait KeyedState[S] extends LogicalKeyedState[S] {
/**
* Set the timeout duration for this key as a string. For example, "1 hour", "2 days", etc.
*
* @note, ProcessingTimeTimeout must be enabled in `[map/flatmap]GroupsWithStates`.
HyukjinKwon (Member, Author):

It seems `@note,` produces a warning, as below:

[warn] .../spark/sql/core/src/main/scala/org/apache/spark/sql/streaming/KeyedState.scala:239: Tag '@note,' is not recognised
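
Presumably just dropping the trailing comma, i.e. `@note ProcessingTimeTimeout must be enabled ...`, is enough for Scaladoc to recognise the tag again; that is my reading of the warning, not a line quoted from the diff.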

@@ -27,7 +27,7 @@
/**
* ::Experimental::
* Base interface for a map function used in
- * {@link org.apache.spark.sql.KeyValueGroupedDataset#flatMapGroupsWithState(
+ * {@code org.apache.spark.sql.KeyValueGroupedDataset.flatMapGroupsWithState(
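
A note on why this fix works: `{@code ...}` only renders its contents in code font and never tries to resolve them as a cross-reference, so it cannot trigger "reference not found"; the trade-off is that the generated docs lose the hyperlink.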
HyukjinKwon (Member, Author):

(screenshot attached: 2017-03-23 12:32:11)

HyukjinKwon (Member, Author) commented Mar 22, 2017

I would like to note my observations.

When we introduce a javadoc break, it seems sbt starts printing other warnings as errors while generating javadoc.
There are currently many warnings in the generated Java sources that are apparently printed as errors whenever a javadoc break exists.

For example, with the Java code A.java below:

/**
* Hi
*/
public class A extends B {
}

If we run javadoc:

javadoc A.java

it produces a warning because it cannot find the symbol B, but it still seems to generate the documentation fine:

Loading source file A.java...
Constructing Javadoc information...
A.java:4: error: cannot find symbol
public class A extends B {
                       ^
  symbol: class B
Standard Doclet version 1.8.0_45
Building tree for all the packages and classes...
Generating ./A.html...
Generating ./package-frame.html...
Generating ./package-summary.html...
Generating ./package-tree.html...
Generating ./constant-values.html...
Building index for all the packages and classes...
Generating ./overview-tree.html...
Generating ./index-all.html...
Generating ./deprecated-list.html...
Building index for all classes...
Generating ./allclasses-frame.html...
Generating ./allclasses-noframe.html...
Generating ./index.html...
Generating ./help-doc.html...
1 warning

However, if we have a javadoc break in the comments, as below:

/**
* Hi
* @see B
*/
public class A extends B {
}

this produces both an error and a warning:

Loading source file A.java...
Constructing Javadoc information...
A.java:5: error: cannot find symbol
public class A extends B {
                       ^
  symbol: class B
Standard Doclet version 1.8.0_45
Building tree for all the packages and classes...
Generating ./A.html...
A.java:3: error: reference not found
* @see B
       ^
Generating ./package-frame.html...
Generating ./package-summary.html...
Generating ./package-tree.html...
Generating ./constant-values.html...
Building index for all the packages and classes...
Generating ./overview-tree.html...
Generating ./index-all.html...
Generating ./deprecated-list.html...
Building index for all classes...
Generating ./allclasses-frame.html...
Generating ./allclasses-noframe.html...
Generating ./index.html...
Generating ./help-doc.html...
1 error
1 warning

It seems sbt unidoc somehow reports both the errors and the warnings as [error] when there are breaks.

It seems okay to just fix the 13 errors ([info] 13 errors), which are usually generated in the "Building tree for all the packages and classes..." phase.

HyukjinKwon (Member, Author):

cc @srowen, could you take a look and see if it makes sense?

HyukjinKwon (Member, Author):

To extend this a bit more with an actual example:

Before this PR:

[error] .../spark/core/target/java/org/apache/spark/Accumulator.java:47: error: cannot find symbol

After this PR:

[warn] .../spark/core/target/java/org/apache/spark/Accumulator.java:47: error: cannot find symbol

HyukjinKwon (Member, Author):

Thank you for your approval @srowen.

SparkQA commented Mar 22, 2017

Test build #75053 has finished for PR 17389 at commit a00dd50.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

SparkQA commented Mar 22, 2017

Test build #3606 has finished for PR 17389 at commit a00dd50.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

SparkQA commented Mar 23, 2017

Test build #75067 has finished for PR 17389 at commit 88ee198.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

HyukjinKwon (Member, Author):

retest this please

SparkQA commented Mar 23, 2017

Test build #75076 has finished for PR 17389 at commit 88ee198.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

srowen (Member) commented Mar 23, 2017

Merged to master

asfgit closed this in aefe798 on Mar 23, 2017.
asfgit pushed a commit that referenced this pull request on Apr 12, 2017:
## What changes were proposed in this pull request?

This PR proposes running Spark unidoc in the tests to check the Javadoc 8 build, since Javadoc 8 is easily re-broken.

There are several problems with it:

- It adds a little extra time to the test run. In my case, it took about 1.5 minutes more (`Elapsed :[94.8746569157]`). How this was measured is described in "How was this patch tested?".

- > One problem that I noticed was that Unidoc appeared to be processing test sources: if we can find a way to exclude those from being processed in the first place then that might significantly speed things up.

  (see  joshrosen's [comment](https://issues.apache.org/jira/browse/SPARK-18692?focusedCommentId=15947627&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15947627))

To make this automated build pass, this PR also fixes the existing Javadoc breaks, including ones introduced by test code, as described above.

These fixes are similar to instances fixed previously. Please refer to #15999 and #16013.

Note that this only fixes **errors**, not **warnings**. Please see my observation in #17389 (comment) on warnings being printed as spurious errors.

## How was this patch tested?

Manually via `jekyll build` for building tests. Also, tested via running `./dev/run-tests`.

This was tested via manually adding `time.time()` as below:

```diff
     profiles_and_goals = build_profiles + sbt_goals

     print("[info] Building Spark unidoc (w/Hive 1.2.1) using SBT with these arguments: ",
           " ".join(profiles_and_goals))

+    import time
+    st = time.time()
     exec_sbt(profiles_and_goals)
+    print("Elapsed :[%s]" % str(time.time() - st))
```

produces

```
...
========================================================================
Building Unidoc API Documentation
========================================================================
...
[info] Main Java API documentation successful.
...
Elapsed :[94.8746569157]
...
```

Author: hyukjinkwon <[email protected]>

Closes #17477 from HyukjinKwon/SPARK-18692.
HyukjinKwon deleted the minor-javadoc8-fix branch on January 2, 2018.