[SPARK-3359][BUILD][DOCS] More changes to resolve javadoc 8 errors that will help unidoc/genjavadoc compatibility #15999
Changes from 10 commits
@@ -238,7 +238,9 @@ class JavaSparkContext(val sc: SparkContext)
    * }}}
    *
    * Do
-   * `JavaPairRDD<String, byte[]> rdd = sparkContext.dataStreamFiles("hdfs://a-hdfs-path")`,
+   * <code>
+   * JavaPairRDD<String, byte[]> rdd = sparkContext.dataStreamFiles("hdfs://a-hdfs-path")
+   * </code>,
Review comment: So, I had to use `<code>` rather than backticks or …

Review comment (note to myself): It still prints the code as above. If we want to use …
    *
    * then `rdd` contains
    * {{{
@@ -270,7 +272,9 @@ class JavaSparkContext(val sc: SparkContext)
    * }}}
    *
    * Do
-   * `JavaPairRDD<String, byte[]> rdd = sparkContext.dataStreamFiles("hdfs://a-hdfs-path")`,
+   * <code>
+   * JavaPairRDD<String, byte[]> rdd = sparkContext.dataStreamFiles("hdfs://a-hdfs-path")
+   * </code>,
    *
    * then `rdd` contains
    * {{{
@@ -749,7 +753,7 @@ class JavaSparkContext(val sc: SparkContext)

   /**
    * Get a local property set in this thread, or null if it is missing. See
-   * [[org.apache.spark.api.java.JavaSparkContext.setLocalProperty]].
+   * `org.apache.spark.api.java.JavaSparkContext.setLocalProperty`.
    */
   def getLocalProperty(key: String): String = sc.getLocalProperty(key)
@@ -769,7 +773,7 @@ class JavaSparkContext(val sc: SparkContext)
    * Application programmers can use this method to group all those jobs together and give a
    * group description. Once set, the Spark web UI will associate such jobs with this group.
    *
-   * The application can also use [[org.apache.spark.api.java.JavaSparkContext.cancelJobGroup]]
+   * The application can also use `org.apache.spark.api.java.JavaSparkContext.cancelJobGroup`
    * to cancel all running jobs in this group. For example,
    * {{{
    * // In the main thread:
@@ -802,7 +806,7 @@ class JavaSparkContext(val sc: SparkContext)

   /**
    * Cancel active jobs for the specified group. See
-   * [[org.apache.spark.api.java.JavaSparkContext.setJobGroup]] for more information.
+   * `org.apache.spark.api.java.JavaSparkContext.setJobGroup` for more information.
    */
   def cancelJobGroup(groupId: String): Unit = sc.cancelJobGroup(groupId)
@@ -1673,8 +1673,8 @@ private[spark] object Utils extends Logging {
   }

   /**
-   * NaN-safe version of [[java.lang.Double.compare()]] which allows NaN values to be compared
-   * according to semantics where NaN == NaN and NaN > any non-NaN double.
+   * NaN-safe version of `java.lang.Double.compare()` which allows NaN values to be compared
+   * according to semantics where NaN == NaN and NaN > any non-NaN double.
    */
   def nanSafeCompareDoubles(x: Double, y: Double): Int = {
     val xIsNan: Boolean = java.lang.Double.isNaN(x)
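The doc comment above states the comparison semantics only in prose (NaN == NaN, and NaN greater than any non-NaN double). A minimal standalone Java sketch of those semantics, written here purely for illustration and not copied from Spark's source:

```java
public class NanSafeCompare {
    // NaN-safe comparison: NaN == NaN, and NaN > any non-NaN double.
    static int nanSafeCompareDoubles(double x, double y) {
        boolean xIsNan = Double.isNaN(x);
        boolean yIsNan = Double.isNaN(y);
        if ((xIsNan && yIsNan) || (x == y)) return 0;
        if (xIsNan) return 1;   // NaN sorts above every non-NaN value
        if (yIsNan) return -1;
        return x > y ? 1 : -1;
    }

    public static void main(String[] args) {
        System.out.println(nanSafeCompareDoubles(Double.NaN, Double.NaN)); // 0
        System.out.println(nanSafeCompareDoubles(Double.NaN, 1.0));        // 1
        System.out.println(nanSafeCompareDoubles(1.0, Double.NaN));        // -1
        System.out.println(nanSafeCompareDoubles(1.0, 2.0));               // -1
    }
}
```

The sibling `nanSafeCompareFloats` in the next hunk follows the same logic with `Float.isNaN`.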
@@ -1687,8 +1687,8 @@ private[spark] object Utils extends Logging {
   }

   /**
-   * NaN-safe version of [[java.lang.Float.compare()]] which allows NaN values to be compared
-   * according to semantics where NaN == NaN and NaN > any non-NaN float.
+   * NaN-safe version of `java.lang.Float.compare()` which allows NaN values to be compared
+   * according to semantics where NaN == NaN and NaN > any non-NaN float.
    */
   def nanSafeCompareFloats(x: Float, y: Float): Int = {
     val xIsNan: Boolean = java.lang.Float.isNaN(x)
@@ -2354,7 +2354,7 @@ private[spark] object Utils extends Logging {
   * A spark url (`spark://host:port`) is a special URI that its scheme is `spark` and only contains
   * host and port.
   *
-  * @throws SparkException if `sparkUrl` is invalid.
+  * @note Throws `SparkException` if sparkUrl is invalid.
Review comment: I am not too sure about using `@note` …
   */
  def extractHostPortFromSparkUrl(sparkUrl: String): (String, Int) = {
    try {
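The `@note` above documents the contract: a spark URL must have scheme `spark` and contain only a host and a port, and anything else is rejected with an exception. A hypothetical standalone Java sketch of that contract, using `java.net.URI` and `IllegalArgumentException` in place of Spark's `SparkException` (names and validation details are this sketch's own, not Spark's):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class SparkUrlParser {
    // Accept spark://host:port and reject anything else.
    static String[] extractHostPort(String sparkUrl) {
        try {
            URI uri = new URI(sparkUrl);
            String host = uri.getHost();
            int port = uri.getPort();
            boolean hasExtras = (uri.getPath() != null && !uri.getPath().isEmpty())
                    || uri.getQuery() != null
                    || uri.getFragment() != null
                    || uri.getUserInfo() != null;
            if (!"spark".equals(uri.getScheme()) || host == null || port < 0 || hasExtras) {
                throw new IllegalArgumentException("Invalid master URL: " + sparkUrl);
            }
            return new String[] { host, String.valueOf(port) };
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException("Invalid master URL: " + sparkUrl, e);
        }
    }
}
```

For example, `extractHostPort("spark://localhost:7077")` yields the host/port pair, while a URL with no port, a path, or a different scheme triggers the exception.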
Review comment: For hyperlinks, it seems … these are fine (try it): …