[SPARK-18692][BUILD][DOCS] Test Java 8 unidoc build on Jenkins #17477
TaskSchedulerImpl.scala
```diff
@@ -38,7 +38,7 @@ import org.apache.spark.util.{AccumulatorV2, ThreadUtils, Utils}
 /**
  * Schedules tasks for multiple types of clusters by acting through a SchedulerBackend.
- * It can also work with a local setup by using a [[LocalSchedulerBackend]] and setting
+ * It can also work with a local setup by using a `LocalSchedulerBackend` and setting
  * isLocal to true. It handles common logic, like determining a scheduling order across jobs, waking
  * up to launch speculative tasks, etc.
  *
```
```diff
@@ -704,12 +704,12 @@ private[spark] object TaskSchedulerImpl {
  * Used to balance containers across hosts.
  *
  * Accepts a map of hosts to resource offers for that host, and returns a prioritized list of
- * resource offers representing the order in which the offers should be used.  The resource
+ * resource offers representing the order in which the offers should be used. The resource
  * offers are ordered such that we'll allocate one container on each host before allocating a
  * second container on any host, and so on, in order to reduce the damage if a host fails.
  *
- * For example, given <h1, [o1, o2, o3]>, <h2, [o4]>, <h1, [o5, o6]>, returns
- * [o1, o5, o4, 02, o6, o3]
+ * For example, given {@literal <h1, [o1, o2, o3]>}, {@literal <h2, [o4]>} and
+ * {@literal <h3, [o5, o6]>}, returns {@literal [o1, o5, o4, o2, o6, o3]}.
  */
 def prioritizeContainers[K, T] (map: HashMap[K, ArrayBuffer[T]]): List[T] = {
   val _keyList = new ArrayBuffer[K](map.size)
```

Review comment: It seems we can't use raw angle brackets such as `<h1, [o1, o2, o3]>` here.

Scaladoc: [screenshot]
Javadoc: [screenshot]

If we use `{@literal ...}`:

Scaladoc: [screenshot]
Javadoc: [screenshot]

This seems not exposed in the API documentation anyway.
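As an aside, the round-robin ordering this doc comment describes can be sketched as a standalone Scala snippet. This is an illustrative simplification, not the actual Spark implementation; `prioritizeContainersSketch` is a made-up name.

```scala
import scala.collection.mutable.{ArrayBuffer, HashMap}

// Sketch of the documented behavior: take the first offer from every host,
// then the second offer from every host that still has one, and so on, so
// that no host gets a second container before each host has its first.
def prioritizeContainersSketch[K, T](map: HashMap[K, ArrayBuffer[T]]): List[T] = {
  val rounds = if (map.isEmpty) 0 else map.values.map(_.size).max
  (0 until rounds).flatMap { i =>
    map.values.collect { case offers if i < offers.size => offers(i) }
  }.toList
}

// For h1 -> [o1, o2, o3], h2 -> [o4], h3 -> [o5, o6] this yields an
// interleaving like [o1, o4, o5, o2, o6, o3]; the per-round host order
// depends on HashMap iteration order.
```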
PipelineSuite.scala
```diff
@@ -230,7 +230,9 @@ class PipelineSuite extends SparkFunSuite with MLlibTestSparkContext with Defaul
 }
 
 
-/** Used to test [[Pipeline]] with [[MLWritable]] stages */
+/**
+ * Used to test [[Pipeline]] with `MLWritable` stages
+ */
 class WritableStage(override val uid: String) extends Transformer with MLWritable {
 
   final val intParam: IntParam = new IntParam(this, "intParam", "doc")
```

Review comment: We should avoid an inlined comment when there are code blocks (`` `...` ``) in it.
```diff
@@ -257,7 +259,9 @@ object WritableStage extends MLReadable[WritableStage] {
   override def load(path: String): WritableStage = super.load(path)
 }
 
-/** Used to test [[Pipeline]] with non-[[MLWritable]] stages */
+/**
+ * Used to test [[Pipeline]] with non-`MLWritable` stages
+ */
 class UnWritableStage(override val uid: String) extends Transformer {
 
   final val intParam: IntParam = new IntParam(this, "intParam", "doc")
```
MesosSchedulerUtils.scala
```diff
@@ -239,7 +239,7 @@ trait MesosSchedulerUtils extends Logging {
 }
 
 /**
- * Converts the attributes from the resource offer into a Map of name -> Attribute Value
+ * Converts the attributes from the resource offer into a Map of name to Attribute Value
  * The attribute values are the mesos attribute types and they are
  *
  * @param offerAttributes the attributes offered
```
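For readers unfamiliar with what this conversion looks like, here is a rough sketch against the Mesos protobuf API. It is illustrative only and may differ from the real method (including its result type); `toAttributeMapSketch` is a made-up name.

```scala
import scala.collection.JavaConverters._
import org.apache.mesos.Protos.{Attribute, Value}

// Sketch: key each offered attribute by its name and keep the matching
// protobuf value object (scalar, ranges, set, or text) as the map value.
def toAttributeMapSketch(offerAttributes: java.util.List[Attribute]): Map[String, Any] =
  offerAttributes.asScala.map { attr =>
    val value = attr.getType match {
      case Value.Type.SCALAR => attr.getScalar
      case Value.Type.RANGES => attr.getRanges
      case Value.Type.SET    => attr.getSet
      case _                 => attr.getText
    }
    attr.getName -> value
  }.toMap
```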
```diff
@@ -296,7 +296,7 @@ trait MesosSchedulerUtils extends Logging {
 
 /**
  * Parses the attributes constraints provided to spark and build a matching data struct:
- *  Map[<attribute-name>, Set[values-to-match]]
+ *  {@literal Map[<attribute-name>, Set[values-to-match]}
  * The constraints are specified as ';' separated key-value pairs where keys and values
  * are separated by ':'. The ':' implies equality (for singular values) and "is one of" for
  * multiple values (comma separated). For example:
```

Review comment: Same instance with https://github.com/apache/spark/pull/17477/files#r111086455.
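The format described above (';' separated pairs, ':' between key and values, ',' between multiple values) can be parsed in a few lines of Scala. A minimal sketch, ignoring the validation a real implementation would need; `parseConstraintStringSketch` is a made-up name.

```scala
// Sketch: "os:centos7;zone:us-east-1a,us-east-1b" becomes
// Map("os" -> Set("centos7"), "zone" -> Set("us-east-1a", "us-east-1b")).
def parseConstraintStringSketch(constraints: String): Map[String, Set[String]] =
  constraints.split(";").filter(_.nonEmpty).map { pair =>
    pair.split(":", 2) match {
      case Array(key, values) => key -> values.split(",").toSet
      case Array(key)         => key -> Set.empty[String]  // key with no value
    }
  }.toMap
```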
```diff
@@ -354,7 +354,7 @@ trait MesosSchedulerUtils extends Logging {
  * container overheads.
  *
  * @param sc SparkContext to use to get `spark.mesos.executor.memoryOverhead` value
- * @return memory requirement as (0.1 * <memoryOverhead>) or MEMORY_OVERHEAD_MINIMUM
+ * @return memory requirement as (0.1 * memoryOverhead) or MEMORY_OVERHEAD_MINIMUM
  *         (whichever is larger)
  */
 def executorMemory(sc: SparkContext): Int = {
```
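The `@return` rule reads as: the overhead is the configured `spark.mesos.executor.memoryOverhead` if present, otherwise the larger of 10% of executor memory and a fixed minimum, and that overhead is added to the executor memory. A hedged sketch of that rule follows; the constant values and the name `executorMemorySketch` are assumptions here, not the verbatim Spark code.

```scala
import org.apache.spark.SparkContext

val MEMORY_OVERHEAD_FRACTION = 0.10  // assumed value of the 10% factor
val MEMORY_OVERHEAD_MINIMUM = 384    // assumed minimum overhead, in MB

// Sketch: executor memory plus overhead, where overhead defaults to
// max(0.1 * executor memory, MEMORY_OVERHEAD_MINIMUM) unless
// spark.mesos.executor.memoryOverhead is set explicitly.
def executorMemorySketch(sc: SparkContext): Int = {
  val executorMem = sc.getConf.getSizeAsMb("spark.executor.memory", "1g").toInt
  val overhead = sc.getConf.getInt("spark.mesos.executor.memoryOverhead",
    math.max(MEMORY_OVERHEAD_FRACTION * executorMem, MEMORY_OVERHEAD_MINIMUM).toInt)
  executorMem + overhead
}
```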
Review comment: After this, it produces the documentation as below (manually tested).

Scaladoc: [screenshot]
Javadoc: [screenshot]

This also seems not exposed to API documentation anyway.