
Commit 0bacdaf

HyukjinKwon authored and cmonkey committed
[SPARK-3249][DOC] Fix links in ScalaDoc that cause warning messages in sbt/sbt unidoc
## What changes were proposed in this pull request?

This PR proposes to fix ambiguous link warnings by simply turning the ambiguous links into code blocks for both javadoc and scaladoc.

```
[warn] .../spark/core/src/main/scala/org/apache/spark/Accumulator.scala:20: The link target "SparkContext#accumulator" is ambiguous. Several members fit the target:
[warn] .../spark/mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala:281: The link target "runMiniBatchSGD" is ambiguous. Several members fit the target:
[warn] .../spark/mllib/src/main/scala/org/apache/spark/mllib/fpm/AssociationRules.scala:83: The link target "run" is ambiguous. Several members fit the target:
...
```

This PR also fixes the javadoc8 break below:

```
[error] .../spark/sql/core/target/java/org/apache/spark/sql/LowPrioritySQLImplicits.java:7: error: reference not found
[error]  * newProductEncoder - to disambiguate for {link List}s which are both {link Seq} and {link Product}
[error]    ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/LowPrioritySQLImplicits.java:7: error: reference not found
[error]  * newProductEncoder - to disambiguate for {link List}s which are both {link Seq} and {link Product}
[error]    ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/LowPrioritySQLImplicits.java:7: error: reference not found
[error]  * newProductEncoder - to disambiguate for {link List}s which are both {link Seq} and {link Product}
[error]    ^
[info] 3 errors
```

## How was this patch tested?

Manually via `sbt unidoc > output.txt`, then checked via `cat output.txt | grep ambiguous` and `sbt unidoc | grep error`.

Author: hyukjinkwon <[email protected]>

Closes apache#16604 from HyukjinKwon/SPARK-3249.
1 parent 769ba0a commit 0bacdaf
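To make the pattern concrete, here is a minimal, hypothetical sketch (the `ScaladocLinkExample`, `Printer`, and `print` names are made up and are not part of this patch): an overloaded member makes a `[[...]]` Scaladoc link ambiguous, so the reference is written as a backtick code span instead, which Scaladoc renders as plain code and genjavadoc/javadoc8 therefore never tries to resolve.

```scala
// Hypothetical illustration of the pattern this commit applies; not code from the patch.
object ScaladocLinkExample {

  /**
   * Prints a value.
   *
   * Ambiguous link form that `sbt unidoc` warns about:  [[ScaladocLinkExample.Printer#print]]
   * Code-span form used by this commit (nothing to resolve):  `Printer.print`
   */
  class Printer {
    def print(value: String): Unit = println(value) // overload 1
    def print(value: Int): Unit = println(value)    // overload 2
  }

  def main(args: Array[String]): Unit = {
    val p = new Printer
    p.print("hello") // uses the String overload
    p.print(42)      // uses the Int overload
  }
}
```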

20 files changed: +93, -86 lines


core/src/main/scala/org/apache/spark/Accumulator.scala

Lines changed: 1 addition & 2 deletions
@@ -24,8 +24,7 @@ package org.apache.spark
  * They can be used to implement counters (as in MapReduce) or sums. Spark natively supports
  * accumulators of numeric value types, and programmers can add support for new types.
  *
- * An accumulator is created from an initial value `v` by calling
- * [[SparkContext#accumulator SparkContext.accumulator]].
+ * An accumulator is created from an initial value `v` by calling `SparkContext.accumulator`.
  * Tasks running on the cluster can then add to it using the `+=` operator.
  * However, they cannot read its value. Only the driver program can read the accumulator's value,
  * using its [[#value]] method.

core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala

Lines changed: 2 additions & 2 deletions
@@ -166,7 +166,7 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])
  * Return a subset of this RDD sampled by key (via stratified sampling) containing exactly
  * math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
  *
- * This method differs from [[sampleByKey]] in that we make additional passes over the RDD to
+ * This method differs from `sampleByKey` in that we make additional passes over the RDD to
  * create a sample size that's exactly equal to the sum of math.ceil(numItems * samplingRate)
  * over all key values with a 99.99% confidence. When sampling without replacement, we need one
  * additional pass over the RDD to guarantee sample size; when sampling with replacement, we need

@@ -184,7 +184,7 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])
  * Return a subset of this RDD sampled by key (via stratified sampling) containing exactly
  * math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
  *
- * This method differs from [[sampleByKey]] in that we make additional passes over the RDD to
+ * This method differs from `sampleByKey` in that we make additional passes over the RDD to
  * create a sample size that's exactly equal to the sum of math.ceil(numItems * samplingRate)
  * over all key values with a 99.99% confidence. When sampling without replacement, we need one
  * additional pass over the RDD to guarantee sample size; when sampling with replacement, we need

core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala

Lines changed: 2 additions & 2 deletions
@@ -393,7 +393,7 @@ trait JavaRDDLike[T, This <: JavaRDDLike[T, This]] extends Serializable {
   def treeReduce(f: JFunction2[T, T, T], depth: Int): T = rdd.treeReduce(f, depth)

   /**
-   * [[org.apache.spark.api.java.JavaRDDLike#treeReduce]] with suggested depth 2.
+   * `org.apache.spark.api.java.JavaRDDLike.treeReduce` with suggested depth 2.
    */
   def treeReduce(f: JFunction2[T, T, T]): T = treeReduce(f, 2)

@@ -440,7 +440,7 @@ trait JavaRDDLike[T, This <: JavaRDDLike[T, This]] extends Serializable {
   }

   /**
-   * [[org.apache.spark.api.java.JavaRDDLike#treeAggregate]] with suggested depth 2.
+   * `org.apache.spark.api.java.JavaRDDLike.treeAggregate` with suggested depth 2.
    */
   def treeAggregate[U](
       zeroValue: U,

core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala

Lines changed: 3 additions & 3 deletions
@@ -109,7 +109,7 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
   * functions. This method is here for backward compatibility. It does not provide combiner
   * classtag information to the shuffle.
   *
-  * @see [[combineByKeyWithClassTag]]
+  * @see `combineByKeyWithClassTag`
   */
  def combineByKey[C](
      createCombiner: V => C,

@@ -127,7 +127,7 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
   * This method is here for backward compatibility. It does not provide combiner
   * classtag information to the shuffle.
   *
-  * @see [[combineByKeyWithClassTag]]
+  * @see `combineByKeyWithClassTag`
   */
  def combineByKey[C](
      createCombiner: V => C,

@@ -608,7 +608,7 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
   * existing partitioner/parallelism level. This method is here for backward compatibility. It
   * does not provide combiner classtag information to the shuffle.
   *
-  * @see [[combineByKeyWithClassTag]]
+  * @see `combineByKeyWithClassTag`
   */
  def combineByKey[C](
      createCombiner: V => C,

core/src/main/scala/org/apache/spark/rdd/RDD.scala

Lines changed: 1 addition & 1 deletion
@@ -1877,7 +1877,7 @@ abstract class RDD[T: ClassTag](
  * Defines implicit functions that provide extra functionalities on RDDs of specific types.
  *
  * For example, [[RDD.rddToPairRDDFunctions]] converts an RDD into a [[PairRDDFunctions]] for
- * key-value-pair RDDs, and enabling extra functionalities such as [[PairRDDFunctions.reduceByKey]].
+ * key-value-pair RDDs, and enabling extra functionalities such as `PairRDDFunctions.reduceByKey`.
  */
 object RDD {

graphx/src/main/scala/org/apache/spark/graphx/Graph.scala

Lines changed: 1 addition & 1 deletion
@@ -331,7 +331,7 @@ abstract class Graph[VD: ClassTag, ED: ClassTag] protected () extends Serializab

   /**
    * Merges multiple edges between two vertices into a single edge. For correct results, the graph
-   * must have been partitioned using [[partitionBy]].
+   * must have been partitioned using `partitionBy`.
    *
    * @param merge the user-supplied commutative associative function to merge edge attributes
    * for duplicate edges.

graphx/src/main/scala/org/apache/spark/graphx/GraphOps.scala

Lines changed: 2 additions & 2 deletions
@@ -428,7 +428,7 @@ class GraphOps[VD: ClassTag, ED: ClassTag](graph: Graph[VD, ED]) extends Seriali
   * Compute the connected component membership of each vertex and return a graph with the vertex
   * value containing the lowest vertex id in the connected component containing that vertex.
   *
-  * @see [[org.apache.spark.graphx.lib.ConnectedComponents$#run]]
+  * @see `org.apache.spark.graphx.lib.ConnectedComponents.run`
   */
  def connectedComponents(): Graph[VertexId, ED] = {
    ConnectedComponents.run(graph)

@@ -438,7 +438,7 @@ class GraphOps[VD: ClassTag, ED: ClassTag](graph: Graph[VD, ED]) extends Seriali
   * Compute the connected component membership of each vertex and return a graph with the vertex
   * value containing the lowest vertex id in the connected component containing that vertex.
   *
-  * @see [[org.apache.spark.graphx.lib.ConnectedComponents$#run]]
+  * @see `org.apache.spark.graphx.lib.ConnectedComponents.run`
   */
  def connectedComponents(maxIterations: Int): Graph[VertexId, ED] = {
    ConnectedComponents.run(graph, maxIterations)

mllib/src/main/scala/org/apache/spark/ml/Pipeline.scala

Lines changed: 5 additions & 5 deletions
@@ -83,11 +83,11 @@ abstract class PipelineStage extends Params with Logging {

 /**
  * A simple pipeline, which acts as an estimator. A Pipeline consists of a sequence of stages, each
- * of which is either an [[Estimator]] or a [[Transformer]]. When [[Pipeline#fit]] is called, the
- * stages are executed in order. If a stage is an [[Estimator]], its [[Estimator#fit]] method will
+ * of which is either an [[Estimator]] or a [[Transformer]]. When `Pipeline.fit` is called, the
+ * stages are executed in order. If a stage is an [[Estimator]], its `Estimator.fit` method will
  * be called on the input dataset to fit a model. Then the model, which is a transformer, will be
  * used to transform the dataset as the input to the next stage. If a stage is a [[Transformer]],
- * its [[Transformer#transform]] method will be called to produce the dataset for the next stage.
+ * its `Transformer.transform` method will be called to produce the dataset for the next stage.
  * The fitted model from a [[Pipeline]] is a [[PipelineModel]], which consists of fitted models and
  * transformers, corresponding to the pipeline stages. If there are no stages, the pipeline acts as
  * an identity transformer.

@@ -121,9 +121,9 @@ class Pipeline @Since("1.4.0") (

   /**
    * Fits the pipeline to the input dataset with additional parameters. If a stage is an
-   * [[Estimator]], its [[Estimator#fit]] method will be called on the input dataset to fit a model.
+   * [[Estimator]], its `Estimator.fit` method will be called on the input dataset to fit a model.
    * Then the model, which is a transformer, will be used to transform the dataset as the input to
-   * the next stage. If a stage is a [[Transformer]], its [[Transformer#transform]] method will be
+   * the next stage. If a stage is a [[Transformer]], its `Transformer.transform` method will be
    * called to produce the dataset for the next stage. The fitted model from a [[Pipeline]] is an
    * [[PipelineModel]], which consists of fitted models and transformers, corresponding to the
    * pipeline stages. If there are no stages, the output model acts as an identity transformer.

mllib/src/main/scala/org/apache/spark/ml/param/params.scala

Lines changed: 1 addition & 1 deletion
@@ -728,7 +728,7 @@ trait Params extends Identifiable with Serializable {
   }

   /**
-   * [[extractParamMap]] with no extra values.
+   * `extractParamMap` with no extra values.
    */
   final def extractParamMap(): ParamMap = {
     extractParamMap(ParamMap.empty)

mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAModel.scala

Lines changed: 8 additions & 4 deletions
@@ -237,7 +237,7 @@ class LocalLDAModel private[spark] (
       vocabSize)

   /**
-   * Java-friendly version of [[logLikelihood]]
+   * Java-friendly version of `logLikelihood`
    */
   @Since("1.5.0")
   def logLikelihood(documents: JavaPairRDD[java.lang.Long, Vector]): Double = {

@@ -259,7 +259,9 @@ class LocalLDAModel private[spark] (
     -logLikelihood(documents) / corpusTokenCount
   }

-  /** Java-friendly version of [[logPerplexity]] */
+  /**
+   * Java-friendly version of `logPerplexity`
+   */
   @Since("1.5.0")
   def logPerplexity(documents: JavaPairRDD[java.lang.Long, Vector]): Double = {
     logPerplexity(documents.rdd.asInstanceOf[RDD[(Long, Vector)]])

@@ -365,7 +367,9 @@ class LocalLDAModel private[spark] (
     }
   }

-  /** Get a method usable as a UDF for [[topicDistributions()]] */
+  /**
+   * Get a method usable as a UDF for `topicDistributions()`
+   */
   private[spark] def getTopicDistributionMethod(sc: SparkContext): Vector => Vector = {
     val expElogbeta = exp(LDAUtils.dirichletExpectation(topicsMatrix.asBreeze.toDenseMatrix.t).t)
     val expElogbetaBc = sc.broadcast(expElogbeta)

@@ -414,7 +418,7 @@ class LocalLDAModel private[spark] (
   }

   /**
-   * Java-friendly version of [[topicDistributions]]
+   * Java-friendly version of `topicDistributions`
    */
   @Since("1.4.1")
   def topicDistributions(
