
Commit b57fac5

raelawang authored and rxin committed
[SPARK-13274] Fix Aggregator Links on GroupedDataset Scala API
Update Aggregator links to point to #org.apache.spark.sql.expressions.Aggregator

Author: raela <[email protected]>

Closes #11158 from raelawang/master.

(cherry picked from commit 719973b)
Signed-off-by: Reynold Xin <[email protected]>
1 parent: 93f1d91

1 file changed: +8 additions, -4 deletions

sql/core/src/main/scala/org/apache/spark/sql/GroupedDataset.scala

Lines changed: 8 additions & 4 deletions
@@ -101,7 +101,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -129,7 +130,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -149,7 +151,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -170,7 +173,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
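For context: the fully qualified [[org.apache.spark.sql.expressions#Aggregator Aggregator]] form lets Scaladoc resolve the link from GroupedDataset.scala, where Aggregator is not imported, and it points readers to the API the docstring recommends because, unlike mapGroups/flatMapGroups, an Aggregator supports partial (map-side) aggregation. A minimal sketch, assuming the Spark 1.6-era Aggregator API (zero / reduce / merge / finish, with toColumn taking implicit encoders); the names sumOfValues and ds are illustrative, not from this patch:

import org.apache.spark.sql.expressions.Aggregator

// Sums the Long values within each group; input, buffer, and output are all Long.
val sumOfValues = new Aggregator[Long, Long, Long] {
  def zero: Long = 0L                             // empty buffer for a new group
  def reduce(b: Long, a: Long): Long = b + a      // fold one input value into the buffer
  def merge(b1: Long, b2: Long): Long = b1 + b2   // combine map-side partial buffers
  def finish(reduction: Long): Long = reduction   // produce the group's final result
}

// Usage sketch over a hypothetical Dataset[Long] `ds`, grouped by parity
// (assumes `import sqlContext.implicits._` is in scope for the encoders):
// val sums = ds.groupBy(_ % 2).agg(sumOfValues.toColumn)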
