@@ -101,7 +101,8 @@ class GroupedDataset[K, V] private[sql](
  *
  * This function does not support partial aggregation, and as a result requires shuffling all
  * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
- * key, it is best to use the reduce function or an [[Aggregator]].
+ * key, it is best to use the reduce function or an
+ * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
  *
  * Internally, the implementation will spill to disk if any given group is too large to fit into
  * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -129,7 +130,8 @@ class GroupedDataset[K, V] private[sql](
  *
  * This function does not support partial aggregation, and as a result requires shuffling all
  * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
- * key, it is best to use the reduce function or an [[Aggregator]].
+ * key, it is best to use the reduce function or an
+ * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
  *
  * Internally, the implementation will spill to disk if any given group is too large to fit into
  * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -149,7 +151,8 @@ class GroupedDataset[K, V] private[sql](
  *
  * This function does not support partial aggregation, and as a result requires shuffling all
  * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
- * key, it is best to use the reduce function or an [[Aggregator]].
+ * key, it is best to use the reduce function or an
+ * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
  *
  * Internally, the implementation will spill to disk if any given group is too large to fit into
  * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -170,7 +173,8 @@ class GroupedDataset[K, V] private[sql](
  *
  * This function does not support partial aggregation, and as a result requires shuffling all
  * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
- * key, it is best to use the reduce function or an [[Aggregator]].
+ * key, it is best to use the reduce function or an
+ * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
  *
  * Internally, the implementation will spill to disk if any given group is too large to fit into
  * memory. However, users must take care to avoid materializing the whole iterator for a group
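
For readers following the new link: the `Aggregator` referenced here supports partial aggregation, which the group-wise functions documented above do not. Below is a minimal sketch against the Spark 1.6-era API that `GroupedDataset` belongs to (a hypothetical example, not part of this patch; later Spark versions add abstract `bufferEncoder`/`outputEncoder` members):

```scala
import org.apache.spark.sql.expressions.Aggregator

// Hypothetical example: sums Long values per key. Because an Aggregator
// exposes merge(), Spark can combine partial sums on the map side before
// shuffling, unlike mapGroups/flatMapGroups.
object SumLong extends Aggregator[Long, Long, Long] {
  def zero: Long = 0L                            // empty aggregation buffer
  def reduce(b: Long, a: Long): Long = b + a     // fold one input into the buffer
  def merge(b1: Long, b2: Long): Long = b1 + b2  // combine two partial buffers
  def finish(reduction: Long): Long = reduction  // produce the final result
}

// Usage sketch, assuming ds: Dataset[Long] and implicit encoders in scope:
//   ds.groupBy(_ % 10).agg(SumLong.toColumn)
```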