6 changes: 3 additions & 3 deletions python/pyspark/sql/group.py
@@ -169,11 +169,11 @@ def sum(self, *cols):

     @since(1.6)
     def pivot(self, pivot_col, values=None):
-        """Pivots a column of the current DataFrame and preform the specified aggregation.
+        """Pivots a column of the current DataFrame and perform the specified aggregation.

         :param pivot_col: Column to pivot
-        :param values: Optional list of values of pivotColumn that will be translated to columns in
-            the output data frame. If values are not provided the method with do an immediate call
+        :param values: Optional list of values of pivot column that will be translated to columns in
+            the output DataFrame. If values are not provided the method will do an immediate call
             to .distinct() on the pivot column.

         >>> df4.groupBy("year").pivot("course", ["dotNET", "Java"]).sum("earnings").collect()
6 changes: 3 additions & 3 deletions sql/core/src/main/scala/org/apache/spark/sql/GroupedData.scala
@@ -282,7 +282,7 @@ class GroupedData protected[sql](
   }

   /**
-   * Pivots a column of the current [[DataFrame]] and preform the specified aggregation.
+   * Pivots a column of the current [[DataFrame]] and perform the specified aggregation.
    * There are two versions of pivot function: one that requires the caller to specify the list
    * of distinct values to pivot on, and one that does not. The latter is more concise but less
    * efficient, because Spark needs to first compute the list of distinct values internally.
@@ -321,7 +321,7 @@ class GroupedData protected[sql](
   }

   /**
-   * Pivots a column of the current [[DataFrame]] and preform the specified aggregation.
+   * Pivots a column of the current [[DataFrame]] and perform the specified aggregation.
    * There are two versions of pivot function: one that requires the caller to specify the list
    * of distinct values to pivot on, and one that does not. The latter is more concise but less
    * efficient, because Spark needs to first compute the list of distinct values internally.
@@ -353,7 +353,7 @@ class GroupedData protected[sql](
   }

   /**
-   * Pivots a column of the current [[DataFrame]] and preform the specified aggregation.
+   * Pivots a column of the current [[DataFrame]] and perform the specified aggregation.
    * There are two versions of pivot function: one that requires the caller to specify the list
    * of distinct values to pivot on, and one that does not. The latter is more concise but less
    * efficient, because Spark needs to first compute the list of distinct values internally.
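Both the Python docstring and the Scala scaladoc patched above describe two ways to call pivot: one where the caller supplies the distinct values to pivot on, and one where Spark first computes them itself with a `.distinct()` pass over the pivot column. The following is a minimal Scala sketch of the two call styles under that reading; the local-mode setup and the `courseSales` sample data are illustrative assumptions, not part of this patch.

```scala
// Sketch of the two pivot call styles described in the patched docs.
// Assumes a Spark 1.6-era API (SQLContext, GroupedData); names and data are illustrative.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object PivotSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("pivot-sketch").setMaster("local[2]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val courseSales = Seq(
      ("dotNET", 2012, 10000), ("Java", 2012, 20000),
      ("dotNET", 2013, 48000), ("Java", 2013, 30000)
    ).toDF("course", "year", "earnings")

    // Version 1: the caller lists the distinct pivot values, so Spark can build
    // the output columns ("dotNET", "Java") without scanning the data first.
    courseSales.groupBy("year").pivot("course", Seq("dotNET", "Java")).sum("earnings").show()

    // Version 2: more concise, but Spark internally computes the distinct values
    // of the pivot column before pivoting, which costs an extra pass over the data.
    courseSales.groupBy("year").pivot("course").sum("earnings").show()

    sc.stop()
  }
}
```

Supplying the values up front is the more efficient form, since the concise form needs the extra distinct-value computation the scaladoc mentions.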