
Commit f35b10a

[SPARK-17264][SQL] DataStreamWriter should document that it only supports Parquet for now
## What changes were proposed in this pull request?

Clarify that only parquet files are supported by DataStreamWriter now.

## How was this patch tested?

(Doc build -- no functional changes to test)

Author: Sean Owen <[email protected]>

Closes #14860 from srowen/SPARK-17264.

(cherry picked from commit befab9c)
Signed-off-by: Sean Owen <[email protected]>
1 parent 5903257 commit f35b10a

File tree: 2 files changed (+2, -2 lines changed)


python/pyspark/sql/streaming.py

Lines changed: 1 addition & 1 deletion

@@ -589,7 +589,7 @@ def format(self, source):
 
         .. note:: Experimental.
 
-        :param source: string, name of the data source, e.g. 'json', 'parquet'.
+        :param source: string, name of the data source, which for now can be 'parquet'.
 
         >>> writer = sdf.writeStream.format('json')
         """

sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala

Lines changed: 1 addition & 1 deletion

@@ -122,7 +122,7 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
 
   /**
    * :: Experimental ::
-   * Specifies the underlying output data source. Built-in options include "parquet", "json", etc.
+   * Specifies the underlying output data source. Built-in options include "parquet" for now.
   *
   * @since 2.0.0
   */

0 commit comments
