Commit b024a8a

yaooqinn authored and HyukjinKwon committed
[MINOR][DOCS] Fix some links for python api doc
### What changes were proposed in this pull request?

The link for `partition discovery` is malformed: for releases, the full URL contains `/docs/<version>/`, so the root-relative target resolves to the wrong location.

### Why are the changes needed?

Fix the doc.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

Verified locally with `SKIP_SCALADOC=1 SKIP_RDOC=1 SKIP_SQLDOC=1 jekyll serve`.

Closes #28017 from yaooqinn/doc.

Authored-by: Kent Yao <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
1 parent 178d472 · commit b024a8a
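To see why the old root-relative target breaks on release docs, note how URL resolution treats a path starting with `/`: it is resolved against the host root, discarding the `/docs/<version>/` prefix of the page it appears on. A minimal sketch with Python's stdlib (the base page URL below is illustrative, not taken from the diff):

```python
from urllib.parse import urljoin

# A versioned API doc page (illustrative URL, not from the diff).
base = "https://spark.apache.org/docs/3.0.0/api/python/pyspark.sql.html"

# The old, root-relative link target from the docstring: resolution drops
# the /docs/3.0.0/ prefix because the path starts at the host root.
broken = urljoin(base, "/sql-data-sources-parquet.html#partition-discovery")
print(broken)  # https://spark.apache.org/sql-data-sources-parquet.html#partition-discovery

# The fix pins an absolute URL, unaffected by which page links to it.
fixed = urljoin(
    base,
    "https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#partition-discovery",
)
print(fixed)   # https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#partition-discovery
```

The same resolution rules apply in the browser, which is why the link only misbehaved on published release docs.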

File tree

3 files changed: +4, -14 lines

python/pyspark/sql/functions.py

Lines changed: 0 additions & 4 deletions
@@ -1196,8 +1196,6 @@ def to_date(col, format=None):
     By default, it follows casting rules to :class:`pyspark.sql.types.DateType` if the format
     is omitted. Equivalent to ``col.cast("date")``.
 
-    .. _datetime pattern: https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html
-
     >>> df = spark.createDataFrame([('1997-02-28 10:30:00',)], ['t'])
     >>> df.select(to_date(df.t).alias('date')).collect()
     [Row(date=datetime.date(1997, 2, 28))]
@@ -1221,8 +1219,6 @@ def to_timestamp(col, format=None):
     By default, it follows casting rules to :class:`pyspark.sql.types.TimestampType` if the format
     is omitted. Equivalent to ``col.cast("timestamp")``.
 
-    .. _datetime pattern: https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html
-
     >>> df = spark.createDataFrame([('1997-02-28 10:30:00',)], ['t'])
     >>> df.select(to_timestamp(df.t).alias('dt')).collect()
     [Row(dt=datetime.datetime(1997, 2, 28, 10, 30))]
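The doctests above show the default (format-omitted) behavior the removed link used to annotate. As a plain-Python analogue of those doctest values (an illustration only, not how Spark implements the cast):

```python
from datetime import datetime

# Parse the sample string from the doctests, then truncate to a date,
# mirroring to_date's default cast behavior (illustrative analogue only).
dt = datetime.strptime('1997-02-28 10:30:00', '%Y-%m-%d %H:%M:%S')
print(dt.date())  # 1997-02-28, the value the to_date doctest shows
print(dt)         # equal to the datetime in the to_timestamp doctest
```

When a `format` argument is supplied instead, Spark interprets it per its own datetime pattern reference, which differs from Python's `strptime` directives.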

python/pyspark/sql/readwriter.py

Lines changed: 2 additions & 7 deletions
@@ -253,7 +253,8 @@ def json(self, path, schema=None, primitivesAsString=None, prefersDecimal=None,
     :param recursiveFileLookup: recursively scan a directory for files. Using this option
                                 disables `partition discovery`_.
 
-    .. _partition discovery: /sql-data-sources-parquet.html#partition-discovery
+    .. _partition discovery:
+        https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#partition-discovery
     .. _datetime pattern: https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html
 
     >>> df1 = spark.read.json('python/test_support/sql/people.json')
@@ -490,8 +491,6 @@ def csv(self, path, schema=None, sep=None, encoding=None, quote=None, escape=Non
     :param recursiveFileLookup: recursively scan a directory for files. Using this option
                                 disables `partition discovery`_.
 
-    .. _datetime pattern: https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html
-
     >>> df = spark.read.csv('python/test_support/sql/ages.csv')
     >>> df.dtypes
     [('_c0', 'string'), ('_c1', 'string')]
@@ -865,8 +864,6 @@ def json(self, path, mode=None, compression=None, dateFormat=None, timestampForm
     :param ignoreNullFields: Whether to ignore null fields when generating JSON objects.
                              If None is set, it uses the default value, ``true``.
 
-    .. _datetime pattern: https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html
-
     >>> df.write.json(os.path.join(tempfile.mkdtemp(), 'data'))
     """
     self.mode(mode)
@@ -981,8 +978,6 @@ def csv(self, path, mode=None, compression=None, sep=None, quote=None, escape=No
     :param lineSep: defines the line separator that should be used for writing. If None is
                     set, it uses the default value, ``\\n``. Maximum length is 1 character.
 
-    .. _datetime pattern: https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html
-
     >>> df.write.csv(os.path.join(tempfile.mkdtemp(), 'data'))
     """
     self.mode(mode)

python/pyspark/sql/streaming.py

Lines changed: 2 additions & 3 deletions
@@ -489,7 +489,8 @@ def json(self, path, schema=None, primitivesAsString=None, prefersDecimal=None,
     :param recursiveFileLookup: recursively scan a directory for files. Using this option
                                 disables `partition discovery`_.
 
-    .. _partition discovery: /sql-data-sources-parquet.html#partition-discovery
+    .. _partition discovery:
+        https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#partition-discovery
     .. _datetime pattern: https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html
 
     >>> json_sdf = spark.readStream.json(tempfile.mkdtemp(), schema = sdf_schema)
@@ -725,8 +726,6 @@ def csv(self, path, schema=None, sep=None, encoding=None, quote=None, escape=Non
     :param recursiveFileLookup: recursively scan a directory for files. Using this option
                                 disables `partition discovery`_.
 
-    .. _datetime pattern: https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html
-
     >>> csv_sdf = spark.readStream.csv(tempfile.mkdtemp(), schema = sdf_schema)
     >>> csv_sdf.isStreaming
     True
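The `partition discovery` feature these docstrings link to reads `key=value` directory names and turns them into columns, which is what `recursiveFileLookup` disables. A toy, stdlib-only sketch of that layout and of deriving the partition column from paths (an illustration of the on-disk convention only, not Spark's actual discovery code):

```python
import os
import tempfile

# Build a partitioned directory layout: key=value directory names.
root = tempfile.mkdtemp()
for year in (2019, 2020):
    part_dir = os.path.join(root, f"year={year}")
    os.makedirs(part_dir, exist_ok=True)
    with open(os.path.join(part_dir, "part-0000.json"), "w") as f:
        f.write('{"name": "x"}\n')

# Derive the partition column the way discovery conceptually would:
# split each key=value directory name into (column, value) pairs.
partitions = sorted(d.split("=") for d in os.listdir(root) if "=" in d)
print(partitions)  # [['year', '2019'], ['year', '2020']]
```

With `recursiveFileLookup` enabled, Spark treats such directories as plain folders to scan, so no `year` column would be inferred.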
