
Commit 65dd1d0

MaxGekk authored and HyukjinKwon committed
[SPARK-33911][SQL][DOCS][3.0] Update the SQL migration guide about changes in HiveClientImpl
### What changes were proposed in this pull request?

Update the SQL migration guide about the changes made by:
- #30778
- #30711

### Why are the changes needed?

To inform users about the recent changes in the upcoming releases.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

N/A

Closes #30932 from MaxGekk/sql-migr-guide-hiveclientimpl-3.0.

Authored-by: Max Gekk <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
1 parent 1445129 commit 65dd1d0

File tree

1 file changed (+10, -0 lines)


docs/sql-migration-guide.md

Lines changed: 10 additions & 0 deletions
@@ -26,6 +26,10 @@ license: |
 
 - In Spark 3.0.2, `IllegalArgumentException` is returned for the incomplete interval literals, e.g. `INTERVAL '1'`, `INTERVAL '1 DAY 2'`, which are invalid. In Spark 3.0.1, these literals result in `NULL`s.
 
+- In Spark 3.0.2, `AnalysisException` is replaced by its sub-classes that are thrown for tables from Hive external catalog in the following situations:
+  * `ALTER TABLE .. ADD PARTITION` throws `PartitionsAlreadyExistException` if new partition exists already
+  * `ALTER TABLE .. DROP PARTITION` throws `NoSuchPartitionsException` for not existing partitions
+
 ## Upgrading from Spark SQL 3.0 to 3.0.1
 
 - In Spark 3.0, JSON datasource and JSON function `schema_of_json` infer TimestampType from string values if they match to the pattern defined by the JSON option `timestampFormat`. Since version 3.0.1, the timestamp type inference is disabled by default. Set the JSON option `inferTimestamp` to `true` to enable such type inference.
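For readers of the migration note above, here is a minimal Scala sketch of how the more specific exception can be caught after upgrading to Spark 3.0.2 (the same change is listed for 2.4.8 in the next hunk). The SparkSession `spark`, the table `db.logs`, and the partition value are hypothetical, and the import assumes these exception classes live in the `org.apache.spark.sql.catalyst.analysis` package; adjust to your build.

```scala
// Sketch only: assumes a Hive-enabled SparkSession named `spark` and a
// hypothetical partitioned Hive table `db.logs` that already contains
// the partition dt='2020-01-01'.
import org.apache.spark.sql.catalyst.analysis.PartitionsAlreadyExistException

try {
  // Before 3.0.2 this surfaced a plain AnalysisException; now the sub-class
  // PartitionsAlreadyExistException is thrown for the Hive external catalog.
  spark.sql("ALTER TABLE db.logs ADD PARTITION (dt='2020-01-01')")
} catch {
  case e: PartitionsAlreadyExistException =>
    println(s"Partition already exists: ${e.getMessage}")
}
```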
@@ -219,6 +223,12 @@ license: |
 
   * The decimal string representation can be different between Hive 1.2 and Hive 2.3 when using `TRANSFORM` operator in SQL for script transformation, which depends on hive's behavior. In Hive 1.2, the string representation omits trailing zeroes. But in Hive 2.3, it is always padded to 18 digits with trailing zeroes if necessary.
 
+## Upgrading from Spark SQL 2.4.7 to 2.4.8
+
+- In Spark 2.4.8, `AnalysisException` is replaced by its sub-classes that are thrown for tables from Hive external catalog in the following situations:
+  * `ALTER TABLE .. ADD PARTITION` throws `PartitionsAlreadyExistException` if new partition exists already
+  * `ALTER TABLE .. DROP PARTITION` throws `NoSuchPartitionsException` for not existing partitions
+
 ## Upgrading from Spark SQL 2.4.5 to 2.4.6
 
 - In Spark 2.4.6, the `RESET` command does not reset the static SQL configuration values to the default. It only clears the runtime SQL configuration values.
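Because both `PartitionsAlreadyExistException` and `NoSuchPartitionsException` are sub-classes of `AnalysisException`, error handling written against Spark 2.4.7 that catches the parent class should keep working after the 2.4.8 upgrade. A minimal sketch under the same assumptions as above (Hive-enabled `spark` session, hypothetical table name):

```scala
import org.apache.spark.sql.AnalysisException

try {
  // In 2.4.8 this throws NoSuchPartitionsException for a missing partition,
  // which still matches this handler because it extends AnalysisException.
  spark.sql("ALTER TABLE db.logs DROP PARTITION (dt='1999-12-31')")
} catch {
  case e: AnalysisException =>
    println(s"Handled as before: ${e.getMessage}")
}
```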

0 commit comments
