[SPARK-33529][SQL] Handle '__HIVE_DEFAULT_PARTITION__' while resolving V2 partition specs #30482
Changes from all commits: 3ea40a7, 052c017, 846589f, a2fc4a3, eadb9a7, 687cd6c, 4ad95c5, 26b83a1
Changes to `AlterTablePartitionV2SQLSuite`:

```diff
@@ -243,4 +243,22 @@ class AlterTablePartitionV2SQLSuite extends DatasourceV2SQLBase {
       assert(!partTable.partitionExists(expectedPartition))
     }
   }
+
+  test("SPARK-33529: handle __HIVE_DEFAULT_PARTITION__") {
+    val t = "testpart.ns1.ns2.tbl"
+    withTable(t) {
+      sql(s"CREATE TABLE $t (part0 string) USING foo PARTITIONED BY (part0)")
+      val partTable = catalog("testpart")
+        .asTableCatalog
+        .loadTable(Identifier.of(Array("ns1", "ns2"), "tbl"))
+        .asPartitionable
+      val expectedPartition = InternalRow.fromSeq(Seq[Any](null))
+      assert(!partTable.partitionExists(expectedPartition))
+      val partSpec = "PARTITION (part0 = '__HIVE_DEFAULT_PARTITION__')"
+      sql(s"ALTER TABLE $t ADD $partSpec")
+      assert(partTable.partitionExists(expectedPartition))
+      spark.sql(s"ALTER TABLE $t DROP $partSpec")
+      assert(!partTable.partitionExists(expectedPartition))
+    }
+  }
 }
```

Review thread on the new test:

Contributor: I'm not sure about it. It's more like a Hive-specific thing, and we should let v2 implementations decide how to handle null partition values. This should be an internal detail and shouldn't be exposed to end users.

Member (author): ok. How can users specify …

Contributor: does …

Member (author): For example, if we have a string partitioned column, how could we distinguish …

Contributor: The parser should recognize different literals, e.g. …

Member (author): I have checked that.

Member (author): It is already a Spark-specific thing too. Implementations don't see …

Member: Addressed @cloud-fan 's comment #30454 (comment)
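The thread above debates where the sentinel should be resolved: the test expects `PARTITION (part0 = '__HIVE_DEFAULT_PARTITION__')` to resolve to a null partition value, while the reviewer argues a parsed NULL literal is the cleaner way to express it. As a toy sketch only (assuming the behavior the test asserts; `resolvePartValue`, `ResolveSentinelSketch`, and the `Option[String]` encoding of "NULL literal vs. string literal" are hypothetical and not Spark APIs), both paths ending at null could look like:

```scala
object ResolveSentinelSketch {
  // Hive's marker for a null partition value, as exercised by the test above.
  val HiveDefaultPartition = "__HIVE_DEFAULT_PARTITION__"

  // Hypothetical resolver: None models a parsed NULL literal; a Some models a
  // string literal. Both an explicit NULL and the Hive sentinel resolve to
  // null; any other string is kept as-is.
  def resolvePartValue(raw: Option[String]): Any = raw match {
    case None                       => null  // PARTITION (part0 = null)
    case Some(HiveDefaultPartition) => null  // sentinel handled by this PR
    case Some(other)                => other // ordinary string value
  }

  def main(args: Array[String]): Unit = {
    assert(resolvePartValue(Some("__HIVE_DEFAULT_PARTITION__")) == null)
    assert(resolvePartValue(None) == null)
    assert(resolvePartValue(Some("2020-11-25")) == "2020-11-25")
    println("ok")
  }
}
```

Note the design tension the reviewer raises: in this sketch the sentinel mapping lives in the resolver, so a v2 implementation would never observe the `__HIVE_DEFAULT_PARTITION__` string, only the null it resolves to.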