
Conversation

@gatorsmile
Member

What changes were proposed in this pull request?

Currently, we ignore the table-specific compression conf when Hive serde tables are converted to data source tables. We also ignore it when users set the compression in the TBLPROPERTIES clause instead of the OPTIONS clause, even for native data source tables. This PR fixes both issues.

PR #20087 will make the Hive serde writer aware of the table-level compression property.

PR #20076 will make Spark aware of the table-specific conf parquet.compression as well.
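
For illustration, here is a minimal sketch of the two places a user can declare the codec, assuming a running SparkSession named spark; the table names and the gzip choice are illustrative, not taken from the PR:

```scala
// Native Parquet data source table with the codec in OPTIONS:
// this form has always been honored.
spark.sql("""
  CREATE TABLE t_options (id INT)
  USING parquet
  OPTIONS ('compression' = 'gzip')
""")

// The same codec in TBLPROPERTIES: previously ignored for native data source
// tables (and for Hive serde tables converted to data source tables),
// honored after this change.
spark.sql("""
  CREATE TABLE t_props (id INT)
  USING parquet
  TBLPROPERTIES ('compression' = 'gzip')
""")
```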

How was this patch tested?

Added test cases
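
A rough sketch of what such a test can verify (this is not the actual test code added in the PR; the file-suffix check relies on Spark's Parquet writer encoding the codec in the part-file name, e.g. ".gz.parquet" for gzip):

```scala
import java.io.File

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("compression-sketch")
  .master("local[1]")
  .getOrCreate()

// Create a table whose codec comes only from TBLPROPERTIES and write to it.
spark.sql("""
  CREATE TABLE t_codec (id INT)
  USING parquet
  TBLPROPERTIES ('compression' = 'gzip')
""")
spark.sql("INSERT INTO t_codec VALUES (1), (2)")

// DESCRIBE EXTENDED reports the table location; the codec actually used by
// the writer shows up in the part-file names under that directory.
val location = spark.sql("DESCRIBE EXTENDED t_codec")
  .filter("col_name = 'Location'")
  .first().getString(1).stripPrefix("file:")
assert(new File(location).listFiles().exists(_.getName.endsWith(".gz.parquet")))

spark.sql("DROP TABLE t_codec")
```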

@gatorsmile
Member Author

cc @cloud-fan

@SparkQA

SparkQA commented Dec 30, 2017

Test build #85538 has finished for PR 20120 at commit 7189562.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@gatorsmile
Member Author

retest this please

@SparkQA

SparkQA commented Dec 30, 2017

Test build #85546 has finished for PR 20120 at commit 7189562.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan
Contributor

I think we should document the difference between table options and table properties. AFAIK we added table properties for data source tables in Spark 2.3; previously, table options were the only place for users to put configs that change behavior.
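
For anyone reading along, a hedged sketch of how the two clauses differ today, assuming a SparkSession named spark (the output labels shown in the comments are approximate): OPTIONS are passed to the data source and surface as storage properties, while TBLPROPERTIES are stored as generic catalog table properties.

```scala
// Sketch only; not documentation text from the PR.
spark.sql("""
  CREATE TABLE t_doc (id INT)
  USING parquet
  OPTIONS ('compression' = 'snappy')
  TBLPROPERTIES ('compression' = 'gzip')
""")

// DESCRIBE EXTENDED lists the two in separate sections, roughly:
//   Storage Properties   [compression=snappy]   <- from OPTIONS
//   Table Properties     [compression=gzip]     <- from TBLPROPERTIES
spark.sql("DESCRIBE EXTENDED t_doc").show(numRows = 100, truncate = false)
```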

@dongjoon-hyun
Member

@gatorsmile, did you add documentation for this?
