Conversation

@kevinyu98
Contributor

Hello Michael & All:

We had some issues submitting the new code in the other PR (#10299), so we closed that PR and opened this one with the fix.

The previous failure happened because, when there is a filter that is not pushed down (a "left-over" filter), the projection used for the scan can differ from the original projection in its elements or their ordering.

With this new code, the approach to solving the problem is:

Insert a new Project on top of the scan if the "left-over" filter is nonempty, the original projection is non-empty, and the projection used for the scan has more than one element, which could otherwise produce a different ordering of the projected columns. A sketch of the idea follows.
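The shape of the fix, as a minimal self-contained sketch with simplified placeholder types (`Plan`, `ScanNode`, `FilterNode`, `ProjectNode`, and `planScan` are illustrative names only, not Spark's actual planner classes):

```scala
// Illustrative sketch only: simplified placeholder types, not the actual patch.
case class Attribute(name: String)

sealed trait Plan
case class ScanNode(output: Seq[Attribute]) extends Plan
case class FilterNode(condition: String, child: Plan) extends Plan
case class ProjectNode(projectList: Seq[Attribute], child: Plan) extends Plan

def planScan(
    originalProjection: Seq[Attribute],
    scanProjection: Seq[Attribute],
    leftOverFilter: Option[String]): Plan = {
  val scan = ScanNode(scanProjection)
  leftOverFilter match {
    case None => scan
    case Some(condition) =>
      val filtered = FilterNode(condition, scan)
      // A left-over filter may have forced extra columns into the scan, so the
      // scan output can differ from the requested projection in elements or
      // ordering. An explicit Project on top restores the original projection.
      if (originalProjection.nonEmpty && scanProjection.size > 1) {
        ProjectNode(originalProjection, filtered)
      } else {
        filtered
      }
  }
}
```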

We added three test cases to cover the previously failing cases.

@kevinyu98
Contributor Author

@marmbrus: Could you take a look at this PR? Thanks for your review.

@marmbrus
Contributor

ok to test

@SparkQA

SparkQA commented Dec 21, 2015

Test build #48127 has finished for PR 10388 at commit 305739f.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Contributor

Don't call toSet. Anything involving attribute set logic should be done with an AttributeSet (which ignores cosmetic differences like capitalization).
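For illustration, a hedged sketch of the reviewer's point, assuming the Spark 1.6-era catalyst API (not code from this patch): AttributeSet compares attributes by expression ID, so cosmetic differences in the name do not affect membership or equality, while Seq#toSet falls back to plain attribute equality.

```scala
import org.apache.spark.sql.catalyst.expressions.{AttributeReference, AttributeSet}
import org.apache.spark.sql.types.IntegerType

val a = AttributeReference("col", IntegerType)()
val upperCased = a.withName("COL")  // same expression ID, cosmetically different name

// AttributeSet keys on the expression ID, so these are treated as the same attribute.
AttributeSet(Seq(a)) == AttributeSet(Seq(upperCased))  // true

// A plain Set compares by attribute equality, which may see them as different values.
Seq(a).toSet == Seq(upperCased).toSet                  // may be false
```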

Contributor Author

Hi Michael: Sure, will make the changes.

@SparkQA

SparkQA commented Dec 24, 2015

Test build #48315 has finished for PR 10388 at commit 1e82c45.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@kevinyu98
Contributor Author

I deleted the test cases from DataFrameNaFunctionsSuite.scala. I checked the previous failure but am not sure why it failed; it worked when I ran the tests locally on my laptop:
$ build/sbt "test-only org.apache.spark.sql.thriftserver"
..
[success] Total time: 296 s, completed Dec 24, 2015 6:11:20 PM

Then I re-ran the SQL test suites, and they look fine:

$ build/sbt sql/test-only

[info] Passed: Total 1522, Failed 0, Errors 0, Passed 1522, Ignored 10
[success] Total time: 146 s, completed Dec 24, 2015 6:26:23 PM

@SparkQA

SparkQA commented Dec 25, 2015

Test build #48319 has finished for PR 10388 at commit 7f4a085.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@marmbrus
Contributor

Thanks, merging to master.

asfgit closed this in fd50df4 on Dec 28, 2015
asfgit pushed a commit that referenced this pull request Feb 1, 2016
…uildPartitionedTableScan

Author: Kevin Yu <[email protected]>

Closes #10388 from kevinyu98/spark-12231.

(cherry picked from commit fd50df4)
Signed-off-by: Cheng Lian <[email protected]>
@liancheng
Contributor

Cherry-picked to branch-1.6.
