[SPARK-15916][SQL] JDBC filter push down should respect operator precedence
This PR fixes a problem where operator precedence is lost when pushing the where-clause expression down to the JDBC layer.
**Case 1:**
For SQL `select * from table where (a or b) and c`, the where-clause is wrongly converted to the JDBC where-clause `a or (b and c)` after filter push down. The consequence is that JDBC may return fewer or more rows than expected.
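To make Case 1 concrete, here is a minimal sketch of the idea behind the fix (hypothetical helper names, not Spark's actual `JDBCRDD` code): every compiled sub-expression is wrapped in parentheses before the pieces are joined with `AND`, so the original precedence survives the translation.

```
import org.apache.spark.sql.sources.{And, EqualTo, Filter, Or}

// Hypothetical sketch: compile a pushed-down filter to a SQL fragment,
// parenthesizing every sub-expression so precedence is preserved.
def compileFilter(f: Filter): Option[String] = f match {
  case EqualTo(attr, value) => Some(s"$attr = '$value'")
  case Or(left, right) =>
    for (l <- compileFilter(left); r <- compileFilter(right)) yield s"($l) OR ($r)"
  case And(left, right) =>
    for (l <- compileFilter(left); r <- compileFilter(right)) yield s"($l) AND ($r)"
  case _ => None // unsupported filters are simply not pushed down
}

// Joining the compiled filters with AND, parenthesizing each one, keeps
// [Or(a, b), c] as "((a) OR (b)) AND (c)" instead of "a or b and c".
def filterWhereClause(filters: Array[Filter]): String =
  filters.flatMap(compileFilter).map(p => s"($p)").mkString(" AND ")
```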
**Case 2:**
For SQL `select * from table where always_false_condition`, the result may not be empty if the JDBC RDD is partitioned using per-partition where-clauses:
```
spark.read.jdbc(url, table,
  predicates = Array("partition 1 where clause", "partition 2 where clause" /* ... */),
  connectionProperties = new java.util.Properties())
```
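For Case 2, the per-partition predicate and the pushed-down filter clause need the same treatment. A minimal sketch, with hypothetical names, of how the two pieces could be combined so that an always-false filter stays always-false:

```
// Hypothetical sketch: combine the pushed-down filter clause with a
// partition's predicate, parenthesizing each piece, so that an OR inside
// the partition predicate cannot swallow the filter.
def buildWhereClause(filterClause: String, partitionPredicate: String): String = {
  val parts = Seq(filterClause, partitionPredicate).filter(_.nonEmpty).map(p => s"($p)")
  if (parts.isEmpty) "" else parts.mkString("WHERE ", " AND ", "")
}

// buildWhereClause("1 = 0", "id < 100 OR id >= 100")
//   => "WHERE (1 = 0) AND (id < 100 OR id >= 100)"  -- correctly empty result.
// Without the parentheses the clause would read
// "WHERE 1 = 0 AND id < 100 OR id >= 100", which still matches rows with id >= 100.
```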
Tested with unit tests.
This PR also closes #13640.
Author: hyukjinkwon <[email protected]>
Author: Sean Zhong <[email protected]>
Closes #13743 from clockfly/SPARK-15916.
(cherry picked from commit ebb9a3b)
Signed-off-by: Cheng Lian <[email protected]>
(cherry picked from commit b22b20d)
Signed-off-by: Dongjoon Hyun <[email protected]>