
Conversation

@c21
Contributor

@c21 c21 commented Feb 24, 2021

What changes were proposed in this pull request?

During code review of #31567 (comment), I found that we can push down the limit to the left side of LEFT SEMI and LEFT ANTI joins if the join condition is empty.

Why it's safe to push down limit:

The semantics of LEFT SEMI join without condition:
(1). if right side is non-empty, output all rows from left side.
(2). if right side is empty, output nothing.

The semantics of LEFT ANTI join without condition:
(1). if right side is non-empty, output nothing.
(2). if right side is empty, output all rows from left side.

Because the output is either all rows from the left side or nothing (all or nothing), it's safe to push down the limit to the left side.
NOTE: LEFT SEMI / LEFT ANTI join with a non-empty condition is not safe for limit pushdown, because the output can be just a portion of the left side's rows.

Reference: physical operator implementation for LEFT SEMI / LEFT ANTI join without condition - https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/joins/BroadcastNestedLoopJoinExec.scala#L200-L204 .
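
To make the change concrete, here is a minimal REPL-style sketch of the new branch (the surrounding LimitPushDown rule is abridged, and maybePushLocalLimit below is a simplified stand-in for the helper used in the diff further down):

import org.apache.spark.sql.catalyst.expressions.Expression
import org.apache.spark.sql.catalyst.plans.{LeftAnti, LeftSemi}
import org.apache.spark.sql.catalyst.plans.logical.{Join, LocalLimit, LogicalPlan}

// Simplified stand-in for the helper in the diff: wrap the child in a
// LocalLimit unless a local limit is already on top of it.
def maybePushLocalLimit(exp: Expression, plan: LogicalPlan): LogicalPlan = plan match {
  case LocalLimit(_, _) => plan
  case _                => LocalLimit(exp, plan)
}

// The essence of the new case: LEFT SEMI / LEFT ANTI without a join condition
// is all-or-nothing on the left side, so the limit can be pushed below the
// join. The limit above the join is kept, because the right side still decides
// at runtime whether any rows are emitted at all.
def pushLimitThroughLeftSemiAnti(plan: LogicalPlan): LogicalPlan = plan.transform {
  case LocalLimit(exp, join @ Join(left, _, LeftSemi | LeftAnti, None, _)) =>
    LocalLimit(exp, join.copy(left = maybePushLocalLimit(exp, left)))
}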

Why are the changes needed?

Better performance. This saves CPU and IO for these joins, as the limit is pushed down below the join.

Does this PR introduce any user-facing change?

No.

How was this patch tested?

Added unit tests in LimitPushdownSuite.scala and SQLQuerySuite.scala.

@c21
Contributor Author

c21 commented Feb 24, 2021

cc @wangyum, @maropu, @viirya, @HyukjinKwon and @cloud-fan for review if you have time, thanks.

@github-actions github-actions bot added the SQL label Feb 24, 2021
      left = maybePushLocalLimit(exp, left),
      right = maybePushLocalLimit(exp, right))
    case LeftSemi | LeftAnti if conditionOpt.isEmpty =>
      join.copy(left = maybePushLocalLimit(exp, left))
Member


hm, in this case, do we still need the join itself?

scala> sql("select * from l1").show()
+----+
|  id|
+----+
|   1|
|   2|
|null|
+----+


scala> sql("select * from r1").show()
+----+
|  id|
+----+
|   2|
|null|
+----+


scala> sql("select * from l1 left semi join r1").show()
+----+
|  id|
+----+
|   1|
|   2|
|null|
+----+


scala> sql("select * from l1 left anti join r1").show()
+---+
| id|
+---+
+---+

Contributor Author


I think we still need it. Whether to output all rows or nothing depends on whether the right side is empty, and that can only be known at runtime.

Contributor Author


@maropu - this actually reminds me that we can optimize further at runtime, and I found I already did it for LEFT SEMI with AQE - #29484 . Similarly, for a LEFT ANTI join without condition, we can convert the join logical plan node to an empty relation if the right build side is non-empty. Will submit a followup PR tomorrow.
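
As a rough sketch of that runtime rewrite (illustrative only, not the actual AQE rule added in #29484 or the followup; rightSideKnownNonEmpty is a hypothetical placeholder for AQE's runtime row-count check on the already-materialized right side):

import org.apache.spark.sql.catalyst.plans.LeftAnti
import org.apache.spark.sql.catalyst.plans.logical.{Join, LocalRelation, LogicalPlan}

// Hypothetical placeholder: with AQE, the right side's query stage has already
// run, so whether it produced any rows is known at optimization time.
def rightSideKnownNonEmpty(plan: LogicalPlan): Boolean = ???

def eliminateUnconditionalLeftAnti(plan: LogicalPlan): LogicalPlan = plan.transform {
  // LEFT ANTI with no condition and a non-empty right side emits no rows,
  // so the whole join can be replaced by an empty relation with the same output.
  case j @ Join(_, right, LeftAnti, None, _) if rightSideKnownNonEmpty(right) =>
    LocalRelation(j.output)
}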

In addition, after taking a deep look at BroadcastNestedLoopJoinExec (I had never looked closely at it because it's not popular in our environment), I found several places we can optimize:

  • populate outputOrdering and outputPartitioning when possible, to avoid shuffle/sort in later stages.
  • shortcut for LEFT SEMI/ANTI in defaultJoin(), as we don't need to look through all rows when there's no join condition (see the sketch below).
  • code-gen the operator.

I will file an umbrella JIRA with minor priority and do it gradually.
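
For the defaultJoin() shortcut mentioned above, a tiny pure-Scala illustration of the all-or-nothing evaluation (not the actual BroadcastNestedLoopJoinExec code): once the build side is materialized, each streamed partition is either passed through unchanged or dropped entirely, with no per-row work.

// Illustrative only: LEFT SEMI / LEFT ANTI without a join condition.
// `semi` is true for LEFT SEMI and false for LEFT ANTI.
def leftExistenceNoCondition[T](
    streamed: Iterator[T],
    buildSideNonEmpty: Boolean,
    semi: Boolean): Iterator[T] =
  if (buildSideNonEmpty == semi) streamed else Iterator.empty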

Member


Similarly, for a LEFT ANTI join without condition, we can convert the join logical plan node to an empty relation if the right build side is non-empty. Will submit a followup PR tomorrow.

Ah, I see. That sounds reasonable. Nice idea, @c21 .

spark.range(5).toDF().repartition(1).write.saveAsTable("left_table")
spark.range(3).write.saveAsTable("nonempty_right_table")
spark.range(0).write.saveAsTable("empty_right_table")
Seq("LEFT SEMI").foreach { joinType =>
Contributor


seems LEFT ANTI is missing

Contributor Author


@cloud-fan - good catch, I accidentally removed it while debugging; fixed.

@SparkQA

SparkQA commented Feb 24, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/39982/

@SparkQA

SparkQA commented Feb 24, 2021

Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/39982/

@SparkQA

SparkQA commented Feb 24, 2021

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/39989/

@SparkQA

SparkQA commented Feb 24, 2021

Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/39989/

@cloud-fan
Contributor

thanks, merging to master!

@cloud-fan cloud-fan closed this in 6ef57d3 Feb 24, 2021
@c21
Contributor Author

c21 commented Feb 24, 2021

Thank you all for review!

@c21 c21 deleted the limit-pushdown branch February 24, 2021 10:46
@SparkQA

SparkQA commented Feb 24, 2021

Test build #135409 has finished for PR 31630 at commit 22bfd5e.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

cloud-fan pushed a commit that referenced this pull request Feb 26, 2021
### What changes were proposed in this pull request?

From the review discussion - #31630 (comment) - I discovered that we can eliminate a LEFT ANTI join (with no join condition) to an empty relation, if the right side is known to be non-empty. With AQE, this is doable, similar to #29484 .

### Why are the changes needed?

This can help eliminate the join operator during logical plan optimization.
Before this PR, [the left side physical plan's `execute()` will be called](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/joins/BroadcastNestedLoopJoinExec.scala#L192), so if the left side is complicated (e.g. contains a broadcast exchange operator), then some computation would happen. However, after this PR, the join operator is removed during logical plan optimization, and nothing is computed from the left side. Potentially this can save resources for these kinds of queries.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Added unit tests for positive and negative queries in `AdaptiveQueryExecSuite.scala`.

Closes #31641 from c21/left-anti-aqe.

Authored-by: Cheng Su <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>