
Conversation

@cloud-fan
Contributor

What changes were proposed in this pull request?

There is a performance regression in Spark 2.3. When we read a big compressed text file that is un-splittable (e.g. gz) and then take the first record, Spark scans all the data in the text file, which is very slow. For example, with `spark.read.text("/tmp/test.csv.gz").head(1)` we can check the SQL UI and see that the file is fully scanned.

![image](https://user-images.githubusercontent.com/3182036/41445252-264b1e5a-6ffd-11e8-9a67-4c31d129a314.png)

This was introduced by #18955, which adds a `LocalLimit` to the query when executing `Dataset.head`. The fundamental problem is that `Limit` is not handled well by whole-stage codegen: it keeps consuming the input even after the limit has been hit.

However, if we just fix LIMIT in whole-stage codegen, the memory leak test will fail, because we no longer fully consume the inputs, and full consumption is what currently triggers the resource cleanup.

To fix it completely, we should do the following:

  1. fix LIMIT whole-stage codegen to stop consuming inputs after hitting the limit.
  2. in whole-stage codegen, provide a way to release the resources of the parent operator, and apply it in LIMIT.
  3. automatically release resources when a task ends.
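To make the problem in step 1 concrete, here is a small Python sketch (not Spark code; the operator names are illustrative) contrasting a limit that drains its child with one that stops pulling early:

```python
def scan(rows, log):
    """Child operator: yields rows and records how many were actually read."""
    for row in rows:
        log.append(row)
        yield row

def eager_limit(child, n):
    # Buggy shape: drains the child fully, then truncates the result.
    return list(child)[:n]

def early_limit(child, n):
    # Desired shape: stop pulling from the child once n rows are produced.
    out = []
    for row in child:
        out.append(row)
        if len(out) == n:
            break
    return out

data = list(range(1000))

log1 = []
eager_limit(scan(data, log1), 1)
print(len(log1))   # reads the whole input: 1000

log2 = []
early_limit(scan(data, log2), 1)
print(len(log2))   # reads only one row: 1
```

With an un-splittable gz file, the "eager" shape corresponds to decompressing and scanning the entire file just to return one row.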

However, this is a non-trivial change and is risky to backport to Spark 2.3.

This PR proposes to revert #18955 in Spark 2.3. The memory leak is not a big issue: when a task ends, Spark releases all the pages allocated by that task, which covers most of the resources.

I'll submit an exhaustive fix to master later.

How was this patch tested?

N/A

@cloud-fan
Contributor Author

@viirya
Member

viirya commented Jun 15, 2018

Hmm, this LGTM; unfortunately it didn't make it into the 2.3.1 release.

@viirya
Member

viirya commented Jun 15, 2018

I took a look at LocalLimit again; it seems it won't consume rows once it reaches the limit number?

Note: I see. Though LocalLimit itself doesn't consume rows, stopEarly is not properly called in the generated code, so the scan node still consumes rows.
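The observation above can be modeled in a few lines. This is a rough Python sketch (an assumption-laden stand-in, not Spark's actual generated code) of the stopEarly handshake: the limit consumer sets a flag once it has enough rows, and the scan's produce loop must check that flag or it keeps reading:

```python
class LimitConsumer:
    """Stand-in for a LocalLimit-style consumer in a codegen'd pipeline."""
    def __init__(self, n):
        self.n = n
        self.rows = []

    def consume(self, row):
        if len(self.rows) < self.n:
            self.rows.append(row)

    def stop_early(self):
        return len(self.rows) >= self.n

def scan_loop(data, consumer, check_stop_early):
    """Stand-in for the scan node's generated loop; returns rows read."""
    reads = 0
    for row in data:
        reads += 1
        consumer.consume(row)
        # The bug: when the loop never consults stopEarly, it scans everything
        # even though the limit has already discarded the extra rows.
        if check_stop_early and consumer.stop_early():
            break
    return reads

data = list(range(10_000))
print(scan_loop(data, LimitConsumer(1), check_stop_early=False))  # 10000
print(scan_loop(data, LimitConsumer(1), check_stop_early=True))   # 1
```

In both runs the limit emits only one row; the difference is purely in how much the scan reads, which matches the SQL UI observation above.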

@SparkQA

SparkQA commented Jun 15, 2018

Test build #91877 has finished for PR 21573 at commit 2b20b3c.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@hvanhovell
Contributor

LGTM - Merging to 2.3. Thanks!

What is the process for master?

asfgit pushed a commit that referenced this pull request Jun 15, 2018

Author: Wenchen Fan <[email protected]>

Closes #21573 from cloud-fan/limit.
@dongjoon-hyun
Member

Ya, it's too bad this missed 2.3.1.

@cloud-fan cloud-fan closed this Jun 15, 2018
@cloud-fan
Contributor Author

I'll send one or more PRs to master for the following things:

  1. automatically release resources when a task ends.
  2. fix LIMIT whole-stage codegen to stop consuming inputs after hitting the limit.
  3. in whole-stage codegen, provide a way to release the resources of the parent operator, and apply it in LIMIT.

asfgit pushed a commit that referenced this pull request Jul 10, 2018
## What changes were proposed in this pull request?

This is the first follow-up of #21573, which was only merged to 2.3.

This PR fixes the memory leak in another way: free the `UnsafeExternalMap` when the task ends. All the data buffers in Spark SQL use `UnsafeExternalMap` and `UnsafeExternalSorter` under the hood, e.g. sort, aggregate, window, SMJ, etc. `UnsafeExternalSorter` already registers a task completion listener to free its resources; we should apply the same thing to `UnsafeExternalMap`.
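The task-completion-listener pattern described above can be sketched in a few lines of Python (a hypothetical model; `TaskContext` and `ExternalMap` here are illustrative stand-ins, not Spark's classes): each resource registers a cleanup callback at construction, and the task context fires all callbacks when the task finishes, so nothing leaks even if an operator stops consuming input early.

```python
class TaskContext:
    """Stand-in for a per-task context that tracks cleanup callbacks."""
    def __init__(self):
        self._listeners = []

    def add_task_completion_listener(self, fn):
        self._listeners.append(fn)

    def mark_task_completed(self):
        # Run every registered callback, regardless of how the task's
        # operators finished (fully drained or stopped early by a limit).
        for fn in self._listeners:
            fn()

class ExternalMap:
    """Stand-in for a data buffer like UnsafeExternalMap."""
    def __init__(self, ctx):
        self.freed = False
        # Register cleanup up front, so freeing no longer depends on the
        # downstream operators consuming all of this buffer's output.
        ctx.add_task_completion_listener(self.free)

    def free(self):
        self.freed = True

ctx = TaskContext()
m = ExternalMap(ctx)
ctx.mark_task_completed()
print(m.freed)  # True
```

The design point is that cleanup is tied to task lifetime rather than to input exhaustion, which is what decouples the LIMIT fix from the memory leak test.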

TODO in the next PR:
do not consume all the inputs when there is a limit in whole-stage codegen.

## How was this patch tested?

existing tests

Author: Wenchen Fan <[email protected]>

Closes #21738 from cloud-fan/limit.
otterc pushed a commit to linkedin/spark that referenced this pull request Mar 22, 2023

(cherry picked from commit d3255a5)
