
Conversation

@patmcdonough

  • create a new property that sets the minimum amount of heap reserved for the system/application: spark.system.memoryReservedSize
  • account for that value prior to calculating the memory available for storage and shuffle
  • set the new property to 300m by default (based on what we are seeing in a local spark-shell running JDK7 with the spark-0.9.0-hadoop-2 binary distribution)
  • increase the default spark.executor.memory beyond 512m, since we are going to reserve over half of that for Spark itself

This is marked as WIP as testing is still in progress. I was still able to cause an OOM by running a count distinct on the dataset linked to in the JIRA, but bumping spark.system.memoryReservedSize to 350m prevented it.
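
For illustration only, a rough sketch of the intended accounting (hypothetical helper names, not the actual patch): subtract the reservation from the executor heap before applying spark.storage.memoryFraction.

object ReservedMemorySketch {
  // "300m" -> bytes; helper assumed for illustration only.
  private def parseMb(s: String): Long =
    s.trim.toLowerCase.stripSuffix("m").toLong * 1024L * 1024L

  def maxStorageMemory(conf: Map[String, String]): Long = {
    val heapMax  = Runtime.getRuntime.maxMemory()   // executor heap (-Xmx)
    val reserved = parseMb(conf.getOrElse("spark.system.memoryReservedSize", "300m"))
    val usable   = math.max(heapMax - reserved, 0L) // heap left after the reservation
    val fraction = conf.getOrElse("spark.storage.memoryFraction", "0.6").toDouble
    (usable * fraction).toLong
  }

  def main(args: Array[String]): Unit =
    println(s"storage region: ${maxStorageMemory(Map.empty) / (1024 * 1024)} MB")
}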

… portion of the heap for system objects, especially in the case of smaller heaps (like the out-of-the-box configuration for the Spark shell).
@AmplabJenkins

Can one of the admins verify this patch?

Contributor

nit: import Utils instead

Author

It looks like org.apache.spark.util.collection.Utils is getting in the way. org.apache.spark.util.collection.Utils.takeOrdered is only used in one place, so perhaps it can be removed at some point.

@tdas
Contributor

tdas commented Apr 25, 2014

@patmcdonough @pwendell Shouldn't this make it to 1.0?

@sryza
Contributor

sryza commented Apr 25, 2014

"spark.system.reservedMemorySize" seems inconsistent with "spark.executor.memory". Something like "spark.executor.system.memory" would maybe be more clear?

Contributor

This sentence reads a little weird. Move "if spark.shuffle.spill is true," to the front?

@sryza
Contributor

sryza commented Apr 25, 2014

Oh though spark.system.reservedMemorySize is more consistent with spark.storage.memoryFraction. Maybe just take out "reserved"?

@patmcdonough
Author

FYI @pwendell, @tdas, @sryza - just now circling back to this and finding it hasn't made its way into 1.0.

It is probably better suited for the next release if it missed the perf testing.

@pwendell
Contributor

Hey I spent a lot of time trying to recreate this issue last week to better understand what is causing it. I can do a write-up on the JIRA, but the long-and-short of it is that this error is caused primarily by the issue described in https://issues.apache.org/jira/browse/SPARK-1777. This is a bigger problem and it's not fixed in the general case by the fix here.

The code for Spark itself (i.e. the permgen) is totally outside of the heap space, so the premise here that the code for Spark is taking up heap is not correct. There are extra on-heap data structures, e.g. ZipFile entries for Spark's classes, but those consume about 50MB in my experiments for both Hadoop1 and Hadoop2 builds (only about 2MB extra for Hadoop2). I tested this by starting a spark shell, manually running a few GCs, and then profiling the heap.

The issue in this case is that the individual partitions are pretty large (50-150MB) compared to the size of the heap and we unroll an entire partition when Spark is already "on the fringe" of available memory. I think adding these extra limits just coincidentally works for this exact input, but won't help other users who are running into this problem.

I also noticed a second issue: with character arrays (produced by textFile), we slightly underestimate the size, which exacerbates this problem. The root cause is unknown at this point - I have a fairly extensive debugging log and I'd like to get to the bottom of it after 1.0.
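
To make the failure mode concrete, here is a simplified, hypothetical sketch (not the actual Spark code; names are made up) of the alternative direction: unroll a partition incrementally and re-estimate its size periodically, rather than materializing the whole partition before checking memory.

import scala.collection.mutable.ArrayBuffer

object UnrollSketch {
  def tryUnroll[T](values: Iterator[T],
                   estimateSize: Seq[T] => Long,
                   memoryAvailable: Long): Either[Iterator[T], Seq[T]] = {
    val buffer = new ArrayBuffer[T]
    var count = 0L
    while (values.hasNext) {
      buffer += values.next()
      count += 1
      // Re-estimate periodically rather than on every element.
      if (count % 1000 == 0 && estimateSize(buffer) > memoryAvailable) {
        // Too big to cache in memory: hand back what was seen plus the rest.
        return Left(buffer.iterator ++ values)
      }
    }
    Right(buffer)
  }
}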

@patmcdonough
Author

Thanks for the update. That all lines up very well with what I was seeing in tests, including the under-estimate of the size as well as the point where the OOM occurs (while estimating size).

Looking forward to the long-term fix here. It sounds like this PR can be closed.


@ash211
Contributor

ash211 commented May 14, 2014

Quote:

I also noticed a second issue: with character arrays (produced by textFile), we slightly underestimate the size, which exacerbates this problem. The root cause is unknown at this point - I have a fairly extensive debugging log and I'd like to get to the bottom of it after 1.0.

I've definitely noticed this too, and we now have the common practice of setting spark.storage.memoryFraction to 0.4 (vs. the default 0.6) in order to prevent frequent OOMs when reading from text files.
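
For reference, one way to apply that mitigation in a standalone application (standard SparkConf API; the app name is just a placeholder):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("text-heavy-job")               // placeholder name
  .set("spark.storage.memoryFraction", "0.4") // down from the 0.6 default
val sc = new SparkContext(conf)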


@shivaram
Contributor

@ash211 @pwendell Do you have a simple test case to show textFile memory usage being greater than expected? There might have been some JVM changes which affect the SizeEstimator, but it's easier to debug if we have a simple test case.

@ash211
Contributor

ash211 commented May 14, 2014

I don't have one, my apologies. I played around with a generated file like the one below to see what I could come up with, but didn't find anything.

$ perl -e 'print ((("a" x 200) . "\n") x 1000000)' | hadoop fs -put - /tmp/file.txt
$ hadoop fs -ls /tmp/file.txt
Found 1 items
-rw-r--r--   3 user group  201000000 2014-05-13 23:26 /tmp/file.txt
$ ./bin/spark-shell
scala> val f = sc.textFile("hdfs:///tmp/file.txt").cache
14/05/13 23:31:38 INFO storage.MemoryStore: ensureFreeSpace(80202) called with curMem=0, maxMem=206150041
14/05/13 23:31:38 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 78.3 KB, free 196.5 MB)
f: org.apache.spark.rdd.RDD[String] = MappedRDD[1] at textFile at <console>:12

scala> f.count
14/05/13 23:31:43 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
14/05/13 23:31:43 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev 54497d3f865a6bf89abd843e1d8441f84d844458]
14/05/13 23:31:43 INFO mapred.FileInputFormat: Total input paths to process : 1
14/05/13 23:31:43 INFO spark.SparkContext: Starting job: count at <console>:15
14/05/13 23:31:43 INFO scheduler.DAGScheduler: Got job 0 (count at <console>:15) with 2 output partitions (allowLocal=false)
14/05/13 23:31:43 INFO scheduler.DAGScheduler: Final stage: Stage 0(count at <console>:15)
14/05/13 23:31:43 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/05/13 23:31:43 INFO scheduler.DAGScheduler: Missing parents: List()
14/05/13 23:31:43 INFO scheduler.DAGScheduler: Submitting Stage 0 (MappedRDD[1] at textFile at <console>:12), which has no missing parents
14/05/13 23:31:43 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 0 (MappedRDD[1] at textFile at <console>:12)
14/05/13 23:31:43 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
14/05/13 23:31:43 INFO scheduler.TaskSetManager: Starting task 0.0:0 as TID 0 on executor 2: machine06.localdomain (NODE_LOCAL)
14/05/13 23:31:43 INFO scheduler.TaskSetManager: Serialized task 0.0:0 as 1777 bytes in 2 ms
14/05/13 23:31:43 INFO scheduler.TaskSetManager: Starting task 0.0:1 as TID 1 on executor 1: machine04.localdomain (NODE_LOCAL)
14/05/13 23:31:43 INFO scheduler.TaskSetManager: Serialized task 0.0:1 as 1777 bytes in 0 ms
14/05/13 23:31:46 INFO storage.BlockManagerInfo: Added rdd_1_1 in memory on machine04.localdomain:52794 (size: 225.2 MB, free: 12.0 GB)
14/05/13 23:31:46 INFO storage.BlockManagerInfo: Added rdd_1_0 in memory on machine06.localdomain:2364 (size: 225.2 MB, free: 12.0 GB)
14/05/13 23:31:46 INFO scheduler.TaskSetManager: Finished TID 1 in 2214 ms on machine04.localdomain (progress: 1/2)
14/05/13 23:31:46 INFO scheduler.DAGScheduler: Completed ResultTask(0, 1)
14/05/13 23:31:46 INFO scheduler.DAGScheduler: Completed ResultTask(0, 0)
14/05/13 23:31:46 INFO scheduler.TaskSetManager: Finished TID 0 in 2279 ms on machine06.localdomain (progress: 2/2)
14/05/13 23:31:46 INFO scheduler.DAGScheduler: Stage 0 (count at <console>:15) finished in 2.289 s
14/05/13 23:31:46 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/05/13 23:31:46 INFO spark.SparkContext: Job finished: count at <console>:15, took 2.351118993 s
res0: Long = 1000000

scala>


@patmcdonough
Author

I put together a test using some files from the Wikipedia dump, described in https://issues.apache.org/jira/browse/SPARK-1392, but I was using VisualVM to observe actual heap sizes, so it is not exactly the simplest test case.

@pwendell
Contributor

Hey @patmcdonough - by the way, I should have led off my comments by thanking you for the good bug report with a nice test case. This made it really easy for me to reproduce this locally and play around.

@patmcdonough
Author

@pwendell - you're too kind, it was my pleasure.

@shivaram - to add a bit of info about the toolchain I referred to in my previous comment: I was using the data and commands outlined in the JIRA, creating a remote debug config in IntelliJ, adding the debug options to conf/java-opts in Spark, setting a breakpoint somewhere in the size-estimation code path, and then observing the heap using VisualVM (and the VisualGC plugin). It's probably only about 10 minutes of setup, but obviously not automated at all.

Unfortunately, I didn't take notes on this, but IIRC, the internal size estimate did not line up with what was actually on the heap as reported by the JVM.

@shivaram
Contributor

Thanks @patmcdonough. There is one known source of error: when we have a large array, we sample some items from it for size estimation and then scale by the number of items. When I find some time, I'll try to see if there are other problems.
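
To make that concrete, a toy sketch of sampling-based size estimation (illustrative only, not SizeEstimator itself):

import scala.util.Random

object SampledSizeSketch {
  // Size a small random sample of elements and scale up by the array length.
  // When element sizes vary, the scaled estimate can drift from the true total.
  def estimate[T](arr: Array[T], sizeOf: T => Long, sampleSize: Int = 100): Long = {
    val n = arr.length
    if (n == 0) 0L
    else if (n <= sampleSize) arr.iterator.map(sizeOf).sum
    else {
      val rng = new Random(42)
      val sampledTotal = (1 to sampleSize).map(_ => sizeOf(arr(rng.nextInt(n)))).sum
      (sampledTotal.toDouble / sampleSize * n).toLong
    }
  }
}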

@pwendell
Contributor

pwendell commented Aug 1, 2014

I think we can close this issue now; it has been fixed by #1165.
