[WIP] Spark-1392: Add parameter to reserve minimum memory for the system and increase default executor memory #377
Conversation
… portion of the heap for system objects, especially in the case of smaller heaps (like the out-of-the-box configuration for the Spark shell).
Can one of the admins verify this patch?
nit: import Utils instead
It looks like org.apache.spark.util.collection.Utils is getting in the way. org.apache.spark.util.collection.Utils.takeOrdered is only used in one place, so perhaps it can be removed at some point.
@patmcdonough @pwendell Shouldn't this make it to 1.0?
"spark.system.reservedMemorySize" seems inconsistent with "spark.executor.memory". Something like "spark.executor.system.memory" would maybe be more clear? |
This sentence reads a little weird. Move "if spark.shuffle.spill is true," to the front?
Oh, though spark.system.reservedMemorySize is more consistent with spark.storage.memoryFraction. Maybe just take out "reserved"?
Hey, I spent a lot of time trying to recreate this issue last week to better understand what is causing it. I can do a write-up on the JIRA, but the long and short of it is that this error is caused primarily by the issue described in https://issues.apache.org/jira/browse/SPARK-1777. That is a bigger problem, and it's not fixed in the general case by the fix here.

The code for Spark itself (i.e. the permgen) is entirely outside of the heap space, so the premise here that Spark's code is taking up heap is not correct. There are extra on-heap data structures, e.g. ZipFile entries for Spark's classes, but those consume only about 50MB in my experiments for both Hadoop1 and Hadoop2 builds (only about 2MB extra for Hadoop2). I tested this by starting a spark shell, manually running a few GCs, and then profiling the heap.

The issue in this case is that the individual partitions are pretty large (50-150MB) compared to the size of the heap, and we unroll an entire partition when Spark is already "on the fringe" of available memory. I think adding these extra limits just coincidentally works for this exact input, but it won't help other users who are running into this problem. I also noticed a second issue: with character arrays (produced by textFile), we slightly underestimate the size, which exacerbates this problem. The root cause is unknown at this point - I have a fairly extensive debugging log and I'd like to get to the bottom of it after 1.0.
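A minimal sketch of the failure mode described above: caching a text file whose individual partitions are large relative to the heap, so unrolling a single partition tips the JVM over. This assumes the spark-shell's predefined SparkContext `sc`; the input path and partition counts are placeholders, not the dataset from the JIRA.

```scala
import org.apache.spark.storage.StorageLevel

// Few, large splits: unrolling one 50-150MB partition while already near the
// memory fringe can push the shell over into an OOM.
val lines = sc.textFile("/tmp/large-input.txt")
lines.persist(StorageLevel.MEMORY_ONLY)
lines.count()

// Splitting the input into more, smaller partitions before caching shrinks the
// unroll unit and usually avoids the OOM without touching any memory settings.
val finer = sc.textFile("/tmp/large-input.txt", 64)
finer.persist(StorageLevel.MEMORY_ONLY)
finer.count()
```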
Thanks for the update. That all lines up very well with what I was seeing. Looking forward to the long-term fix here. It sounds like this PR can be …
I've definitely noticed this too, and we now have the common practice of …
I don't have one, my apologies. I played around with a generated file like …
I put together a test using some files from the Wikipedia dump, described in https://issues.apache.org/jira/browse/SPARK-1392, but I was using VisualVM to observe actual heap sizes, so it's not exactly the simplest test case.
Hey @patmcdonough - by the way, I should have led my comments by thanking you for the good bug report with a nice test case. This made it really easy for me to reproduce this locally and play around.
@pwendell - you're too kind, it was my pleasure. @shivaram - to add a bit of info about the toolchain I referred to in my previous comment: I was using the data and commands outlined in the JIRA, creating a remote debug config in IntelliJ, adding the debug options to conf/java-opts in Spark, setting a breakpoint somewhere in the size estimation code path, and then observing the heap using VisualVM (and the VisualGC plugin). It's probably only about 10 minutes of set-up, but obviously not automated at all. Unfortunately, I didn't take notes on this, but IIRC the internal size estimate did not line up with what was actually on the heap as reported by the JVM.
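For anyone wanting to reproduce that setup, the JVM side amounts to a couple of standard flags appended to conf/java-opts; the debug port and the GC-logging options below are illustrative guesses, not the exact configuration used in the test above.

```sh
# Append remote-debug and GC-logging flags to Spark's extra JVM options
# (port 5005 is an arbitrary choice; attach IntelliJ's remote-debug config to it).
echo "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -verbose:gc -XX:+PrintGCDetails" >> conf/java-opts
```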
Thanks @patmcdonough. There is one known source of error: when we have a large array, we sample some items from it for size estimation and then scale by the number of items. When I find some time, I'll try to see if there are other problems.
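To make that sampling error concrete, here is a toy sketch (not Spark's actual SizeEstimator) of sample-and-scale estimation over an array with skewed element sizes; the per-String cost model and the counts are made up purely for illustration.

```scala
object SampleSizeSketch {
  def main(args: Array[String]): Unit = {
    val rng = new scala.util.Random(42)
    // Mostly short strings with a few very long ones -- a skewed size distribution
    // like variable-length character arrays coming out of textFile.
    val data: Array[String] = Array.tabulate(100000) { i =>
      if (i % 1000 == 0) "x" * 100000 else "x" * 10
    }
    // Rough, made-up per-String heap cost: fixed overhead plus 2 bytes per char.
    def approxBytes(s: String): Long = 40L + 2L * s.length

    val trueSize = data.map(approxBytes).sum
    // Sample a handful of elements and extrapolate to the whole array.
    val sample   = Array.fill(100)(data(rng.nextInt(data.length)))
    val estimate = sample.map(approxBytes).sum / sample.length * data.length

    // Depending on which elements land in the sample, the estimate can be far
    // below or above the true total.
    println(s"true ~ $trueSize bytes, sample-and-scale estimate ~ $estimate bytes")
  }
}
```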
I think we can close this issue now; it has been fixed by #1165.
This is marked as WIP as testing is still in progress. I was still able to cause an OOM by running a count distinct on the dataset linked to in the JIRA, but bumping spark.system.memoryReservedSize to 350m prevented it.
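For completeness, this is roughly how the proposed setting would be supplied from application code. The property name is taken from this WIP patch (the thread spells it both spark.system.memoryReservedSize and spark.system.reservedMemorySize), and since the PR was closed in favor of #1165 it may not be recognized by any released Spark, so treat this as an illustration of the workaround only.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustration only: the reserved-memory property comes from this unmerged patch.
val conf = new SparkConf()
  .setMaster("local[*]")                            // local mode just for the example
  .setAppName("reserved-memory-workaround")
  .set("spark.executor.memory", "1g")               // executor heap size
  .set("spark.system.memoryReservedSize", "350m")   // value that avoided the OOM above

val sc = new SparkContext(conf)
```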