Conversation

@andrewor14
Contributor

The existing `spark.memory.fraction` (default 0.75) gives the system 25% of the space to work with. For small heaps, this is not enough: e.g. the default 1GB heap leaves only 250MB of system memory. This is especially a problem in local mode, where the driver and executor are crammed into the same JVM. Members of the community have reported driver OOMs in such cases.

**New proposal.** We now reserve 300MB before taking the 75%. For 1GB JVMs, this leaves `(1024 - 300) * 0.75 = 543MB` for execution and storage. This is proposal (1) listed in the [JIRA](https://issues.apache.org/jira/browse/SPARK-12081).

The new space available to storage and execution will be calculated
as (JVM size - 300MB) * 75%, where 75% is `spark.memory.fraction`.
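
A minimal sketch of the arithmetic, with illustrative variable names (the actual constants live in `UnifiedMemoryManager`):

```scala
// Sketch of the proposed calculation; names are illustrative, not Spark's.
val reservedMemory = 300L * 1024 * 1024              // 300MB reserved up front
val memoryFraction = 0.75                            // spark.memory.fraction default
val systemMemory   = 1024L * 1024 * 1024             // e.g. a 1GB JVM heap
val usableMemory   = systemMemory - reservedMemory   // 724MB
val maxMemory      = (usableMemory * memoryFraction).toLong // ~543MB for execution + storage
```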
@andrewor14
Contributor Author

@davies @rxin

@davies
Contributor

davies commented Dec 2, 2015

LGTM

Contributor

You should document what this is and update the classdoc for `UnifiedMemoryManager`.

@SparkQA

SparkQA commented Dec 2, 2015

Test build #2141 has started for PR 10081 at commit 1571877.

@SparkQA

SparkQA commented Dec 2, 2015

Test build #2140 has started for PR 10081 at commit 1571877.

@SparkQA

SparkQA commented Dec 2, 2015

Test build #2139 has finished for PR 10081 at commit 1571877.

  • This patch fails PySpark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Dec 2, 2015

Test build #47025 has finished for PR 10081 at commit 2e9bc4d.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@andrewor14
Contributor Author

Thanks, I'm merging into master and 1.6.

asfgit pushed a commit that referenced this pull request Dec 2, 2015
The existing `spark.memory.fraction` (default 0.75) gives the system 25% of the space to work with. For small heaps, this is not enough: e.g. the default 1GB heap leaves only 250MB of system memory. This is especially a problem in local mode, where the driver and executor are crammed into the same JVM. Members of the community have reported driver OOMs in such cases.

**New proposal.** We now reserve 300MB before taking the 75%. For 1GB JVMs, this leaves `(1024 - 300) * 0.75 = 543MB` for execution and storage. This is proposal (1) listed in the [JIRA](https://issues.apache.org/jira/browse/SPARK-12081).

Author: Andrew Or <[email protected]>

Closes #10081 from andrewor14/unified-memory-small-heaps.

(cherry picked from commit d96f8c9)
Signed-off-by: Andrew Or <[email protected]>
@asfgit asfgit closed this in d96f8c9 Dec 2, 2015
@SparkQA

SparkQA commented Dec 2, 2015

Test build #47014 has finished for PR 10081 at commit 1571877.

  • This patch fails from timeout after a configured wait of 250m.
  • This patch merges cleanly.
  • This patch adds no public classes.

Contributor

For a non-small heap, the memory above doesn't need to be reserved, right?

@andrewor14 andrewor14 deleted the unified-memory-small-heaps branch December 2, 2015 18:45
```scala
val minSystemMemory = reservedMemory * 1.5
if (systemMemory < minSystemMemory) {
  throw new IllegalArgumentException(s"System memory $systemMemory must " +
    s"be at least $minSystemMemory. Please use a larger heap size.")
}
```


This comes out as scientific notation, which made me laugh.
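
The scientific notation comes from `Long * Double` promoting to `Double`, whose `toString` uses exponent form for large magnitudes; a small sketch:

```scala
// Long * Double promotes to Double, and Double.toString prints
// large values in exponent form, hence the odd error message.
val reservedMemory  = 300L * 1024 * 1024      // 314572800 (a Long)
val minSystemMemory = reservedMemory * 1.5    // 4.718592E8 (a Double)
println(s"must be at least $minSystemMemory") // prints "must be at least 4.718592E8"
```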
