@@ -302,8 +302,8 @@ package object config {
      .createWithDefaultString("1g")

  private[spark] val EXECUTOR_MEMORY_OVERHEAD = ConfigBuilder("spark.executor.memoryOverhead")
-    .doc("The amount of non-heap memory to be allocated per executor in cluster mode, " +
-      "in MiB unless otherwise specified.")
+    .doc("The amount of non-heap memory to be allocated per executor, in MiB unless otherwise" +
+      " specified.")
    .version("2.3.0")
    .bytesConf(ByteUnit.MiB)
    .createOptional

Review comment (Member): Does this change mean that non-cluster mode also refers to this value?

Reply (Member Author): Yes, the 'client' deploy mode respects this too.
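For readers who want to try the clarified behavior (client deploy mode also respects the setting, per the thread above), a minimal sketch of supplying the option from application code; the memory values are illustrative and not part of this change:

import org.apache.spark.{SparkConf, SparkContext}

// Illustrative values, not from this PR. Applies in both cluster and
// client deploy modes, per the review discussion above.
val conf = new SparkConf()
  .setAppName("memory-overhead-example")
  .set("spark.executor.memory", "4g")
  // Plain numbers are read as MiB; unit suffixes such as "512m" also work.
  .set("spark.executor.memoryOverhead", "512")

val sc = new SparkContext(conf)

When the overhead is left unset, Spark falls back to the default described in the docs change below.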
docs/configuration.md (7 changes: 3 additions & 4 deletions)

@@ -274,10 +274,9 @@ of the most common options to set are:
  <td><code>spark.executor.memoryOverhead</code></td>
  <td>executorMemory * 0.10, with minimum of 384</td>
  <td>
-    Amount of additional memory to be allocated per executor process in cluster mode, in MiB unless
-    otherwise specified. This is memory that accounts for things like VM overheads, interned strings,
-    other native overheads, etc. This tends to grow with the executor size (typically 6-10%).
-    This option is currently supported on YARN and Kubernetes.
+    Amount of additional memory to be allocated per executor process, in MiB unless otherwise specified.
+    This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc.
+    This tends to grow with the executor size (typically 6-10%). This option is currently supported on YARN and Kubernetes.
    <br/>
    <em>Note:</em> Additional memory includes PySpark executor memory
    (when <code>spark.executor.pyspark.memory</code> is not configured) and memory used by other
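As a quick illustration of the default in the table above (executorMemory * 0.10 with a 384 MiB floor), a sketch of the fallback arithmetic; defaultOverheadMiB is a hypothetical helper for illustration, not Spark's internal API:

// Hypothetical helper mirroring the documented default; not Spark's internal code.
def defaultOverheadMiB(executorMemoryMiB: Long): Long =
  math.max((executorMemoryMiB * 0.10).toLong, 384L)

defaultOverheadMiB(4096) // 4g executor -> 409 MiB
defaultOverheadMiB(2048) // 2g executor -> 384 MiB (10% is only 204, so the floor applies)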