
Commit 4335af0

yaooqinn authored and maropu committed
[MINOR][DOC] spark.executor.memoryOverhead is not cluster-mode only
### What changes were proposed in this pull request?

Remove "in cluster mode" from the description of `spark.executor.memoryOverhead`.

### Why are the changes needed?

Fix a correctness issue in the documentation.

### Does this PR introduce _any_ user-facing change?

Yes; users will no longer be confused by the description of `spark.executor.memoryOverhead`.

### How was this patch tested?

Passes GA doc generation.

Closes #30311 from yaooqinn/minordoc.

Authored-by: Kent Yao <[email protected]>
Signed-off-by: Takeshi Yamamuro <[email protected]>
1 parent 6d31dae commit 4335af0


2 files changed (+5, -6 lines)


core/src/main/scala/org/apache/spark/internal/config/package.scala

Lines changed: 2 additions & 2 deletions
@@ -302,8 +302,8 @@ package object config {
       .createWithDefaultString("1g")

   private[spark] val EXECUTOR_MEMORY_OVERHEAD = ConfigBuilder("spark.executor.memoryOverhead")
-    .doc("The amount of non-heap memory to be allocated per executor in cluster mode, " +
-      "in MiB unless otherwise specified.")
+    .doc("The amount of non-heap memory to be allocated per executor, in MiB unless otherwise" +
+      " specified.")
     .version("2.3.0")
     .bytesConf(ByteUnit.MiB)
     .createOptional
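
Since the entry is `createOptional`, the effective overhead when it is unset is computed on the cluster-manager side from the documented default of `executorMemory * 0.10` with a 384 MiB floor. A minimal sketch of that derivation (an illustration only, not the actual Spark resource-manager code):

    // Illustrative helper: 10% of executor memory, floored at 384 MiB,
    // matching the documented default "executorMemory * 0.10, with minimum of 384".
    def defaultOverheadMiB(executorMemoryMiB: Long): Long =
      math.max((executorMemoryMiB * 0.10).toLong, 384L)

    defaultOverheadMiB(4096)  // ~409 MiB for a 4g executor
    defaultOverheadMiB(1024)  // 384 MiB, the floor, for a 1g executor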

docs/configuration.md

Lines changed: 3 additions & 4 deletions
@@ -274,10 +274,9 @@ of the most common options to set are:
   <td><code>spark.executor.memoryOverhead</code></td>
   <td>executorMemory * 0.10, with minimum of 384 </td>
   <td>
-    Amount of additional memory to be allocated per executor process in cluster mode, in MiB unless
-    otherwise specified. This is memory that accounts for things like VM overheads, interned strings,
-    other native overheads, etc. This tends to grow with the executor size (typically 6-10%).
-    This option is currently supported on YARN and Kubernetes.
+    Amount of additional memory to be allocated per executor process, in MiB unless otherwise specified.
+    This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc.
+    This tends to grow with the executor size (typically 6-10%). This option is currently supported on YARN and Kubernetes.
     <br/>
     <em>Note:</em> Additional memory includes PySpark executor memory
     (when <code>spark.executor.pyspark.memory</code> is not configured) and memory used by other
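
For reference, on a deployment where the option is honored (YARN or Kubernetes), overriding the default is ordinary config plumbing. A hedged sketch using the public `SparkSession` builder (the app name and values are made up for illustration):

    import org.apache.spark.sql.SparkSession

    // Explicitly size the off-heap overhead instead of relying on the
    // executorMemory * 0.10 (min 384 MiB) default; per this change, the
    // setting is not limited to cluster deploy mode.
    val spark = SparkSession.builder()
      .appName("memory-overhead-example")               // hypothetical app name
      .config("spark.executor.memory", "4g")
      .config("spark.executor.memoryOverhead", "512m")
      .getOrCreate()

The same value could equally be passed at submit time with `--conf spark.executor.memoryOverhead=512m`.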

0 commit comments
