3 changes: 2 additions & 1 deletion core/src/main/scala/org/apache/spark/SparkEnv.scala
@@ -197,10 +197,11 @@ object SparkEnv extends Logging {
       numCores: Int,
       ioEncryptionKey: Option[Array[Byte]],
       isLocal: Boolean): SparkEnv = {
+    val bindAddress = conf.get(EXECUTOR_BIND_ADDRESS)
     val env = create(
       conf,
       executorId,
-      hostname,
+      bindAddress,
       hostname,
       None,
       isLocal,
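The hunk above separates the address an executor listens on from the address it advertises. As a standalone sketch of that idea (hypothetical types, not Spark's actual API): the executor binds its sockets to the configured bind address when one is set, but still advertises its hostname to the driver and other executors.

```scala
// Hypothetical sketch, not Spark's real classes: models how an optional
// configured bind address is used for listening, while the hostname remains
// the address advertised to peers.
object BindAddressSketch {
  final case class ExecutorEndpoint(bindAddress: String, advertiseAddress: String)

  def chooseAddresses(hostname: String, configuredBindAddress: Option[String]): ExecutorEndpoint =
    ExecutorEndpoint(
      bindAddress = configuredBindAddress.getOrElse(hostname), // default mirrors Utils.localHostName()
      advertiseAddress = hostname
    )
}
```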
@@ -95,6 +95,11 @@ package object config {
     .bytesConf(ByteUnit.MiB)
     .createOptional

+  private[spark] val EXECUTOR_BIND_ADDRESS = ConfigBuilder("spark.executor.bindAddress")
+    .doc("Address where to bind network listen sockets on the executor.")
+    .stringConf
+    .createWithDefault(Utils.localHostName())
+
   private[spark] val MEMORY_OFFHEAP_ENABLED = ConfigBuilder("spark.memory.offHeap.enabled")
     .doc("If true, Spark will attempt to use off-heap memory for certain operations. " +
       "If off-heap memory use is enabled, then spark.memory.offHeap.size must be positive.")
12 changes: 12 additions & 0 deletions docs/configuration.md
@@ -189,6 +189,18 @@ of the most common options to set are:
     This option is currently supported on YARN and Kubernetes.
   </td>
 </tr>
+<tr>
+  <td><code>spark.executor.bindAddress</code></td>
+  <td>(local hostname)</td>
+  <td>
+    Hostname or IP address where to bind listening sockets. This config overrides the SPARK_LOCAL_IP
+    environment variable (see below).
+    <br />It also allows a different address from the local one to be advertised to the driver,
+    other executors, or external systems. This is useful, for example, when running containers with
+    bridged networking. For this to work properly, the ports used by the executor (RPC and block
+    manager) need to be forwarded from the container's host.
+  </td>
+</tr>
 <tr>
   <td><code>spark.extraListeners</code></td>
   <td>(none)</td>
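With the documentation entry above in place, the new key is set like any other Spark configuration; for example (the bind address below is illustrative):

```
# spark-defaults.conf
spark.executor.bindAddress  0.0.0.0
```

or equivalently `--conf spark.executor.bindAddress=0.0.0.0` on the spark-submit command line.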