+| Property Name | Default | Meaning |
+| --- | --- | --- |
+| `spark.executor.instances` | 2 | The number of executors for static allocation. With `spark.dynamicAllocation.enabled`, the initial set of executors will be at least this large. |
+| `spark.nomad.cluster.expectImmediateScheduling` | false | When true, `spark-submit` will fail if Nomad is not able to schedule the job to run right away. |
+| `spark.nomad.cluster.monitorUntil` | `submitted` | How long `spark-submit` should monitor a Spark application in cluster mode. `submitted` causes `spark-submit` to return as soon as the application has been submitted to the Nomad cluster; `scheduled` causes it to return once the Nomad job has been scheduled; `complete` causes it to tail the output from the driver process and return when the job has completed. |
+| `spark.nomad.datacenters` | | Comma-separated list of Nomad datacenters to use (defaults to the datacenter of the first Nomad server contacted). |
+| `spark.nomad.docker.email` | | Email address used when downloading the Docker image specified by `spark.nomad.dockerImage` from the Docker registry (see [Docker driver authentication](https://www.nomadproject.io/docs/drivers/docker.html#authentication)). |
+| `spark.nomad.docker.password` | | Password used when downloading the Docker image specified by `spark.nomad.dockerImage` from the Docker registry (see [Docker driver authentication](https://www.nomadproject.io/docs/drivers/docker.html#authentication)). |
+| `spark.nomad.docker.serverAddress` | | Server address (domain/IP without the protocol) used when downloading the Docker image specified by `spark.nomad.dockerImage` from the Docker registry. Docker Hub is used by default (see [Docker driver authentication](https://www.nomadproject.io/docs/drivers/docker.html#authentication)). |
+| `spark.nomad.docker.username` | | Username used when downloading the Docker image specified by `spark.nomad.dockerImage` from the Docker registry (see [Docker driver authentication](https://www.nomadproject.io/docs/drivers/docker.html#authentication)). |
+| `spark.nomad.dockerImage` | | A [Docker image](https://www.nomadproject.io/docs/drivers/docker.html#image) to use to run Spark with Nomad's `docker` driver. When not specified, Nomad's `exec` driver is used instead. |
+| `spark.nomad.driver.cpu` | 1000 | How many MHz of CPU power Nomad should reserve for driver tasks. |
+| `spark.nomad.driver.logMaxFileSize` | 1m | Maximum size of each log file that Nomad keeps for driver tasks. |
+| `spark.nomad.driver.logMaxFiles` | 5 | Number of log files Nomad should keep for driver tasks. |
+| `spark.nomad.driver.networkMBits` | 1 | The network bandwidth, in Mbits, that Nomad should allocate to driver tasks during bin packing. |
+| `spark.nomad.driver.retryAttempts` | 5 | The number of times Nomad should retry driver task groups if they fail. |
+| `spark.nomad.driver.retryDelay` | 15s | How long Nomad should wait before retrying driver task groups if they fail. |
+| `spark.nomad.driver.retryInterval` | 1d | Nomad's retry interval for driver task groups. |
+| `spark.nomad.executor.cpu` | 1000 | How many MHz of CPU power Nomad should reserve for executor tasks. |
+| `spark.nomad.executor.logMaxFileSize` | 1m | Maximum size of each log file that Nomad keeps for executor tasks. |
+| `spark.nomad.executor.logMaxFiles` | 5 | Number of log files Nomad should keep for executor tasks. |
+| `spark.nomad.executor.networkMBits` | 1 | The network bandwidth, in Mbits, that Nomad should allocate to executor tasks during bin packing. |
+| `spark.nomad.executor.retryAttempts` | 5 | The number of times Nomad should retry executor task groups if they fail. |
+| `spark.nomad.executor.retryDelay` | 15s | How long Nomad should wait before retrying executor task groups if they fail. |
+| `spark.nomad.executor.retryInterval` | 1d | Nomad's retry interval for executor task groups. |
+| `spark.nomad.job` | | The Nomad job name to use. |
+| `spark.nomad.job.template` | | The path to a JSON file containing a Nomad job to use as a template. |
+| `spark.nomad.priority` | | The priority of the Nomad job that runs the application or its executors. |
+| `spark.nomad.region` | | The Nomad region to use (defaults to the region of the first Nomad server contacted). |
+| `spark.nomad.shuffle.cpu` | 1000 | How many MHz of CPU power Nomad should reserve for shuffle service tasks. |
+| `spark.nomad.shuffle.logMaxFileSize` | 1m | Maximum size of each log file that Nomad keeps for shuffle service tasks. |
+| `spark.nomad.shuffle.logMaxFiles` | 5 | Number of log files Nomad should keep for shuffle service tasks. |
+| `spark.nomad.shuffle.memory` | 256m | The amount of memory that Nomad should allocate for shuffle service tasks. |
+| `spark.nomad.shuffle.networkMBits` | 1 | The network bandwidth, in Mbits, that Nomad should allocate to shuffle service tasks during bin packing. |
+| `spark.nomad.sparkDistribution` | | The location of the Spark distribution tgz file to use. |
+| `spark.nomad.tls.caCert` | | Path to a .pem file containing the certificate authority against which to validate the Nomad server's TLS certificate. |
+| `spark.nomad.tls.cert` | | Path to a .pem file containing the TLS certificate to present to the Nomad server. |
+| `spark.nomad.tls.key` | | Path to a .pem file containing the private key corresponding to the certificate in `spark.nomad.tls.cert`. |
+
+
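+As a brief illustrative sketch (the property values below are placeholder
+assumptions, not recommendations), these settings can be applied through the
+standard `SparkConf` API, or equivalently passed to `spark-submit` with
+`--conf` flags:
+
+```scala
+import org.apache.spark.SparkConf
+
+// Sketch only: the Docker image name and resource figures are hypothetical.
+val conf = new SparkConf()
+  .set("spark.nomad.dockerImage", "example/spark:latest") // hypothetical image
+  .set("spark.nomad.driver.cpu", "2000")                  // reserve 2000 MHz for the driver
+  .set("spark.nomad.executor.cpu", "2000")                // reserve 2000 MHz per executor
+  .set("spark.nomad.executor.retryAttempts", "3")         // down from the default of 5
+  .set("spark.executor.instances", "4")                   // four executors
+```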
diff --git a/docs/security.md b/docs/security.md
index 9eda42888637..7a9b91309d99 100644
--- a/docs/security.md
+++ b/docs/security.md
@@ -6,7 +6,7 @@ title: Security
Spark currently supports authentication via a shared secret. Authentication can be configured to be on via the `spark.authenticate` configuration parameter. This parameter controls whether the Spark communication protocols do authentication using the shared secret. This authentication is a basic handshake to make sure both sides have the same shared secret and are allowed to communicate. If the shared secret is not identical they will not be allowed to communicate. The shared secret is created as follows:
-* For Spark on [YARN](running-on-yarn.html) deployments, configuring `spark.authenticate` to `true` will automatically handle generating and distributing the shared secret. Each application will use a unique shared secret.
+* For Spark on [YARN](running-on-yarn.html) or [Nomad](running-on-nomad.html) deployments, configuring `spark.authenticate` to `true` will automatically handle generating and distributing the shared secret. Each application will use a unique shared secret.
* For other types of Spark deployments, the Spark parameter `spark.authenticate.secret` should be configured on each of the nodes. This secret will be used by all the Master/Workers and applications.
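For this manually configured case, a minimal sketch (reading the secret from a `SPARK_SECRET` environment variable is an illustrative assumption, not a Spark convention):

```scala
import org.apache.spark.SparkConf

// Minimal sketch: shared-secret authentication for a non-YARN/Nomad deployment.
// Sourcing the secret from SPARK_SECRET is an assumption for illustration.
val conf = new SparkConf()
  .set("spark.authenticate", "true")
  .set("spark.authenticate.secret", sys.env("SPARK_SECRET"))
```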
## Web UI
diff --git a/docs/streaming-flume-integration.md b/docs/streaming-flume-integration.md
index a5d36da5b6de..2c6fa71fd2b2 100644
--- a/docs/streaming-flume-integration.md
+++ b/docs/streaming-flume-integration.md
@@ -68,7 +68,7 @@ configuring Flume agents.
Note that the hostname should be the same as the one used by the resource manager in the
- cluster (Mesos, YARN or Spark Standalone), so that resource allocation can match the names and launch
+ cluster (Mesos, YARN, Nomad or Spark Standalone), so that resource allocation can match the names and launch
the receiver in the right machine.
3. **Deploying:** As with any Spark applications, `spark-submit` is used to launch your application. However, the details are slightly different for Scala/Java applications and Python applications.
diff --git a/launcher/src/main/java/org/apache/spark/launcher/AbstractCommandBuilder.java b/launcher/src/main/java/org/apache/spark/launcher/AbstractCommandBuilder.java
index 6c0c3ebcaebf..a02f16094d31 100644
--- a/launcher/src/main/java/org/apache/spark/launcher/AbstractCommandBuilder.java
+++ b/launcher/src/main/java/org/apache/spark/launcher/AbstractCommandBuilder.java
@@ -157,6 +157,7 @@ List