
Commit be46d1f
Address feedback from Marcelo and Patrick
1 parent 867a3ea, commit be46d1f
2 files changed (+9, -29 lines)


docs/cluster-overview.md

Lines changed: 4 additions & 8 deletions
@@ -118,17 +118,13 @@ If you are ever unclear where configuration options are coming from. fine-graine
 information can be printed by adding the `--verbose` option to `./spark-submit`.
 
 ### Advanced Dependency Management
-When using `./bin/spark-submit` the app jar will be automatically transferred to the cluster. For
-many users this is sufficient. However, advanced users can add jars by calling `addFile` or `addJar`
-on an existing SparkContext. This can be used to distribute JAR files (Java/Scala) or .egg and
-.zip libraries (Python) to executors. Spark uses the following URL scheme to allow different
+When using `./bin/spark-submit` the app jar along with any jars included with the `--jars` option
+will be automatically transferred to the cluster. `--jars` can also be used to distribute .egg and .zip
+libraries for Python to executors. Spark uses the following URL scheme to allow different
 strategies for disseminating jars:
 
 - **file:** - Absolute paths and `file:/` URIs are served by the driver's HTTP file server, and
-every executor pulls the file from the driver HTTP server. When running the driver in the cluster,
-the jars need a way of getting from the client to the driver so that it can host them. This is not
-currently supported with Spark standalone, and on YARN this requires passing additional jars on the
-command line with the `--jars` option.
+every executor pulls the file from the driver HTTP server.
 - **hdfs:**, **http:**, **https:**, **ftp:** - these pull down files and JARs from the URI as expected
 - **local:** - a URI starting with local:/ is expected to exist as a local file on each worker node. This
 means that no network IO will be incurred, and works well for large files/JARs that are pushed to each worker,
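The `--jars` behavior described in the updated cluster-overview text can be sketched with a hypothetical `spark-submit` invocation. The application class, jar paths, and master URL below are illustrative placeholders, not part of the commit:

```shell
# Sketch of the --jars usage described in the diff above.
# All paths, class names, and the master URL are hypothetical.

# Local jars (absolute paths / file: URIs) are served by the driver's
# HTTP file server and pulled by every executor:
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://host:7077 \
  --jars /opt/libs/extra-lib.jar,file:/opt/libs/other-lib.jar \
  my-app.jar

# A local: URI is expected to already exist on each worker node, so no
# network IO is incurred; good for large jars pushed to every worker:
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://host:7077 \
  --jars local:/opt/libs/preinstalled-lib.jar \
  my-app.jar
```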

docs/running-on-yarn.md

Lines changed: 5 additions & 21 deletions
@@ -5,29 +5,11 @@ title: Launching Spark on YARN
 
 Support for running on [YARN (Hadoop
 NextGen)](http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/YARN.html)
-was added to Spark in version 0.6.0, and improved in 0.7.0 and 0.8.0.
-
-# Building a YARN-Enabled Assembly JAR
-
-We need a consolidated Spark JAR (which bundles all the required dependencies) to run Spark jobs on a YARN cluster.
-This can be built by setting the Hadoop version and `SPARK_YARN` environment variable, as follows:
-
-    mvn package -Pyarn -Dyarn.version=2.2.0 -Dhadoop.version=2.2.0 -DskipTests
-
-The assembled JAR will be something like this:
-`./assembly/target/scala-{{site.SCALA_BINARY_VERSION}}/spark-assembly_{{site.SPARK_VERSION}}-hadoop2.2.0.jar`.
-
-The build process also supports YARN versions older than 2.2.0 (e.g. 0.23.x).
-
-    mvn package -Pyarn-alpha -Dyarn.version=0.23.7 -Dhadoop.version=0.23.7 -DskipTests
+was added to Spark in version 0.6.0, and improved in subsequent releases.
 
 # Preparations
 
-- Building a YARN-enabled assembly (see above).
-- The assembled jar can be installed into HDFS or used locally.
-- Your application code must be packaged into a separate JAR file.
-
-If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_{{site.SCALA_BINARY_VERSION}}-{{site.SPARK_VERSION}}` file can be generated by running `sbt/sbt assembly`. NOTE: since the documentation you're reading is for Spark version {{site.SPARK_VERSION}}, we are assuming here that you have downloaded Spark {{site.SPARK_VERSION}} or checked it out of source control. If you are using a different version of Spark, the version numbers in the jar generated by the sbt package command will obviously be different.
+We need a consolidated Spark JAR (which bundles all the required dependencies) to run Spark jobs on a YARN cluster. The jar must be built with options that enable YARN support. To build this jar yourself, refer to the [building with maven guide](building-with-maven.html).
 
 # Configuration
 
@@ -46,10 +28,12 @@ System Properties:
 * `spark.yarn.max.executor.failures`, the maximum number of executor failures before failing the application. Default is the number of executors requested times 2 with minimum of 3.
 * `spark.yarn.historyServer.address`, the address of the Spark history server (i.e. host.com:18080). The address should not contain a scheme (http://). Defaults to not being set since the history server is an optional service. This address is given to the Yarn ResourceManager when the Spark application finishes to link the application from the ResourceManager UI to the Spark history server UI.
 
+By default, Spark on YARN will use a Spark jar installed locally, but the Spark jar can also be in a world-readable location on HDFS. This allows YARN to cache it on nodes so that it doesn't need to be distributed each time an application runs. To point to a jar on HDFS, export SPARK_JAR=hdfs:/some/path.
+
 # Launching Spark on YARN
 
 Ensure that HADOOP_CONF_DIR or YARN_CONF_DIR points to the directory which contains the (client side) configuration files for the Hadoop cluster.
-These configs are used to write to the dfs, and connect to the YARN ResourceManager.
+These configs are used to write to the dfs and connect to the YARN ResourceManager.
 
 There are two deploy modes that can be used to launch Spark applications on YARN. In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
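The running-on-yarn changes can be pulled together into a hypothetical launch sketch. The Hadoop configuration path, HDFS jar location, and application class below are placeholders for illustration; only the environment variable names and the two deploy modes come from the doc text:

```shell
# Hypothetical sketch combining the settings described in the diff above;
# paths and class names are placeholders.

# Point Spark at the Hadoop client-side configuration files, used to
# write to the dfs and to reach the YARN ResourceManager:
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Optionally reference a world-readable Spark jar on HDFS so YARN can
# cache it on the nodes instead of distributing it on every run:
export SPARK_JAR=hdfs:/some/path/spark-assembly.jar

# yarn-cluster mode: the driver runs inside a YARN-managed application
# master, so the client may exit after submitting:
./bin/spark-submit \
  --class com.example.MyApp \
  --master yarn-cluster \
  my-app.jar

# yarn-client mode: the driver runs in the client process; the
# application master only requests resources from YARN:
./bin/spark-submit \
  --class com.example.MyApp \
  --master yarn-client \
  my-app.jar
```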
5539
