Commit d5fe17b

Update the YARN docs
1 parent 5c5e7d5 commit d5fe17b

File tree

docs/hadoop-third-party-distributions.md
docs/running-on-yarn.md

2 files changed: 19 additions & 8 deletions

docs/hadoop-third-party-distributions.md

Lines changed: 7 additions & 4 deletions
@@ -9,12 +9,11 @@ with these distributions:
 
 # Compile-time Hadoop Version
 
-When compiling Spark, you'll need to
-[set the SPARK_HADOOP_VERSION flag](index.html#a-note-about-hadoop-versions):
+When compiling Spark, you'll need to specify the Hadoop version by [defining the hadoop.version property](building-with-maven.html):
 
-    SPARK_HADOOP_VERSION=1.0.4 sbt/sbt assembly
+    mvn -Dhadoop.version=1.0.4 -DskipTests clean package
 
-The table below lists the corresponding `SPARK_HADOOP_VERSION` code for each CDH/HDP release. Note that
+The table below lists the corresponding `hadoop.version` code for each CDH/HDP release. Note that
 some Hadoop releases are binary compatible across client versions. This means the pre-built Spark
 distribution may "just work" without you needing to compile. That said, we recommend compiling with
 the _exact_ Hadoop version you are running to avoid any compatibility errors.

@@ -46,6 +45,10 @@ the _exact_ Hadoop version you are running to avoid any compatibility errors.
 </tr>
 </table>
 
+In SBT, the equivalent can be achieved by setting the SPARK_HADOOP_VERSION flag:
+
+    SPARK_HADOOP_VERSION=1.0.4 sbt/sbt assembly
+
 # Linking Applications to the Hadoop Version
 
 In addition to compiling Spark itself against the right version, you need to add a Maven dependency on that
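
For reference, a minimal sketch of the two build paths side by side, assuming a CDH release; the `2.0.0-mr1-cdh4.2.0` string is only an example value, and the table in this file remains the authoritative `hadoop.version` mapping for each CDH/HDP release:

    # Maven: build against the distribution's Hadoop client version
    mvn -Dhadoop.version=2.0.0-mr1-cdh4.2.0 -DskipTests clean package

    # SBT: the same version passed through the SPARK_HADOOP_VERSION flag
    SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly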

docs/running-on-yarn.md

Lines changed: 12 additions & 4 deletions
@@ -43,18 +43,19 @@ Unlike in Spark standalone and Mesos mode, in which the master's address is spec
 
 To launch a Spark application in yarn-cluster mode:
 
-    ./bin/spark-submit --class path.to.your.Class --master yarn-cluster [options] <app jar> [app options]
+    ./bin/spark-submit --class path.to.your.Class --master yarn-cluster --deploy-mode cluster [options] <app jar> [app options]
 
 For example:
 
     $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
         --master yarn-cluster \
+        --deploy-mode cluster \
         --num-executors 3 \
         --driver-memory 4g \
         --executor-memory 2g \
         --executor-cores 1
         examples/target/scala-{{site.SCALA_BINARY_VERSION}}/spark-examples-assembly-{{site.SPARK_VERSION}}.jar \
-        yarn-cluster 5
+        10
 
 The above starts a YARN client program which starts the default Application Master. Then SparkPi will be run as a child thread of Application Master. The client will periodically poll the Application Master for status updates and display them in the console. The client will exit once your application has finished running. Refer to the "Viewing Logs" section below for how to see driver and executor logs.
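
Once the application is submitted, the same status information the client polls for can also be pulled directly from YARN; a minimal sketch, assuming the `yarn` CLI is on the PATH and using a placeholder application ID:

    # List applications currently known to the ResourceManager
    yarn application -list

    # Show the state, tracking URL and final status of one application
    yarn application -status application_1404821831241_0001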

@@ -68,11 +69,12 @@ In yarn-cluster mode, the driver runs on a different machine than the client, so
 
     $ ./bin/spark-submit --class my.main.Class \
         --master yarn-cluster \
+        --deploy-mode cluster \
         --jars my-other-jar.jar,my-other-other-jar.jar
         my-main-jar.jar
-        yarn-cluster 5
+        [app arguments]
 
-# Viewing logs
+# Debugging your Application
 
 In YARN terminology, executors and application masters run inside "containers". YARN has two modes for handling container logs after an application has completed. If log aggregation is turned on (with the yarn.log-aggregation-enable config), container logs are copied to HDFS and deleted on the local machine. These logs can be viewed from anywhere on the cluster with the "yarn logs" command.
@@ -82,6 +84,12 @@ will print out the contents of all log files from all containers from the given
 
 When log aggregation isn't turned on, logs are retained locally on each machine under YARN_APP_LOGS_DIR, which is usually configured to /tmp/logs or $HADOOP_HOME/logs/userlogs depending on the Hadoop version and installation. Viewing logs for a container requires going to the host that contains them and looking in this directory. Subdirectories organize log files by application ID and container ID.
 
+To review the per-container launch environment, increase yarn.nodemanager.delete.debug-delay-sec to a
+large value (e.g. 36000), and then access the application cache through yarn.nodemanager.local-dirs
+on the nodes on which containers are launched. This directory contains the launch script, jars, and
+all environment variables used for launching each container. This process is useful for debugging
+classpath problems in particular.
+
 # Important notes
 
 - Before Hadoop 2.2, YARN does not support cores in container resource requests. Thus, when running against an earlier version, the numbers of cores given via command line arguments cannot be passed to YARN. Whether core requests are honored in scheduling decisions depends on which scheduler is in use and how it is configured.
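
A rough sketch of what inspecting that application cache can look like once the delete delay has been raised; the path below assumes the common default for yarn.nodemanager.local-dirs and uses a placeholder application ID, both of which will differ per installation:

    # On the NodeManager host that ran the containers, after raising
    # yarn.nodemanager.delete.debug-delay-sec (path assumes the default local-dirs)
    ls /tmp/hadoop-yarn/nm-local-dir/usercache/$USER/appcache/application_1404821831241_0001/

    # launch_container.sh inside each container directory records the exact
    # classpath and environment variables that container was started with
    cat /tmp/hadoop-yarn/nm-local-dir/usercache/$USER/appcache/application_1404821831241_0001/container_*/launch_container.sh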
