The above starts a YARN client program which starts the default Application Master. SparkPi will then be run as a child thread of the Application Master. The client will periodically poll the Application Master for status updates and display them in the console. The client will exit once your application has finished running. Refer to the "Debugging your Application" section below for how to see driver and executor logs.
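For reference, SparkPi can be launched in this mode with a command along the following lines (the examples jar path and the final argument, the number of partitions, are illustrative and depend on your build):

    $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
        --master yarn-cluster \
        lib/spark-examples*.jar \
        10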
In yarn-cluster mode, the driver runs on a different machine than the client, so `SparkContext.addJar` won't work out of the box with files that are local to the client. To make files on the client available to `SparkContext.addJar`, include them with the `--jars` option in the launch command:

    $ ./bin/spark-submit --class my.main.Class \
        --master yarn-cluster \
        --deploy-mode cluster \
        --jars my-other-jar.jar,my-other-other-jar.jar \
        my-main-jar.jar \
        [app arguments]

# Debugging your Application
In YARN terminology, executors and application masters run inside "containers". YARN has two modes for handling container logs after an application has completed. If log aggregation is turned on (with the `yarn.log-aggregation-enable` config), container logs are copied to HDFS and deleted on the local machine. These logs can be viewed from anywhere on the cluster with the `yarn logs` command.
Running `yarn logs -applicationId <app ID>` will print out the contents of all log files from all containers from the given application.
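For example, assuming a made-up application ID of the usual `application_<timestamp>_<counter>` form, the aggregated logs can be paged through with:

    $ yarn logs -applicationId application_1409930815071_0002 | less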
When log aggregation isn't turned on, logs are retained locally on each machine under `YARN_APP_LOGS_DIR`, which is usually configured to `/tmp/logs` or `$HADOOP_HOME/logs/userlogs` depending on the Hadoop version and installation. Viewing logs for a container requires going to the host that contains them and looking in this directory. Subdirectories organize log files by application ID and container ID.
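As an illustration of the layout only (the IDs below are made up, and the actual root directory depends on your installation), a container's stdout log would sit at a path like:

    <YARN_APP_LOGS_DIR>/<application ID>/<container ID>/stdout
    # e.g. $HADOOP_HOME/logs/userlogs/application_1409930815071_0002/container_1409930815071_0002_01_000001/stdout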
To review the per-container launch environment, increase `yarn.nodemanager.delete.debug-delay-sec` to a large value (e.g. 36000), and then access the application cache through `yarn.nodemanager.local-dirs` on the nodes on which containers are launched. This directory contains the launch script, jars, and all environment variables used for launching each container. This process is useful for debugging classpath problems in particular.
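For example, the delay can be raised in `yarn-site.xml` on the NodeManager hosts (the ten-hour value below is only an illustration; note that this keeps containers' working directories on disk after they exit):

    <property>
      <name>yarn.nodemanager.delete.debug-delay-sec</name>
      <value>36000</value>
    </property>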
# Important notes
- Before Hadoop 2.2, YARN does not support cores in container resource requests. Thus, when running against an earlier version, the number of cores given via command-line arguments cannot be passed to YARN. Whether core requests are honored in scheduling decisions depends on which scheduler is in use and how it is configured (see the example below).
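As a sketch of where this matters (the flag values and jar name are illustrative), on a pre-2.2 cluster the requested cores in a submission like the following are not forwarded to YARN, though Spark still uses `--executor-cores` to decide how many tasks to run concurrently in each executor:

    $ ./bin/spark-submit --class my.main.Class \
        --master yarn-cluster \
        --num-executors 3 \
        --executor-cores 4 \
        --executor-memory 2g \
        my-main-jar.jar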