`docs/running-on-yarn.md`: 17 additions and 5 deletions
@@ -17,10 +17,6 @@ To build Spark yourself, refer to the [building with Maven guide](building-with-
 
 Most of the configs are the same for Spark on YARN as for other deployment modes. See the [configuration page](configuration.html) for more information on those. These are configs that are specific to Spark on YARN.
 
-#### Environment Variables
-
-* `SPARK_YARN_USER_ENV`, to add environment variables to the Spark processes launched on YARN. This can be a comma separated list of environment variables, e.g. `SPARK_YARN_USER_ENV="JAVA_HOME=/jdk64,FOO=bar"`.
-
 #### Spark Properties
 
 <table class="table">
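For reference, a minimal sketch of how the `SPARK_YARN_USER_ENV` variable described in the removed bullet is typically used: it is exported in the shell before invoking `spark-submit`. The application class and jar below are placeholders, not part of this change.

```bash
# Sketch only: export SPARK_YARN_USER_ENV so the listed variables are added
# to the Spark processes launched on YARN (values taken from the doc's example).
# "org.example.MyApp" and "my-app.jar" are placeholder names.
export SPARK_YARN_USER_ENV="JAVA_HOME=/jdk64,FOO=bar"

./bin/spark-submit \
  --master yarn-cluster \
  --class org.example.MyApp \
  my-app.jar
```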
@@ -110,7 +106,23 @@ Most of the configs are the same for Spark on YARN as for other deployment modes
 <td><code>spark.yarn.access.namenodes</code></td>
 <td>(none)</td>
 <td>
-A list of secure HDFS namenodes your Spark application is going to access. For example, `spark.yarn.access.namenodes=hdfs://nn1.com:8032,hdfs://nn2.com:8032`. The Spark application must have acess to the namenodes listed and Kerberos must be properly configured to be able to access them (either in the same realm or in a trusted realm). Spark acquires security tokens for each of the namenodes so that the Spark application can access those remote HDFS clusters.
+A list of secure HDFS namenodes your Spark application is going to access. For
+example, `spark.yarn.access.namenodes=hdfs://nn1.com:8032,hdfs://nn2.com:8032`.
+The Spark application must have access to the namenodes listed and Kerberos must
+be properly configured to be able to access them (either in the same realm or in
+a trusted realm). Spark acquires security tokens for each of the namenodes so that
+the Spark application can access those remote HDFS clusters.
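For illustration, a hedged sketch of how the reworded `spark.yarn.access.namenodes` property might be supplied at submission time. The namenode URIs come from the example in the description; the application class and jar are placeholders.

```bash
# Sketch only: pass spark.yarn.access.namenodes with --conf so Spark acquires
# security tokens for each listed namenode before the application starts.
# "org.example.MyApp" and "my-app.jar" are placeholder names.
./bin/spark-submit \
  --master yarn-cluster \
  --conf spark.yarn.access.namenodes=hdfs://nn1.com:8032,hdfs://nn2.com:8032 \
  --class org.example.MyApp \
  my-app.jar
```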