[SPARK-11809] Switch the default Mesos mode to coarse-grained mode
Based on my conversations with people, I believe the consensus is that coarse-grained mode is more stable and easier to reason about. It is best to make it the default rather than the flakier fine-grained mode.
Author: Reynold Xin <[email protected]>
Closes #9795 from rxin/SPARK-11809.
docs/running-on-mesos.md (+17 −10)
@@ -161,21 +161,15 @@ Note that jars or python files that are passed to spark-submit should be URIs re
 # Mesos Run Modes
 
-Spark can run over Mesos in two modes: "fine-grained" (default) and "coarse-grained".
+Spark can run over Mesos in two modes: "coarse-grained" (default) and "fine-grained".
 
-In "fine-grained" mode (default), each Spark task runs as a separate Mesos task. This allows
-multiple instances of Spark (and other frameworks) to share machines at a very fine granularity,
-where each application gets more or fewer machines as it ramps up and down, but it comes with an
-additional overhead in launching each task. This mode may be inappropriate for low-latency
-requirements like interactive queries or serving web requests.
-
-The "coarse-grained" mode will instead launch only *one* long-running Spark task on each Mesos
+The "coarse-grained" mode will launch only *one* long-running Spark task on each Mesos
 machine, and dynamically schedule its own "mini-tasks" within it. The benefit is much lower startup
 overhead, but at the cost of reserving the Mesos resources for the complete duration of the
 application.
 
-To run in coarse-grained mode, set the `spark.mesos.coarse` property in your
-[SparkConf](configuration.html#spark-properties):
+Coarse-grained is the default mode. You can also set the `spark.mesos.coarse` property to true
+to turn it on explicitly in [SparkConf](configuration.html#spark-properties):
 
 {% highlight scala %}
 conf.set("spark.mesos.coarse", "true")
@@ -186,6 +180,19 @@ acquire. By default, it will acquire *all* cores in the cluster (that get offere
 only makes sense if you run just one application at a time. You can cap the maximum number of cores
 using `conf.set("spark.cores.max", "10")` (for example).
 
+In "fine-grained" mode, each Spark task runs as a separate Mesos task. This allows
+multiple instances of Spark (and other frameworks) to share machines at a very fine granularity,
+where each application gets more or fewer machines as it ramps up and down, but it comes with an
+additional overhead in launching each task. This mode may be inappropriate for low-latency
+requirements like interactive queries or serving web requests.
+
+To run in fine-grained mode, set the `spark.mesos.coarse` property to false in your
+[SparkConf](configuration.html#spark-properties):
+
+{% highlight scala %}
+conf.set("spark.mesos.coarse", "false")
+{% endhighlight %}
+
 You may also make use of `spark.mesos.constraints` to set attribute based constraints on mesos resource offers. By default, all resource offers will be accepted.
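For readers applying this change, here is a minimal sketch of selecting the run mode from application code. The master URL, app name, and core cap are hypothetical placeholders, not values from this change; `SparkConf` and `SparkContext` are the standard Spark APIs the docs above refer to:

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("MesosModeExample")          // hypothetical app name
  .setMaster("mesos://host:5050")          // replace with your Mesos master
  // After this change, coarse-grained is the default; setting the property
  // explicitly documents the intent. Use "false" to opt back into fine-grained.
  .set("spark.mesos.coarse", "true")
  // In coarse-grained mode, cap the cores acquired (otherwise all offered
  // cores are taken); "10" is an arbitrary example value.
  .set("spark.cores.max", "10")

val sc = new SparkContext(conf)
{% endhighlight %}

The same properties can be passed at launch time without code changes, e.g. `spark-submit --conf spark.mesos.coarse=false ...`.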