docs/running-on-mesos.md: 14 additions & 0 deletions
@@ -216,6 +216,20 @@ node. Please refer to [Hadoop on Mesos](https://github.com/mesos/hadoop).
 In either case, HDFS runs separately from Hadoop MapReduce, without being scheduled through Mesos.
 
+# Dynamic Resource Allocation with Mesos
+
+Mesos supports dynamic allocation only in coarse-grained mode; in this mode Spark can resize the number of executors
+based on application statistics. While dynamic allocation supports both scaling the number of executors up and down,
+the coarse-grained scheduler only supports scaling down, since it is already designed to run one executor per slave
+with the configured amount of resources. However, after scaling down, the coarse-grained scheduler can scale back up
+to the same number of executors when Spark signals that more executors are needed.
+
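For orientation (not part of the patch), the application-side switches this paragraph refers to are ordinary Spark properties. A minimal sketch of a submission with dynamic allocation and the external shuffle service enabled on a coarse-grained Mesos cluster might look like the following, where the master URL, class, jar, and executor bounds are placeholder values:

```bash
# Sketch: submit an application with dynamic allocation on Mesos coarse-grained
# mode. Master URL, class, jar, and executor bounds are placeholders.
./bin/spark-submit \
  --class com.example.MyApp \
  --master mesos://mesos-master.example.com:5050 \
  --conf spark.mesos.coarse=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  my-app.jar
```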
+Users who want to use this feature should launch the Mesos Shuffle Service, which provides shuffle data cleanup
+functionality on top of the Shuffle Service, since Mesos does not yet support notifying other frameworks when a
+framework terminates. To launch and stop the Mesos Shuffle Service, use the provided
+sbin/start-mesos-shuffle-service.sh and sbin/stop-mesos-shuffle-service.sh scripts respectively.
+
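The launch and teardown steps described above are plain script invocations on each node. Assuming a standard Spark layout with SPARK_HOME pointing at the installation, a sketch:

```bash
# On every slave node that will run Spark executors (SPARK_HOME is assumed to
# point at the Spark installation directory):
$SPARK_HOME/sbin/start-mesos-shuffle-service.sh   # start the Mesos Shuffle Service
$SPARK_HOME/sbin/stop-mesos-shuffle-service.sh    # stop it again
```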
+The Shuffle Service is expected to be running on each slave node that will run Spark executors. One way to achieve
+this easily with Mesos is to launch the Shuffle Service via Marathon with a unique host constraint.
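To illustrate the Marathon approach (again, not part of the patch), a unique host constraint limits Marathon to at most one instance per hostname. A rough sketch of registering such an app through Marathon's REST API follows; the Marathon address, app id, Spark path, resource sizes, and instance count are all assumptions, and a real deployment would need the service to stay in the foreground so Marathon can supervise it:

```bash
# Sketch only: one shuffle-service instance per host via a UNIQUE hostname
# constraint. Marathon URL, app id, paths, and sizes are placeholders; the
# start script shown daemonizes, so in practice the command would have to keep
# the service in the foreground for Marathon to supervise it.
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H 'Content-Type: application/json' \
  -d '{
        "id": "/spark-mesos-shuffle-service",
        "cmd": "/opt/spark/sbin/start-mesos-shuffle-service.sh",
        "cpus": 0.5,
        "mem": 1024,
        "instances": 10,
        "constraints": [["hostname", "UNIQUE"]]
      }'
```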
0 commit comments