Commit f2ea8f9

tnachen authored and CodingCat committed
[SPARK-9575] [MESOS] Add documentation around Mesos shuffle service.
Author: Timothy Chen <[email protected]>

Closes apache#7907 from tnachen/mesos_shuffle.
1 parent cbc29c7 commit f2ea8f9

File tree

1 file changed: +14 -0 lines changed


docs/running-on-mesos.md

Lines changed: 14 additions & 0 deletions
@@ -216,6 +216,20 @@ node. Please refer to [Hadoop on Mesos](https://github.com/mesos/hadoop).
 
 In either case, HDFS runs separately from Hadoop MapReduce, without being scheduled through Mesos.
 
+# Dynamic Resource Allocation with Mesos
+
+Mesos supports dynamic allocation only in coarse-grained mode, which can resize the number of executors
+based on application statistics. While dynamic allocation supports both scaling up and scaling down the
+number of executors, the coarse-grained scheduler only supports scaling down, since it is already designed
+to run one executor per slave with the configured amount of resources. However, after scaling down, the
+coarse-grained scheduler can scale back up to the same number of executors when Spark signals that more
+executors are needed.
+
+Users who want to use this feature should launch the Mesos Shuffle Service, which adds shuffle data
+cleanup functionality on top of the Shuffle Service, since Mesos does not yet support notifying other
+frameworks of a framework's termination. To start or stop the Mesos Shuffle Service, use the provided
+sbin/start-mesos-shuffle-service.sh and sbin/stop-mesos-shuffle-service.sh scripts.
+
+The Shuffle Service is expected to be running on each slave node that will run Spark executors. One easy
+way to achieve this with Mesos is to launch the Shuffle Service with Marathon, using a unique host
+constraint.
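As an illustration of the Marathon approach described above, a minimal app definition might look like the following. The app id, resource sizes, instance count, and script path are placeholders, not anything prescribed by the Spark docs; the `["hostname", "UNIQUE"]` constraint is Marathon's standard way of placing at most one instance per host.

```json
{
  "id": "spark-mesos-shuffle-service",
  "cmd": "/opt/spark/sbin/start-mesos-shuffle-service.sh && tail -f /dev/null",
  "cpus": 0.5,
  "mem": 1024,
  "instances": 10,
  "constraints": [["hostname", "UNIQUE"]]
}
```

Because the start script daemonizes, the `tail -f /dev/null` is one way to keep the Marathon task alive after launching the service; running the service in the foreground instead would also work.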
 
 # Configuration
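Putting the pieces of the added section together, a rough sketch of using this feature might look like the following. The master URL and jar path are placeholders, and the `spark.shuffle.service.enabled` / `spark.dynamicAllocation.enabled` keys are Spark's standard dynamic allocation settings rather than anything introduced by this patch.

```shell
# On every slave node that will run Spark executors, start the
# Mesos Shuffle Service using the script mentioned in the docs:
./sbin/start-mesos-shuffle-service.sh

# Submit an application in coarse-grained mode with dynamic
# allocation enabled (master URL and jar are placeholders):
./bin/spark-submit \
  --master mesos://host:5050 \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --class org.apache.spark.examples.SparkPi \
  examples/target/spark-examples.jar

# Later, to stop the service on a node:
./sbin/stop-mesos-shuffle-service.sh
```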
