
Commit f776bc9

woshilaiceshide authored and mateiz committed
[CORE] SPARK-2640: In "local[N]", free cores of the only executor should be touched by "spark.task.cpus" for every finish/start-up of tasks.
Make spark's "local[N]" better. In our company, we use "local[N]" in production. It works excellently. It's our best choice.

Author: woshilaiceshide <[email protected]>

Closes #1544 from woshilaiceshide/localX and squashes the following commits:

6c85154 [woshilaiceshide] [CORE] SPARK-2640: In "local[N]", free cores of the only executor should be touched by "spark.task.cpus" for every finish/start-up of tasks.
1 parent 2592111 commit f776bc9

1 file changed (+2, -2 lines)

core/src/main/scala/org/apache/spark/scheduler/local/LocalBackend.scala

Lines changed: 2 additions & 2 deletions
@@ -57,7 +57,7 @@ private[spark] class LocalActor(
     case StatusUpdate(taskId, state, serializedData) =>
       scheduler.statusUpdate(taskId, state, serializedData)
       if (TaskState.isFinished(state)) {
-        freeCores += 1
+        freeCores += scheduler.CPUS_PER_TASK
         reviveOffers()
       }

@@ -68,7 +68,7 @@ private[spark] class LocalActor(
   def reviveOffers() {
     val offers = Seq(new WorkerOffer(localExecutorId, localExecutorHostname, freeCores))
     for (task <- scheduler.resourceOffers(offers).flatten) {
-      freeCores -= 1
+      freeCores -= scheduler.CPUS_PER_TASK
       executor.launchTask(executorBackend, task.taskId, task.name, task.serializedTask)
     }
   }
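
A minimal sketch of where this change matters, assuming a standalone driver program. SparkConf, SparkContext, the "local[N]" master string, and the "spark.task.cpus" setting are standard Spark API; the object name, app name, and numbers below are illustrative only.

import org.apache.spark.{SparkConf, SparkContext}

object LocalCpusPerTaskSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[4]")            // one local executor offering 4 cores
      .setAppName("cpus-per-task-sketch")
      .set("spark.task.cpus", "2")      // each task is supposed to reserve 2 cores
    val sc = new SparkContext(conf)

    // With the fix, LocalBackend debits/credits freeCores by scheduler.CPUS_PER_TASK,
    // so the local executor's bookkeeping matches spark.task.cpus and at most
    // 4 / 2 = 2 tasks run concurrently here. Before, freeCores changed by 1 per task,
    // so local[N] could oversubscribe relative to spark.task.cpus.
    println(sc.parallelize(1 to 100, 8).map(_ * 2).sum())

    sc.stop()
  }
}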
