
Commit 14054ff

Sital Kedia authored and Marcelo Vanzin committed
[SPARK-21834] Incorrect executor request in case of dynamic allocation
## What changes were proposed in this pull request?

The killExecutor API currently does not allow killing an executor without also updating the total number of executors needed. When dynamic allocation is turned on and the allocator tries to kill an executor, the scheduler reduces the total number of executors needed (see https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala#L635), which is incorrect because the allocator already takes care of setting the required number of executors itself.

## How was this patch tested?

Ran a job on the cluster and verified that the executor request is correct.

Author: Sital Kedia <[email protected]>

Closes #19081 from sitalkedia/skedia/oss_fix_executor_allocation.

(cherry picked from commit 6949a9c)
Signed-off-by: Marcelo Vanzin <[email protected]>
1 parent d10c9dc · commit 14054ff

File tree

1 file changed: +3, −0 lines


core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala

Lines changed: 3 additions & 0 deletions
```diff
@@ -430,6 +430,9 @@ private[spark] class ExecutorAllocationManager(
       } else {
         client.killExecutors(executorIdsToBeRemoved)
       }
+      // [SPARK-21834] killExecutors api reduces the target number of executors.
+      // So we need to update the target with desired value.
+      client.requestTotalExecutors(numExecutorsTarget, localityAwareTasks, hostToLocalTaskCount)
       // reset the newExecutorTotal to the existing number of executors
       newExecutorTotal = numExistingExecutors
       if (testing || executorsRemoved.nonEmpty) {
```
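
For illustration, here is a minimal, self-contained Scala sketch of the interaction this patch fixes. `ToyBackend` and its fields are hypothetical stand-ins, not Spark's actual classes; only the behavior mirrors the description above: killing executors through the backend also lowers the backend's executor target, so the allocator must re-assert its own target afterwards.

```scala
// Hypothetical toy model of the SPARK-21834 interaction (not Spark code).
object Spark21834Sketch {

  // Models the backend behavior referenced in the commit message:
  // killing executors also lowers the internal executor target.
  class ToyBackend(var targetNumExecutors: Int) {
    var liveExecutors: Set[String] =
      (1 to targetNumExecutors).map(i => s"exec-$i").toSet

    def killExecutors(ids: Seq[String]): Unit = {
      liveExecutors --= ids
      targetNumExecutors -= ids.size // the side effect the allocator must undo
    }

    def requestTotalExecutors(n: Int): Unit =
      targetNumExecutors = n
  }

  def main(args: Array[String]): Unit = {
    val backend = new ToyBackend(targetNumExecutors = 4)
    val allocatorTarget = 4 // the allocator's own desired total, computed elsewhere

    // The allocator kills an idle executor...
    backend.killExecutors(Seq("exec-1"))
    assert(backend.targetNumExecutors == 3) // target silently dropped: the bug

    // ...so, as in the patch, it re-asserts its own target afterwards.
    backend.requestTotalExecutors(allocatorTarget)
    assert(backend.targetNumExecutors == 4)
    println(s"target restored to ${backend.targetNumExecutors}")
  }
}
```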
