
Commit 80b49be

jasonmoore2k authored and srowen committed
[SPARK-14915][CORE] Don't re-queue a task if another attempt has already succeeded
## What changes were proposed in this pull request?

Don't re-queue a task if another attempt has already succeeded. This currently happens when a speculative task is denied permission to commit its result because another copy of the task has already succeeded.

## How was this patch tested?

I'm running a job with enough skew in per-task processing time for speculation to trigger during the last quarter of tasks (default settings), causing many commit-denied exceptions to be thrown. Previously, these tasks were retried over and over again until the stage possibly completed, despite using compute resources on these superfluous attempts. With this change (applied to the 1.6 branch), they are no longer retried and the stage completes successfully without these extra task attempts.

Author: Jason Moore <[email protected]>

Closes #12751 from jasonmoore2k/SPARK-14915.

(cherry picked from commit 77361a4)
Signed-off-by: Sean Owen <[email protected]>
1 parent 666eb01 commit 80b49be

1 file changed: +10 −1 lines changed


core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala

Lines changed: 10 additions & 1 deletion
@@ -716,7 +716,16 @@ private[spark] class TaskSetManager(
     failedExecutors.getOrElseUpdate(index, new HashMap[String, Long]()).
       put(info.executorId, clock.getTimeMillis())
     sched.dagScheduler.taskEnded(tasks(index), reason, null, accumUpdates, info)
-    addPendingTask(index)
+
+    if (successful(index)) {
+      logInfo(
+        s"Task ${info.id} in stage ${taskSet.id} (TID $tid) failed, " +
+        "but another instance of the task has already succeeded, " +
+        "so not re-queuing the task to be re-executed.")
+    } else {
+      addPendingTask(index)
+    }
+
     if (!isZombie && state != TaskState.KILLED
       && reason.isInstanceOf[TaskFailedReason]
       && reason.asInstanceOf[TaskFailedReason].countTowardsTaskFailures) {
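
For readers skimming the diff, here is a minimal, self-contained Scala sketch of the guard it introduces. This is not Spark's actual TaskSetManager API; the `SimpleTaskTracker` class and the `markSucceeded`/`handleFailure` names below are hypothetical and exist only to illustrate the idea: a failed attempt is re-queued only when no other attempt of the same task index has already succeeded, which is exactly the situation a commit-denied speculative copy ends up in.

```scala
import scala.collection.mutable

// Hypothetical stand-in for the per-stage bookkeeping in TaskSetManager.
class SimpleTaskTracker(numTasks: Int) {
  private val successful = Array.fill(numTasks)(false)
  private val pending = mutable.Queue[Int]()

  // Record that some attempt of this task index has committed its result.
  def markSucceeded(index: Int): Unit = successful(index) = true

  // On a failed attempt, only re-queue if no other attempt already succeeded.
  def handleFailure(index: Int): Unit = {
    if (successful(index)) {
      // e.g. a speculative copy denied the commit: retrying would only burn resources.
      println(s"Task $index failed, but another attempt already succeeded; not re-queuing.")
    } else {
      pending.enqueue(index) // re-queue for another attempt, as before this change
    }
  }

  def pendingTasks: Seq[Int] = pending.toSeq
}

object SimpleTaskTrackerDemo extends App {
  val tracker = new SimpleTaskTracker(numTasks = 3)
  tracker.markSucceeded(1)      // task 1's first attempt succeeds
  tracker.handleFailure(1)      // its speculative copy is denied the commit and fails
  tracker.handleFailure(2)      // task 2 genuinely failed and should be retried
  println(tracker.pendingTasks) // List(2): only task 2 is re-queued
}
```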
