core/src/main/scala/org/apache/spark/scheduler
1 file changed: +5 -4 lines

@@ -287,10 +287,11 @@ private[spark] class TaskSchedulerImpl(
   }
 
   /**
-   * SPARK-25250: Whenever any Result Task gets successfully completed, we simply mark the
-   * corresponding partition id as completed in all attempts for that particular stage. As a
-   * result, we do not see any Killed tasks due to TaskCommitDenied Exceptions showing up in the UI.
-   */
+   * SPARK-25250: Whenever any Result Task gets successfully completed, we simply mark the
+   * corresponding partition id as completed in all attempts for that particular stage. As a
+   * result, we do not see any Killed tasks due to TaskCommitDenied Exceptions showing up
+   * in the UI.
+   */
   override def markPartitionIdAsCompletedAndKillCorrespondingTaskAttempts(
       partitionId: Int, stageId: Int): Unit = {
     taskSetsByStageIdAndAttempt.getOrElse(stageId, Map()).values.foreach { tsm =>
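For context, below is a minimal standalone sketch (not the actual Spark implementation) of the bookkeeping the comment describes: once any attempt of a stage completes a result partition, every task set manager registered for that stage is told the partition is done, so other attempts do not re-run it and later show up as Killed with TaskCommitDenied. The TaskSetManagerLike trait, SimpleTaskSetManager class, and markPartitionCompleted helper are illustrative assumptions, not Spark APIs; only taskSetsByStageIdAndAttempt and the outer method name come from the diff above.

import scala.collection.mutable

// Illustrative stand-in for Spark's TaskSetManager (assumption, not Spark source).
trait TaskSetManagerLike {
  def markPartitionCompleted(partitionId: Int): Unit
}

class SimpleTaskSetManager(val stageId: Int, val stageAttemptId: Int) extends TaskSetManagerLike {
  private val completedPartitions = mutable.Set[Int]()

  // Record that this partition no longer needs to run in this attempt.
  override def markPartitionCompleted(partitionId: Int): Unit =
    completedPartitions += partitionId

  def isCompleted(partitionId: Int): Boolean = completedPartitions.contains(partitionId)
}

object SchedulerSketch {
  // stageId -> (stageAttemptId -> manager), mirroring taskSetsByStageIdAndAttempt in the diff.
  private val taskSetsByStageIdAndAttempt =
    mutable.HashMap[Int, mutable.HashMap[Int, SimpleTaskSetManager]]()

  def registerAttempt(tsm: SimpleTaskSetManager): Unit =
    taskSetsByStageIdAndAttempt
      .getOrElseUpdate(tsm.stageId, mutable.HashMap())
      .update(tsm.stageAttemptId, tsm)

  // Analogue of markPartitionIdAsCompletedAndKillCorrespondingTaskAttempts: broadcast the
  // completed partition id to every attempt of the stage.
  def markPartitionIdAsCompleted(partitionId: Int, stageId: Int): Unit =
    taskSetsByStageIdAndAttempt.getOrElse(stageId, mutable.HashMap()).values.foreach { tsm =>
      tsm.markPartitionCompleted(partitionId)
    }

  def main(args: Array[String]): Unit = {
    val attempt0 = new SimpleTaskSetManager(stageId = 3, stageAttemptId = 0)
    val attempt1 = new SimpleTaskSetManager(stageId = 3, stageAttemptId = 1)
    registerAttempt(attempt0)
    registerAttempt(attempt1)

    // Partition 7 finishes in one attempt; both attempts now treat it as done.
    markPartitionIdAsCompleted(partitionId = 7, stageId = 3)
    println(attempt0.isCompleted(7)) // true
    println(attempt1.isCompleted(7)) // true: the other attempt will not re-run partition 7
  }
}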