[SPARK-28340][CORE] Noisy exceptions when tasks are killed: "DiskBlockObjectWriter: Uncaught exception while reverting partial writes to file: java.nio.channels.ClosedByInterruptException" #25674
Conversation
Is this second part meaningful? Maybe just "ClosedByInterruptException while reverting partial writes to file" + file (i.e. keep the file name)?
Remove the interpolation. Should you name the exception here for clarity? In both cases you could log the exception message in the string here instead, to preserve some detail.
It'd be nice to avoid repeating the results.put(...); is this worth it?
case e: Exception =>
  e match {
    case ce: ClosedByInterruptException => logError(...)
    case ex: Exception => logError(...)
  }
  results.put(...)
  return
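For context, here is a minimal self-contained sketch of that shape (a sketch only: runTask, the results queue, and the logError helpers are simplified placeholders, not Spark's actual Executor.TaskRunner code). The ClosedByInterruptException arm logs only the message, the generic arm keeps the stack trace, and results.put(...) appears exactly once after the match:

  import java.nio.channels.ClosedByInterruptException
  import java.util.concurrent.LinkedBlockingQueue

  object TaskFailureHandlingSketch {
    private val results = new LinkedBlockingQueue[String]()

    // Placeholder loggers standing in for Spark's Logging trait.
    private def logError(msg: String): Unit = System.err.println(msg)
    private def logError(msg: String, t: Throwable): Unit = {
      System.err.println(msg); t.printStackTrace()
    }

    def runTask(taskId: Long)(body: => String): Unit = {
      try {
        results.put(body)
      } catch {
        case e: Exception =>
          e match {
            // Task interrupted mid-I/O: log only the message, no stack trace.
            case _: ClosedByInterruptException =>
              logError(s"Task $taskId interrupted: ${e.getMessage}")
            // Any other failure keeps the full stack trace for debugging.
            case _ =>
              logError(s"Task $taskId failed", e)
          }
          // The failure result is recorded once, after the match.
          results.put(s"FAILED($taskId)")
      }
    }
  }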
Force-pushed from d4a14c6 to 6dac661.
@srowen thanks for your comments, the PR is updated.
@jerryshao do you have any comments?
ok to test.
Test build #110234 has finished for PR 25674 at commit
Merged to master
Closes apache#25674 from colinmjj/spark-28340.
Authored-by: colinma <[email protected]>
Signed-off-by: Sean Owen <[email protected]>
What changes were proposed in this pull request?
If a Spark task is killed by an intentional job kill, the automatic killing of redundant speculative tasks, etc., a ClosedByInterruptException is thrown when the task has an unfinished I/O operation on an AbstractInterruptibleChannel. A single cancelled task can result in hundreds of ClosedByInterruptException stack traces being logged.
With this change, the stack trace of ClosedByInterruptException is no longer logged, mirroring how Executor.run already handles InterruptedException.
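As a rough illustration of that logging pattern (a sketch only, with placeholder names; not the actual DiskBlockObjectWriter code), the revert path can catch ClosedByInterruptException separately and log a single line that keeps the file name, while other exceptions still get a full stack trace:

  import java.io.{File, FileOutputStream}
  import java.nio.channels.ClosedByInterruptException

  object RevertLoggingSketch {
    // Placeholder loggers standing in for Spark's Logging trait.
    private def logError(msg: String): Unit = System.err.println(msg)
    private def logError(msg: String, t: Throwable): Unit = {
      System.err.println(msg); t.printStackTrace()
    }

    // Truncate the file back to the last committed position; if the task was
    // interrupted mid-write, log quietly instead of dumping a stack trace.
    def revertPartialWrites(file: File, committedPosition: Long): Unit = {
      try {
        val truncateStream = new FileOutputStream(file, true)
        try truncateStream.getChannel.truncate(committedPosition)
        finally truncateStream.close()
      } catch {
        // Interrupted task: one concise line, keep the file name for context.
        case e: ClosedByInterruptException =>
          logError(s"Exception occurred while reverting partial writes to file $file, ${e.getMessage}")
        // Anything else is unexpected: keep the full stack trace.
        case e: Exception =>
          logError(s"Uncaught exception while reverting partial writes to file $file", e)
      }
    }
  }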
Why are the changes needed?
Large numbers of spurious exceptions are confusing to users who are inspecting Spark logs to diagnose other issues.
Does this PR introduce any user-facing change?
No
How was this patch tested?
N/A