Commit 313f202

aarondav authored and pwendell committed

SPARK-2282: Reuse PySpark Accumulator sockets to avoid crashing Spark
JIRA: https://issues.apache.org/jira/browse/SPARK-2282

This issue is caused by a buildup of sockets in the TIME_WAIT state of TCP, a state that persists for some period after a connection closes. This fix simply allows us to reuse local addresses still in TIME_WAIT, avoiding the buildup caused by the rapid creation of these sockets.

Author: Aaron Davidson <[email protected]>

Closes #1220 from aarondav/SPARK-2282 and squashes the following commits:

2e5cab3 [Aaron Davidson] SPARK-2282: Reuse PySpark Accumulator sockets to avoid crashing Spark

(cherry picked from commit 97a0bfe)

Signed-off-by: Patrick Wendell <[email protected]>
1 parent cf1d46e commit 313f202

File tree

1 file changed: +2 −0 lines changed

core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala

Lines changed: 2 additions & 0 deletions

@@ -402,6 +402,8 @@ private class PythonAccumulatorParam(@transient serverHost: String, serverPort:
     } else {
       // This happens on the master, where we pass the updates to Python through a socket
       val socket = new Socket(serverHost, serverPort)
+      // SPARK-2282: Immediately reuse closed sockets because we create one per task.
+      socket.setReuseAddress(true)
       val in = socket.getInputStream
       val out = new DataOutputStream(new BufferedOutputStream(socket.getOutputStream, bufferSize))
       out.writeInt(val2.size)
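The two added lines rely on the standard `SO_REUSEADDR` socket option from `java.net.Socket`. A minimal, self-contained sketch of the pattern (an unconnected socket is used here so the example runs anywhere; the real code connects to the accumulator server at `serverHost:serverPort`):

```scala
import java.net.Socket

object ReuseAddressSketch {
  def main(args: Array[String]): Unit = {
    // An unconnected socket; PythonAccumulatorParam instead does
    // `new Socket(serverHost, serverPort)` once per task.
    val socket = new Socket()
    // SO_REUSEADDR lets the OS rebind a local address that is still in
    // TIME_WAIT, so rapid per-task socket creation does not pile up
    // unusable sockets and exhaust local ports.
    socket.setReuseAddress(true)
    assert(socket.getReuseAddress)
    socket.close()
  }
}
```

Setting the option is cheap and only affects local address reuse, which is why the patch can apply it unconditionally to every accumulator-update socket.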
