
Commit 25c01c5

jerryshao authored and tdas committed
[STREAMING] [MINOR] Close files correctly when iterator is finished in streaming WAL recovery
Currently there's no way to close the file correctly after the iteration is finished, so switch to `CompletionIterator` to avoid resource leakage.

Author: jerryshao <[email protected]>

Closes apache#6050 from jerryshao/close-file-correctly and squashes the following commits:

52dfaf5 [jerryshao] Close files correctly when iterator is finished
1 parent 8e67433 commit 25c01c5
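
The fix relies on Spark's `CompletionIterator` (in `org.apache.spark.util`), which wraps an iterator and runs a completion callback once the wrapped iterator is exhausted; here that callback closes the per-file reader. Below is a minimal, self-contained Scala sketch of that pattern, for illustration only: `ClosingIterator` is a hypothetical name, not Spark's actual implementation.

// Illustrative stand-in for the pattern behind Spark's CompletionIterator:
// delegate to an underlying iterator and run a completion callback exactly
// once, as soon as hasNext first reports that no elements remain.
class ClosingIterator[A](underlying: Iterator[A], onCompletion: () => Unit) extends Iterator[A] {
  private var completed = false

  override def hasNext: Boolean = {
    val more = underlying.hasNext
    if (!more && !completed) {
      completed = true
      onCompletion() // e.g. close the file handle backing the iterator
    }
    more
  }

  override def next(): A = underlying.next()
}

Note that in this sketch the callback only fires when the iterator is actually drained; an iterator abandoned partway through would still hold its resource open.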

File tree

1 file changed: +3 -2 lines


streaming/src/main/scala/org/apache/spark/streaming/util/FileBasedWriteAheadLog.scala

Lines changed: 3 additions & 2 deletions
@@ -26,7 +26,7 @@ import scala.language.postfixOps
 import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.fs.Path
 
-import org.apache.spark.util.ThreadUtils
+import org.apache.spark.util.{CompletionIterator, ThreadUtils}
 import org.apache.spark.{Logging, SparkConf}
 
 /**
@@ -124,7 +124,8 @@ private[streaming] class FileBasedWriteAheadLog(
 
     logFilesToRead.iterator.map { file =>
       logDebug(s"Creating log reader with $file")
-      new FileBasedWriteAheadLogReader(file, hadoopConf)
+      val reader = new FileBasedWriteAheadLogReader(file, hadoopConf)
+      CompletionIterator[ByteBuffer, Iterator[ByteBuffer]](reader, reader.close _)
     } flatMap { x => x }
   }
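
The surrounding `readAll` path builds a lazy iterator of per-file readers and flattens it with `flatMap { x => x }`, so each log file is opened only when the outer iterator reaches it and, with the wrapper in place, closed as soon as its records have been consumed. A hedged sketch of that composition, reusing the `ClosingIterator` stand-in above (the `readAllRecords` helper and the use of text files are hypothetical, for illustration only):

import scala.io.Source

// Hypothetical composition mirroring the patched readAll(): each file yields
// an iterator of records wrapped so its backing resource is released as soon
// as that file's records have been fully consumed.
def readAllRecords(files: Seq[String]): Iterator[String] = {
  files.iterator.map { file =>
    val source = Source.fromFile(file)  // resource that must be closed
    new ClosingIterator(source.getLines(), () => source.close())
  }.flatten                             // same effect as flatMap { x => x }
}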
