
Commit bf04a39

andrewor14 authored and rxin committed
[SPARK-2392] Executors should not start their own HTTP servers
Executors currently start their own unused HTTP file servers. This is because we use the same SparkEnv class for both executors and drivers, and we do not distinguish between the two cases. In the longer term, we should separate out SparkEnv for the driver from SparkEnv for the executors.

Author: Andrew Or <[email protected]>

Closes apache#1335 from andrewor14/executor-http-server and squashes the following commits:

46ef263 [Andrew Or] Start HTTP server only on the driver
1 parent e6f7bfc commit bf04a39

File tree

1 file changed: 10 additions, 4 deletions

core/src/main/scala/org/apache/spark/SparkEnv.scala

Lines changed: 10 additions & 4 deletions
@@ -79,7 +79,7 @@ class SparkEnv (
 
   private[spark] def stop() {
     pythonWorkers.foreach { case(key, worker) => worker.stop() }
-    httpFileServer.stop()
+    Option(httpFileServer).foreach(_.stop())
     mapOutputTracker.stop()
     shuffleManager.stop()
     broadcastManager.stop()
@@ -228,9 +228,15 @@ object SparkEnv extends Logging {
 
     val cacheManager = new CacheManager(blockManager)
 
-    val httpFileServer = new HttpFileServer(securityManager)
-    httpFileServer.initialize()
-    conf.set("spark.fileserver.uri", httpFileServer.serverUri)
+    val httpFileServer =
+      if (isDriver) {
+        val server = new HttpFileServer(securityManager)
+        server.initialize()
+        conf.set("spark.fileserver.uri", server.serverUri)
+        server
+      } else {
+        null
+      }
 
     val metricsSystem = if (isDriver) {
       MetricsSystem.createMetricsSystem("driver", conf, securityManager)
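Because `httpFileServer` is now `null` on executors, the `stop()` path guards it with `Option(...)`: wrapping a possibly-null reference in `Option` yields `None` for `null`, so `foreach` becomes a no-op. A minimal standalone sketch of that pattern, using a hypothetical `FakeServer` class as a stand-in (not Spark's actual `HttpFileServer`):

```scala
// Sketch of the null-guard shutdown pattern from the stop() change above.
// FakeServer is a hypothetical stand-in, not Spark's HttpFileServer.
class FakeServer {
  var stopped = false
  def stop(): Unit = { stopped = true }
}

object NullGuardDemo {
  def main(args: Array[String]): Unit = {
    val onDriver: FakeServer = new FakeServer // driver case: server was created
    val onExecutor: FakeServer = null         // executor case: never created

    // Option(x) is None when x is null, so foreach calls stop() exactly
    // once on the driver and does nothing on the executor -- no NPE.
    Option(onDriver).foreach(_.stop())
    Option(onExecutor).foreach(_.stop())

    assert(onDriver.stopped)
    println("ok")
  }
}
```

The same effect could be had with an explicit `if (httpFileServer != null)` check; the `Option(...)` form is the more idiomatic Scala way to bridge a Java-style nullable reference into safe code.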

0 commit comments