20 changes: 20 additions & 0 deletions core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
@@ -17,11 +17,15 @@

package org.apache.spark.executor

import java.io.{IOException, ObjectInputStream}
import java.util.concurrent.ConcurrentHashMap

import scala.collection.mutable.ArrayBuffer

import org.apache.spark.annotation.DeveloperApi
import org.apache.spark.executor.DataReadMethod.DataReadMethod
import org.apache.spark.storage.{BlockId, BlockStatus}
import org.apache.spark.util.Utils

/**
* :: DeveloperApi ::
@@ -210,10 +214,26 @@ class TaskMetrics extends Serializable {
  private[spark] def updateInputMetrics(): Unit = synchronized {
    inputMetrics.foreach(_.updateBytesRead())
  }

  @throws(classOf[IOException])
  private def readObject(in: ObjectInputStream): Unit = Utils.tryOrIOException {
    in.defaultReadObject()
    // Intern the hostname through the cache: the number of distinct hostnames is
    // on the order of the number of nodes in the cluster, so sharing one cached
    // String per host reduces the object count and alleviates GC overhead.
    _hostname = TaskMetrics.getCachedHostName(_hostname)
  }
}

private[spark] object TaskMetrics {
  private val hostNameCache = new ConcurrentHashMap[String, String]()
Member: Can this cache get large? Meaning, should it be a weak-ref map?

Contributor (Author): I think the size of the cache will at most be as large as the cluster size, so a weak-ref map may not be necessary, from my understanding.

Contributor: I think it's fine for now. It may be a problem for really long-running applications with lots of node churn (executors are dead and won't be needed but still occupy this hashmap), but that's a really far-fetched problem.

Contributor: Yeah, this seems fine to me for now. I can imagine some pathological scenarios where this map could grow very large, but, as TD said, I think we'd only see this become a serious problem with extreme scale + duration + churn.
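For reference, a minimal sketch of the weak-reference alternative discussed above, assuming Guava is available on the classpath (Spark core has historically depended on it). It swaps the ConcurrentHashMap for Guava's weak interner, so a cached hostname can be garbage-collected once nothing else references the canonical instance. This is illustrative only, not part of the PR:

import com.google.common.collect.Interners

// Sketch only: the weak-ref variant raised in review, not what this PR merges.
// Interners.newWeakInterner() holds entries via weak references, so hostnames
// of dead executors do not accumulate in the cache indefinitely.
private val hostNameInterner = Interners.newWeakInterner[String]()

def getCachedHostName(host: String): String = hostNameInterner.intern(host)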


  def empty: TaskMetrics = new TaskMetrics

  def getCachedHostName(host: String): String = {
    val canonicalHost = hostNameCache.putIfAbsent(host, host)
    if (canonicalHost != null) canonicalHost else host
  }
}
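To illustrate the putIfAbsent semantics getCachedHostName relies on (a standalone sketch; the names below are made up): the first call stores the argument and returns null, so the caller keeps its own instance as the canonical one; later calls return the cached instance, letting equal duplicates be garbage-collected.

import java.util.concurrent.ConcurrentHashMap

val cache = new ConcurrentHashMap[String, String]()

def intern(s: String): String = {
  val canonical = cache.putIfAbsent(s, s) // null if s was absent, else the cached value
  if (canonical != null) canonical else s
}

val a = new String("host-1") // two equal but distinct String instances
val b = new String("host-1")
assert(intern(a) eq a)       // first call: a becomes the canonical instance
assert(intern(b) eq a)       // later calls return the cached instance, not b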
