diff --git a/core/src/main/scala/org/apache/spark/internal/config/History.scala b/core/src/main/scala/org/apache/spark/internal/config/History.scala
index 14fb5ff075472..8f99908507ceb 100644
--- a/core/src/main/scala/org/apache/spark/internal/config/History.scala
+++ b/core/src/main/scala/org/apache/spark/internal/config/History.scala
@@ -162,7 +162,7 @@ private[spark] object History {
   val APPLY_CUSTOM_EXECUTOR_LOG_URL_TO_INCOMPLETE_APP =
     ConfigBuilder("spark.history.custom.executor.log.url.applyIncompleteApplication")
       .doc("Whether to apply custom executor log url, as specified by " +
-        "`spark.history.custom.executor.log.url`, to incomplete application as well. " +
+        s"${CUSTOM_EXECUTOR_LOG_URL.key}, to incomplete application as well. " +
         "Even if this is true, this still only affects the behavior of the history server, " +
         "not running spark applications.")
       .booleanConf
diff --git a/docs/monitoring.md b/docs/monitoring.md
index fc95f6a8d3191..3061c6761cdcf 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -159,23 +159,21 @@ Security options for the Spark History Server are covered more detail in the
     <td>false</td>
     <td>
       Indicates whether the history server should use kerberos to login. This is required
-      if the history server is accessing HDFS files on a secure Hadoop cluster. If this is
-      true, it uses the configs <code>spark.history.kerberos.principal</code> and
-      <code>spark.history.kerberos.keytab</code>.
+      if the history server is accessing HDFS files on a secure Hadoop cluster.
     </td>
   </tr>
   <tr>
     <td>spark.history.kerberos.principal</td>
     <td>(none)</td>
     <td>
-      Kerberos principal name for the History Server.
+      When <code>spark.history.kerberos.enabled=true</code>, specifies kerberos principal name for the History Server.
     </td>
   </tr>
   <tr>
     <td>spark.history.kerberos.keytab</td>
     <td>(none)</td>
     <td>
-      Location of the kerberos keytab file for the History Server.
+      When <code>spark.history.kerberos.enabled=true</code>, specifies location of the kerberos keytab file for the History Server.
     </td>
   </tr>
   <tr>
@@ -189,7 +187,7 @@ Security options for the Spark History Server are covered more detail in the
     <td>spark.history.fs.cleaner.interval</td>
     <td>1d</td>
     <td>
-      How often the filesystem job history cleaner checks for files to delete.
+      When <code>spark.history.fs.cleaner.enabled=true</code>, specifies how often the filesystem job history cleaner checks for files to delete.
       Files are deleted if at least one of two conditions holds.
       First, they're deleted if they're older than <code>spark.history.fs.cleaner.maxAge</code>.
       They are also deleted if the number of files is more than
@@ -201,14 +199,14 @@ Security options for the Spark History Server are covered more detail in the
     <td>spark.history.fs.cleaner.maxAge</td>
     <td>7d</td>
     <td>
-      Job history files older than this will be deleted when the filesystem history cleaner runs.
+      When <code>spark.history.fs.cleaner.enabled=true</code>, job history files older than this will be deleted when the filesystem history cleaner runs.
     </td>
   </tr>
   <tr>
     <td>spark.history.fs.cleaner.maxNum</td>
     <td>Int.MaxValue</td>
     <td>
-      The maximum number of files in the event log directory.
+      When <code>spark.history.fs.cleaner.enabled=true</code>, specifies the maximum number of files in the event log directory.
       Spark tries to clean up the completed attempt logs to maintain the log directory under this limit.
       This should be smaller than the underlying file system limit like
       `dfs.namenode.fs-limits.max-directory-items` in HDFS.
@@ -242,7 +240,7 @@ Security options for the Spark History Server are covered more detail in the
     <td>spark.history.fs.driverlog.cleaner.interval</td>
     <td>spark.history.fs.cleaner.interval</td>
     <td>
-      How often the filesystem driver log cleaner checks for files to delete.
+      When <code>spark.history.fs.driverlog.cleaner.enabled=true</code>, specifies how often the filesystem driver log cleaner checks for files to delete.
       Files are only deleted if they are older than <code>spark.history.fs.driverlog.cleaner.maxAge</code>.
@@ -250,7 +248,7 @@ Security options for the Spark History Server are covered more detail in the
     <td>spark.history.fs.driverlog.cleaner.maxAge</td>
     <td>spark.history.fs.cleaner.maxAge</td>
     <td>
-      Driver log files older than this will be deleted when the driver log cleaner runs.
+      When <code>spark.history.fs.driverlog.cleaner.enabled=true</code>, driver log files older than this will be deleted when the driver log cleaner runs.
     </td>
   </tr>
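
For reference, the pattern the History.scala hunk converges on is sketched below: the dependent entry's doc string interpolates the sibling entry's `.key` instead of hard-coding the config name, so the doc text cannot drift if the key is ever renamed. This is a minimal sketch against Spark's internal `ConfigBuilder` API (which is `private[spark]`); the `CUSTOM_EXECUTOR_LOG_URL` doc text and the defaults shown here are illustrative rather than copied from the source file.

```scala
// Sketch of the pattern inside org.apache.spark.internal.config.History.
// Doc text and defaults are illustrative; ConfigBuilder is private[spark].

// The referenced entry must be declared before any entry whose doc string
// uses it: object fields initialize in declaration order, so a forward
// reference would leave CUSTOM_EXECUTOR_LOG_URL null when .key is called.
val CUSTOM_EXECUTOR_LOG_URL = ConfigBuilder("spark.history.custom.executor.log.url")
  .doc("Custom URL pattern for executor logs, resolved by the history server.")
  .stringConf
  .createOptional

val APPLY_CUSTOM_EXECUTOR_LOG_URL_TO_INCOMPLETE_APP =
  ConfigBuilder("spark.history.custom.executor.log.url.applyIncompleteApplication")
    .doc("Whether to apply custom executor log url, as specified by " +
      // Interpolating the sibling entry's key keeps this doc string in sync
      // with the actual config name instead of repeating a string literal.
      s"${CUSTOM_EXECUTOR_LOG_URL.key}, to incomplete application as well. " +
      "Even if this is true, this still only affects the behavior of the history server, " +
      "not running spark applications.")
    .booleanConf
    .createWithDefault(true)
```

The docs/monitoring.md hunks apply the same idea in prose: each dependent property now names the enabling flag (for example `spark.history.fs.cleaner.enabled=true`) that must be set before it has any effect.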