Allow custom executor labels and annotations #321
Conversation
docs/running-on-kubernetes.md
Outdated
<td><code>spark.kubernetes.executor.labels</code></td>
<td>(none)</td>
<td>
  Custom labels that will be added to the executor pods. This should be a comma-separated list of label key-value pairs,
How do you put a label whose value contains an = or a comma? For example:
key1=value1,key2=value2=with-equals,key3=value3 ?
We haven't considered that in any of the cases where we use this format. Unfortunately I think that will have to be disallowed, but let me know if you can think of anything better.
We've been using this format for driver labels and annotations already.
Given that there's precedent, I'm fine with not supporting it. We probably will need to support values containing = and , at some point in the future though.
Could that be added with backslash escaping, or would it take a larger change in the format of spark.kubernetes.executor.annotations values?
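For concreteness, here is a rough sketch (not code from this PR; the object name is made up) of what backslash escaping could look like for these comma-separated key=value strings:

```scala
// Hypothetical sketch only: '=' and ',' inside values can be escaped with a
// backslash, e.g. "key1=a\,b,key2=c\=d".
object EscapedKeyValueParser {

  // Split on the delimiter, treating backslash-escaped characters as literals.
  // Escapes are preserved here and removed later by unescape().
  private def splitOnUnescaped(s: String, delimiter: Char): Seq[String] = {
    val parts = scala.collection.mutable.Buffer(new StringBuilder)
    var escaped = false
    s.foreach { c =>
      if (escaped) {
        parts.last.append(c)
        escaped = false
      } else if (c == '\\') {
        parts.last.append(c)
        escaped = true
      } else if (c == delimiter) {
        parts += new StringBuilder
      } else {
        parts.last.append(c)
      }
    }
    parts.map(_.toString).toSeq
  }

  private def unescape(s: String): String = s.replaceAll("""\\(.)""", "$1")

  def parse(raw: String): Map[String, String] = {
    splitOnUnescaped(raw, ',').filter(_.nonEmpty).map { pair =>
      splitOnUnescaped(pair, '=') match {
        case Seq(key, value) => unescape(key) -> unescape(value)
        case _ => throw new IllegalArgumentException(s"Malformed key-value pair: $pair")
      }
    }.toMap
  }
}
```

With this scheme, parse("key1=a\\,b,key2=c\\=d") would yield Map("key1" -> "a,b", "key2" -> "c=d").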
One possibility is to support any conf of the following format:
spark.kubernetes.driver.label.myLabel
where the value of this configuration is the value of the label.
Then in SparkConf we look for all configurations with the prefix spark.kubernetes.driver.label and construct the labels map accordingly. I think in such a scheme we can support any string without escaping.
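A minimal sketch of how that prefix-based lookup might work against a SparkConf (the helper name is made up; the real change would live wherever driver labels are resolved):

```scala
import org.apache.spark.SparkConf

// Hypothetical helper: collect every conf key under the given prefix into a map,
// so label values may freely contain '=' or ',' with no escaping needed.
def labelsWithPrefix(conf: SparkConf, prefix: String): Map[String, String] = {
  conf.getAll
    .filter { case (key, _) => key.startsWith(prefix) }
    .map { case (key, value) => key.stripPrefix(prefix) -> value }
    .toMap
}

// e.g. conf.set("spark.kubernetes.driver.label.myLabel", "a,b=c")
//      labelsWithPrefix(conf, "spark.kubernetes.driver.label.")
//      // => Map("myLabel" -> "a,b=c")
```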
And then we'd support both spark.kubernetes.driver.labels and spark.kubernetes.driver.label.myLabel formats simultaneously? How to handle conflicts of double-specified labels?
What I propose above would be a breaking change so that we don't have to support both at once. But we can also deduplicate fairly easily and throw an exception.
Ok that feels like a good path forward without completely compromising backcompat if we wanted. Let's punt that to later. #322
<tr>
<td><code>spark.kubernetes.executor.annotations</code></td>
<td>(none)</td>
<td>
Should this say "comma-separated list of annotation key-value pairs", mirroring the labels entry above?
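For illustration, the two settings documented above might be supplied like this (the label and annotation values here are invented):

```scala
import org.apache.spark.SparkConf

// Property names as documented above; values are examples only.
val conf = new SparkConf()
  .set("spark.kubernetes.executor.labels", "team=data-eng,environment=staging")
  .set("spark.kubernetes.executor.annotations",
    "prometheus.io/scrape=true,prometheus.io/port=9102")
```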
| s"Custom executor labels cannot contain $SPARK_APP_ID_LABEL as it is reserved for Spark.") | ||
| require( | ||
| !executorLabels.contains(SPARK_EXECUTOR_ID_LABEL), | ||
| s"Custom executor labels cannot contain $SPARK_EXECUTOR_ID_LABEL as it is reserved for Spark.") |
indentation doesn't match the require above
  org.apache.spark.internal.config.EXECUTOR_CLASS_PATH)
private val executorJarsDownloadDir = conf.get(INIT_CONTAINER_JARS_DOWNLOAD_LOCATION)

private val executorLabels = ConfigurationUtils.parseKeyValuePairs(
Separate issue, but do we have two versions of parseKeyValuePairs? There's also one in org.apache.spark.deploy.kubernetes.submit.Client.
Yeah Client should be using the utility class.
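For reference, a single shared helper along these lines could look roughly like the following (a sketch; the actual ConfigurationUtils in this repo may differ in signature and error handling):

```scala
object ConfigurationUtils {
  /**
   * Parse a comma-separated list of key=value pairs from an optional conf value.
   * configKey and keyValueType are only used to build a descriptive error message.
   */
  def parseKeyValuePairs(
      maybeKeyValues: Option[String],
      configKey: String,
      keyValueType: String): Map[String, String] = {
    maybeKeyValues.map { keyValues =>
      keyValues.split(",").map(_.trim).filter(_.nonEmpty).map { pair =>
        pair.split("=", 2) match {
          case Array(key, value) =>
            key -> value
          case _ =>
            throw new IllegalArgumentException(
              s"Malformed $keyValueType in $configKey: $pair. Expected key=value pairs" +
                " separated by commas.")
        }
      }.toMap
    }.getOrElse(Map.empty[String, String])
  }
}
```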
…park-on-k8s/spark into executor-annotations-and-labels
@erikerlandson quick sign off and then we can merge?

LGTM after #321 (comment) is addressed. I'm convinced about the labels, but quick question on the annotations - do you have a use-case driving the setting of custom annotations?

The comments were addressed in the latest push. Regarding the need for custom annotations - we have some custom controllers that watch the API server for pods with given annotations, and perform actions to manage those pods accordingly.
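To illustrate that kind of controller, here is a rough sketch using the fabric8 client (the annotation key and namespace are invented; this is not code from this PR or the project):

```scala
// Watch the API server for pods carrying a particular annotation and react to them.
import io.fabric8.kubernetes.api.model.Pod
import io.fabric8.kubernetes.client.{DefaultKubernetesClient, KubernetesClientException, Watcher}

object AnnotationWatcherExample {
  def main(args: Array[String]): Unit = {
    val client = new DefaultKubernetesClient()
    client.pods().inNamespace("default").watch(new Watcher[Pod] {
      override def eventReceived(action: Watcher.Action, pod: Pod): Unit = {
        val annotations = Option(pod.getMetadata.getAnnotations)
        // React only to pods carrying the (hypothetical) annotation we care about.
        if (annotations.exists(_.containsKey("example.com/managed-by"))) {
          println(s"$action: ${pod.getMetadata.getName}")
        }
      }
      override def onClose(cause: KubernetesClientException): Unit = ()
    })
  }
}
```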
* Allow custom executor labels and annotations
* Address comments.
* Fix scalastyle.