This repository was archived by the owner on Jan 9, 2020. It is now read-only.
Merged
16 changes: 15 additions & 1 deletion docs/running-on-kubernetes.md
@@ -17,8 +17,10 @@ cluster, you may setup a test cluster on your local machine using
* You must have appropriate permissions to create and list [pods](https://kubernetes.io/docs/user-guide/pods/),
[ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configmap/) and
[secrets](https://kubernetes.io/docs/concepts/configuration/secret/) in your cluster. You can verify that
- you can list these resources by running `kubectl get pods` `kubectl get configmap`, and `kubectl get secrets` which
+ you can list these resources by running `kubectl get pods`, `kubectl get configmap`, and `kubectl get secrets` which
should give you a list of pods, configmaps, and secrets (if any), respectively.
+ * The service account or credentials used by the driver pods must also have appropriate
+ permissions for editing the pod spec.
* You must have a spark distribution with Kubernetes support. This may be obtained from the
[release tarball](https://github.com/apache-spark-on-k8s/spark/releases) or by
[building Spark with Kubernetes support](../resource-managers/kubernetes/README.md#building-spark-with-kubernetes-support).
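As a quick sanity check of the prerequisites above, one can probe the current credentials with `kubectl auth can-i`, which prints `yes` or `no` per query. This is a hedged sketch, not part of the PR; it assumes a reasonably recent kubectl and the default namespace:

```shell
# Verify the list/create permissions the docs require.
# Each command prints "yes" if the current credentials allow the action.
kubectl auth can-i list pods
kubectl auth can-i list configmaps
kubectl auth can-i list secrets
kubectl auth can-i create pods
```

Pass `--namespace <ns>` to check a non-default namespace, or `--as=system:serviceaccount:<ns>:<name>` to check on behalf of a service account.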
@@ -107,6 +109,18 @@ Finally, notice that in the above example we specify a jar with a specific URI w
the location of the example jar that is already in the Docker image. Using dependencies that are on your machine's local
disk is discussed below.

+ When Kubernetes [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) is enabled,
+ the `default` service account used by the driver may not have the pod `edit` permissions
+ required for launching executor pods. We recommend adding another service account, say `spark`,
+ with the necessary privileges. For example:
+
+     kubectl create serviceaccount spark
+     kubectl create clusterrolebinding spark-edit --clusterrole edit \
+       --serviceaccount default:spark --namespace default
+
+ With this, one can add `--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark` to
+ the spark-submit command line above to specify the service account to use.
+
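Putting the pieces together, an end-to-end invocation might look like the following sketch. The master URL, example class, and jar path are placeholders, not values from this PR; only `spark.kubernetes.authenticate.driver.serviceAccountName` comes from the change itself:

```shell
# Create the service account and grant it the built-in "edit" ClusterRole
# in the default namespace (same commands as above).
kubectl create serviceaccount spark
kubectl create clusterrolebinding spark-edit --clusterrole edit \
  --serviceaccount default:spark --namespace default

# Hypothetical submission: <api-server-host>:<port> and the jar path
# are placeholders for your cluster and distribution.
bin/spark-submit \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --master k8s://https://<api-server-host>:<port> \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  local:///opt/spark/examples/jars/spark-examples.jar
```

With RBAC enabled, omitting the `serviceAccountName` conf leaves the driver on the `default` account, and executor pod creation fails with a forbidden error.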
## Dependency Management

Application dependencies that are being submitted from your machine need to be sent to a **resource staging server**
@@ -44,6 +44,7 @@ private[spark] class DriverKubernetesCredentialsStep(
s"$APISERVER_AUTH_DRIVER_MOUNTED_CONF_PREFIX.$CLIENT_CERT_FILE_CONF_SUFFIX")
private val maybeMountedCaCertFile = submissionSparkConf.getOption(
s"$APISERVER_AUTH_DRIVER_MOUNTED_CONF_PREFIX.$CA_CERT_FILE_CONF_SUFFIX")
+  private val driverServiceAccount = submissionSparkConf.get(KUBERNETES_SERVICE_ACCOUNT_NAME)

override def configureDriver(driverSpec: KubernetesDriverSpec): KubernetesDriverSpec = {
val driverSparkConf = driverSpec.driverSparkConf.clone()
@@ -81,7 +82,16 @@
.endVolume()
.endSpec()
.build()
-    }.getOrElse(driverSpec.driverPod)
+    }.getOrElse(
+      driverServiceAccount.map { account =>
+        new PodBuilder(driverSpec.driverPod)
+          .editOrNewSpec()
+            .withServiceAccount(account)
+            .withServiceAccountName(account)
+            .endSpec()
+          .build()
+      }.getOrElse(driverSpec.driverPod)
+    )
val driverContainerWithMountedSecretVolume = kubernetesCredentialsSecret.map { secret =>
new ContainerBuilder(driverSpec.driverContainer)
.addNewVolumeMount()