diff --git a/src/jekyll/running-on-kubernetes.md b/src/jekyll/running-on-kubernetes.md
index d71b9b2..36b4552 100644
--- a/src/jekyll/running-on-kubernetes.md
+++ b/src/jekyll/running-on-kubernetes.md
@@ -1,7 +1,6 @@
 ---
 layout: global
 title: Running Spark on Kubernetes
-toc: true
 ---
 
 Support for running on [Kubernetes](https://kubernetes.io/docs/whatisk8s/) is available in experimental status. The
@@ -24,14 +23,6 @@ should give you a list of pods and configmaps (if any) respectively.
 [release tarball](https://github.com/apache-spark-on-k8s/spark/releases) or by
 [building Spark with Kubernetes support](../resource-managers/kubernetes/README.md#building-spark-with-kubernetes-support).
 
-## Current Limitations
-
-Running Spark on Kubernetes is currently an experimental feature. Some restrictions on the current implementation that
-should be lifted in the future include:
-* Applications can only run in cluster mode.
-* Only Scala and Java applications can be run.
-
-
 ## Driver & Executor Images
 
 Kubernetes requires users to supply images that can be deployed into containers within pods. The images are built to
@@ -45,15 +36,15 @@ If you wish to use pre-built docker images, you may use the images published in
-  <td><code>kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-alpha.2</code></td>
+  <td><code>kubespark/spark-driver:v2.1.0-kubernetes-0.2.0</code></td>
-  <td><code>kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-alpha.2</code></td>
+  <td><code>kubespark/spark-executor:v2.1.0-kubernetes-0.2.0</code></td>
-  <td><code>kubespark/spark-init:v2.1.0-kubernetes-0.1.0-alpha.2</code></td>
+  <td><code>kubespark/spark-init:v2.1.0-kubernetes-0.2.0</code></td>
+<tr>
+  <td><code>spark.kubernetes.authenticate.resourceStagingServer.caCertFile</code></td>
+  <td>(none)</td>
+  <td>
+    Path to the CA cert file for connecting to the Kubernetes API server over TLS from the resource staging server.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.authenticate.resourceStagingServer.clientKeyFile</code></td>
+  <td>(none)</td>
+  <td>
+    Path to the client key file for authenticating against the Kubernetes API server from the resource staging server.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.authenticate.resourceStagingServer.clientCertFile</code></td>
+  <td>(none)</td>
+  <td>
+    Path to the client cert file for authenticating against the Kubernetes API server from the resource staging server.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.authenticate.resourceStagingServer.oauthToken</code></td>
+  <td>(none)</td>
+  <td>
+    OAuth token to use when authenticating against the Kubernetes API server from the resource staging server.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.authenticate.resourceStagingServer.oauthTokenFile</code></td>
+  <td>(none)</td>
+  <td>
+    Path to a file containing the OAuth token to use when authenticating against the Kubernetes API server from the
+    resource staging server. This may be used instead of
+    <code>spark.kubernetes.authenticate.resourceStagingServer.oauthToken</code>.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.authenticate.resourceStagingServer.useServiceAccountCredentials</code></td>
+  <td><code>true</code></td>
+  <td>
+    Whether or not to use a service account's credentials when the resource staging server contacts the Kubernetes
+    API server. If enabled, interactions with the API server will authenticate using a token located at
+    <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> and the CA certificate located at
+    <code>/var/run/secrets/kubernetes.io/serviceaccount/ca.crt</code>. Note that if
+    <code>spark.kubernetes.authenticate.resourceStagingServer.oauthTokenFile</code> is set, it takes precedence
+    over the usage of the service account token file. Also, if
+    <code>spark.kubernetes.authenticate.resourceStagingServer.caCertFile</code> is set, it takes precedence over
+    using the service account's CA certificate file. This generally should be set to true (the default value) when
+    the resource staging server is deployed as a Kubernetes pod, but should be set to false if the resource staging
+    server is deployed by other means (i.e., when running the staging server process outside of Kubernetes). The
+    resource staging server must have credentials that allow it to view API objects in any namespace.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.executor.memoryOverhead</code></td>
+  <td>executorMemory * 0.10, with minimum of 384</td>
+  <td>
+    The amount of off-heap memory (in megabytes) to be allocated per executor. This is memory that accounts for
+    things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the executor
+    size (typically 6-10%).
+  </td>
+</tr>
 <tr>
   <td><code>spark.kubernetes.executor.labels</code></td>
   <td>(none)</td>
   <td>
     Custom labels that will be added to the executor pods. This should be a comma-separated list of label key-value
-    pairs, where each label is in the format <code>key=value</code>.
+    pairs, where each label is in the format <code>key=value</code>. Note that Spark also adds its own labels to the
+    executor pods for bookkeeping purposes.
   </td>
 </tr>
+<tr>
+  <td><code>spark.kubernetes.executor.annotations</code></td>
+  <td>(none)</td>
+  <td>
+    Custom annotations that will be added to the executor pods. This should be a comma-separated list of annotation
+    key-value pairs, where each annotation is in the format <code>key=value</code>.
+  </td>
+</tr>
 <tr>
   <td><code>spark.kubernetes.driver.pod.name</code></td>
   <td>(none)</td>
   <td>
     Name of the driver pod.
   </td>
 </tr>
 <tr>
   <td><code>spark.kubernetes.docker.image.pullPolicy</code></td>
   <td><code>IfNotPresent</code></td>
   <td>
     Docker image pull policy used when pulling Docker images with Kubernetes.
   </td>
 </tr>
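The configuration keys in the hunks above compose as ordinary Spark `--conf` pairs. A minimal sketch of a submission that sets several of them; the master URL, token path, label/annotation values, and application jar are invented placeholders, and the `resourceStagingServer.*` auth keys are shown here only to illustrate the key/value format (in practice they configure the staging server's own deployment):

```shell
#!/bin/sh
# Build an illustrative spark-submit command line from the properties documented
# above. All concrete values (master URL, token path, labels, jar) are
# hypothetical placeholders, not values taken from this document.
submit_cmd() {
  printf '%s ' \
    bin/spark-submit \
    --deploy-mode cluster \
    --master 'k8s://https://example-master:8443' \
    --conf spark.kubernetes.authenticate.resourceStagingServer.useServiceAccountCredentials=false \
    --conf spark.kubernetes.authenticate.resourceStagingServer.oauthTokenFile=/etc/secrets/staging-token \
    --conf spark.kubernetes.executor.labels=team=data,env=staging \
    --conf spark.kubernetes.executor.annotations=owner=alice \
    --conf spark.kubernetes.docker.image.pullPolicy=IfNotPresent \
    'local:///opt/spark/examples/jars/spark-examples.jar'
  printf '\n'
}

submit_cmd
```

Note the precedence rules stated above: once `oauthTokenFile` is set, the service account token file is ignored regardless of `useServiceAccountCredentials`.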