Documentation for the current state of the world #16

@@ -0,0 +1,224 @@

---
layout: global
title: Running Spark on Kubernetes
---

Support for running on [Kubernetes](https://kubernetes.io/) is available in experimental status. The feature set is
currently limited and not well-tested. This should not be used in production environments.

## Setting Up Docker Images

Kubernetes requires users to supply images that can be deployed into containers within pods. The images are built to
be run in a container runtime environment that Kubernetes supports. Docker is a container runtime environment that is
frequently used with Kubernetes, so Spark provides some support for working with Docker to get started quickly.

To use Spark on Kubernetes with Docker, images for the driver and the executors need to be built and published to an
accessible Docker registry. Spark distributions include the Docker files for the driver and the executor at
`dockerfiles/driver/Dockerfile` and `dockerfiles/executor/Dockerfile`, respectively. Use these Docker files to build the
Docker images, and then tag them with the registry that the images should be sent to. Finally, push the images to the
registry.

For example, if the registry host is `registry-host` and the registry is listening on port 5000:

    cd $SPARK_HOME
    docker build -t registry-host:5000/spark-driver:latest -f dockerfiles/driver/Dockerfile .
    docker build -t registry-host:5000/spark-executor:latest -f dockerfiles/executor/Dockerfile .
    docker push registry-host:5000/spark-driver:latest
    docker push registry-host:5000/spark-executor:latest

## Submitting Applications to Kubernetes

Kubernetes applications can be executed via `spark-submit`. For example, to compute the value of pi, assuming the images
are set up as described above:

    bin/spark-submit \
      --deploy-mode cluster \
      --class org.apache.spark.examples.SparkPi \
      --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
      --kubernetes-namespace default \
      --conf spark.executor.instances=5 \
      --conf spark.app.name=spark-pi \
      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
      examples/jars/spark-examples_2.11-2.2.0.jar

> **Review comment (on the `--master` line):** Btw is the https necessary?
>
> **Author:** It is for now, later we want to remove it and make https the default, but optionally the user can fill in
>
> **Review comment:** Right now these are supported:
> In #19 we want to additionally make this supported:

<!-- TODO master should default to https if no scheme is specified -->
The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
`spark.master` in the application's configuration, must be a URL with the format `k8s://<api_server_url>`. Prefixing the
master string with `k8s://` will cause the Spark application to launch on the Kubernetes cluster, with the API server
being contacted at `api_server_url`. The HTTP protocol must also be specified.
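
For example, the same cluster can be targeted either on the command line or through the configuration property (the API
server address below is a placeholder):

    # On the spark-submit command line
    --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>

    # Or, equivalently, in the application's configuration
    --conf spark.master=k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>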

Note that applications can currently only be executed in cluster mode, where the driver and its executors are running on
the cluster.

### Adding Other JARs

Spark allows users to provide dependencies that are bundled into the driver's Docker image, or that are on the local
disk of the submitter's machine. These two types of dependencies are specified via different configuration options to
`spark-submit`:

* Local jars provided by specifying the `--jars` command line argument to `spark-submit`, or by setting `spark.jars` in
  the application's configuration, will be treated as jars that are located on the *disk of the driver Docker
  container*. This only applies to jar paths that do not specify a scheme or that have the scheme `file://`. Paths with
  other schemes are fetched from their appropriate locations.
* Local jars provided by specifying the `--upload-jars` command line argument to `spark-submit`, or by setting
  `spark.kubernetes.driver.uploads.jars` in the application's configuration, will be treated as jars that are located on
  the *disk of the submitting machine*. These jars are uploaded to the driver docker container before executing the
  application.

> **Review comment:** "and are placed on the driver's classpath"

<!-- TODO support main resource bundled in the Docker image -->
* A main application resource path that does not have a scheme or that has the scheme `file://` is assumed to be on the
  *disk of the submitting machine*. This resource is uploaded to the driver docker container before executing the
  application. A remote path can still be specified and the resource will be fetched from the appropriate location.

> **Author:** It would be nice to be able to specify a main application resource on the container's disk as well. The
> main trouble here is how to specify that: do we create a new custom file scheme, like
>
> **Review comment:** for sake of usability, I believe we should support the
>
> **Author:** Yep - it's not particularly great that the API is to expect a magical prefix but I don't see a better option.
>
> **Member:** Docker isn't the only runtime that is supported (although it is the most common), so, we could opt for
> something neutral like
>
> **Review comment:** Good point @foxish -- I like
>
> **Author:** If we were to include

In all of these cases, the jars are placed on the driver's classpath, and are also sent to the executors. Below are some
examples of providing application dependencies.

To submit an application with both the main resource and two other jars living on the submitting user's machine:

    bin/spark-submit \
      --deploy-mode cluster \
      --class com.example.applications.SampleApplication \
      --master k8s://https://192.168.99.100 \
      --kubernetes-namespace default \
      --upload-jars /home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar \
      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
      /home/exampleuser/exampleapplication/main.jar

Note that since passing the jars through the `--upload-jars` command line argument is equivalent to setting the
`spark.kubernetes.driver.uploads.jars` Spark property, the above will behave identically to this command:

    bin/spark-submit \
      --deploy-mode cluster \
      --class com.example.applications.SampleApplication \
      --master k8s://https://192.168.99.100 \
      --kubernetes-namespace default \
      --conf spark.kubernetes.driver.uploads.jars=/home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar \
      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
      /home/exampleuser/exampleapplication/main.jar

> **Review comment:** new paragraph: note that
> If we can bold those lines in both indentations that would be nice too
>
> **Author:** Tricky to do both preformatted text and bolding in the same line; we'd have to invoke HTML for that. I
> think this is fine as is really.

To specify a main application resource that can be downloaded from an HTTP service, and if a plugin for that application
is located in the jar `/opt/spark-plugins/app-plugin.jar` on the docker image's disk:

    bin/spark-submit \
      --deploy-mode cluster \
      --class com.example.applications.PluggableApplication \
      --master k8s://https://192.168.99.100 \
      --kubernetes-namespace default \
      --jars /opt/spark-plugins/app-plugin.jar \
      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest \
      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
      http://example.com:8080/applications/sparkpluggable/app.jar

Note that since passing the jars through the `--jars` command line argument is equivalent to setting the `spark.jars`
Spark property, the above will behave identically to this command:

    bin/spark-submit \
      --deploy-mode cluster \
      --class com.example.applications.PluggableApplication \
      --master k8s://https://192.168.99.100 \
      --kubernetes-namespace default \
      --conf spark.jars=file:///opt/spark-plugins/app-plugin.jar \
      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest \
      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
      http://example.com:8080/applications/sparkpluggable/app.jar

> **Review comment:** same callout paragraph afterwards:

### Spark Properties

Below are some other common properties that are specific to Kubernetes. Most of the other configurations are the same
as in the other deployment modes. See the [configuration page](configuration.html) for more information on those.

<table class="table">
<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
<tr>
  <td><code>spark.kubernetes.namespace</code></td>
  <!-- TODO set default to "default" -->
  <td>(none)</td>

> **Review comment:** do the other docs use
>
> **Author:** We should probably just default to
>
> **Member:** +1 We should default to the

  <td>
    The namespace that will be used for running the driver and executor pods. Must be specified. When using
    <code>spark-submit</code> in cluster mode, this can also be passed to <code>spark-submit</code> via the
    <code>--kubernetes-namespace</code> command line argument.
  </td>
</tr>
<tr>
  <td><code>spark.kubernetes.driver.docker.image</code></td>
  <td><code>spark-driver:2.2.0</code></td>

> **Review comment:** let's change this from
>
> **Author:** Currently the code refers to
>
> **Author:** I'm ok with 2.2.0 for now but would be happy to hear suggestions in general. I think this will strictly
> depend on what we end up doing for publishing Docker images, if we publish at all. If we don't publish at all though
> then

  <td>
    Docker image to use for the driver. Specify this using the standard
    <a href="https://docs.docker.com/engine/reference/commandline/tag/">Docker tag</a> format.
  </td>
</tr>
<tr>
  <td><code>spark.kubernetes.executor.docker.image</code></td>
  <td><code>spark-executor:2.2.0</code></td>
  <td>
    Docker image to use for the executors. Specify this using the standard
    <a href="https://docs.docker.com/engine/reference/commandline/tag/">Docker tag</a> format.
  </td>
</tr>
<tr>
  <td><code>spark.kubernetes.submit.caCertFile</code></td>
  <td>(none)</td>
  <td>
    CA cert file for connecting to Kubernetes over SSL. This file should be located on the submitting machine's disk.
  </td>
</tr>
<tr>
  <td><code>spark.kubernetes.submit.clientKeyFile</code></td>
  <td>(none)</td>
  <td>
    Client key file for authenticating against the Kubernetes API server. This file should be located on the submitting
    machine's disk.
  </td>
</tr>
<tr>
  <td><code>spark.kubernetes.submit.clientCertFile</code></td>
  <td>(none)</td>
  <td>
    Client cert file for authenticating against the Kubernetes API server. This file should be located on the submitting
    machine's disk.
  </td>
</tr>
<tr>
  <td><code>spark.kubernetes.submit.serviceAccountName</code></td>
  <td><code>default</code></td>
  <td>
    Service account that is used when running the driver pod. The driver pod uses this service account when requesting
    executor pods from the API server.
  </td>
</tr>
<tr>
  <td><code>spark.kubernetes.driver.uploads.jars</code></td>
  <td>(none)</td>
  <td>
    Comma-separated list of jars to be sent to the driver and all executors when submitting the application in cluster
    mode. Refer to <a href="running-on-kubernetes.html#adding-other-jars">adding other jars</a> for more information.
  </td>
</tr>
<tr>
  <!-- TODO remove this functionality -->
  <td><code>spark.kubernetes.driver.uploads.driverExtraClasspath</code></td>

> **Author:** We probably shouldn't have this, I don't know how common using this will be.
>
> **Review comment:** I believe this is fine.

  <td>(none)</td>
  <td>
    Comma-separated list of jars to be sent to the driver only when submitting the application in cluster mode.

> **Review comment:** is that bit about only when submitting the application in cluster mode relevant? k8s only
> supports cluster mode now
>
> **Author:** In client mode when we do end up supporting it, this won't really apply there. I think this is fine even
> if redundant as we don't have to change this part of the docs down the road.

  </td>
</tr>
<tr>
  <td><code>spark.kubernetes.executor.memoryOverhead</code></td>
  <td>executorMemory * 0.10, with minimum of 384</td>
  <td>
    The amount of off-heap memory (in megabytes) to be allocated per executor. This is memory that accounts for things
    like VM overheads, interned strings, other native overheads, etc. This tends to grow with the executor size
    (typically 6-10%).
  </td>
</tr>
</table>
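
As an illustrative sketch of the authentication-related properties above (all file paths and the API server address are
placeholders, not values shipped with Spark), the earlier SparkPi submission could be pointed at a TLS-secured API
server like this:

    bin/spark-submit \
      --deploy-mode cluster \
      --class org.apache.spark.examples.SparkPi \
      --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
      --kubernetes-namespace default \
      --conf spark.kubernetes.submit.caCertFile=/path/to/ca.pem \
      --conf spark.kubernetes.submit.clientKeyFile=/path/to/client.key \
      --conf spark.kubernetes.submit.clientCertFile=/path/to/client.pem \
      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
      examples/jars/spark-examples_2.11-2.2.0.jar

Similarly, applying the memory overhead default from the table, an executor launched with `spark.executor.memory=4g`
would by default receive roughly max(0.10 × 4096 MB, 384 MB) ≈ 410 MB of additional off-heap memory, which can be
overridden explicitly via `spark.kubernetes.executor.memoryOverhead`.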

> **Review comment:** Put an example here where we try to simplify the spark-submit command as much as possible.
> Something like: For example, the first example above can be rewritten from: to with these contents in
>
> **Author:** Hm, I'm not sure what this extra example gets us that would be helpful for usage that we don't have
> already - can you elaborate here what the goal is? Having to include spark-defaults.conf as well only makes it seem
> even more confusing. I mainly modeled the examples off of here:
> https://spark.apache.org/docs/latest/submitting-applications.html
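
The simplification the reviewer appears to describe would move the static options into `conf/spark-defaults.conf`; a
hypothetical sketch under that assumption (the property values are the same placeholders used in the examples above,
and this file is not part of this PR):

    # conf/spark-defaults.conf (hypothetical)
    spark.master                              k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>
    spark.kubernetes.namespace                default
    spark.kubernetes.driver.docker.image      registry-host:5000/spark-driver:latest
    spark.kubernetes.executor.docker.image    registry-host:5000/spark-executor:latest

    # The SparkPi submission above would then shrink to:
    bin/spark-submit \
      --deploy-mode cluster \
      --class org.apache.spark.examples.SparkPi \
      --conf spark.executor.instances=5 \
      examples/jars/spark-examples_2.11-2.2.0.jar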

## Current Limitations

Running Spark on Kubernetes is currently an experimental feature. Some restrictions on the current implementation that
should be lifted in the future include:

* Applications can only use a fixed number of executors. Dynamic allocation is not supported (see the note below).
* Applications can only run in cluster mode.
* Only Scala and Java applications can be run.
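
Since dynamic allocation is unavailable, the executor count has to be chosen up front at submission time, e.g. (the
value is only illustrative):

    --conf spark.executor.instances=5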

> **Review comment:** cd $SPARK_HOME/dist ?
>
> **Reply:** This is documentation under the assumption that Spark was unpacked from a tarball, such as when it is
> downloaded from the Spark website.
>
> **Review comment:** We should also record dev-workflow docs somewhere, these aren't included in this PR just yet.
>
> **Reply:** I totally agree.