From b6dd51a723eb1c700d3ec0e9a1e252e21064515b Mon Sep 17 00:00:00 2001
From: foxish
Date: Fri, 3 Mar 2017 14:43:30 -0800
Subject: [PATCH 1/2] Adding prerequisites

---
 docs/running-on-kubernetes.md | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index d024d427fea97..b238befa48a28 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -3,9 +3,15 @@ layout: global
 title: Running Spark on Kubernetes
 ---
 
-Support for running on [Kubernetes](https://kubernetes.io/) is available in experimental status. The feature set is
+Support for running on [Kubernetes](https://kubernetes.io/docs/whatisk8s/) is available in experimental status. The feature set is
 currently limited and not well-tested. This should not be used in production environments.
 
+## Prerequisites
+
+* You must have a running Kubernetes cluster with access configured to it using [kubectl](https://kubernetes.io/docs/user-guide/prereqs/). If you do not already ahve a working Kubernetes cluster, you may setup a test cluster on your local machine using [minikube](https://kubernetes.io/docs/getting-started-guides/minikube/).
+* You must have appropriate permissions to create and list pods & nodes on the cluster. You can verify that you can list these resources by running `kubectl get nodes` and `kubectl get pods` which should give you a list of nodes and pods (if any) respectively.
+* You must have an uncompressed spark distribution with Kubernetes support, or build one from [sources](https://github.com/apache-spark-on-k8s/spark).
+
 ## Setting Up Docker Images
 
 Kubernetes requires users to supply images that can be deployed into containers within pods. The images are built to
@@ -81,7 +87,7 @@ the driver container as a [secret volume](https://kubernetes.io/docs/user-guide/
 ### Kubernetes Clusters and the authenticated proxy endpoint
 
 Spark-submit also supports submission through the
-[local kubectl proxy](https://kubernetes.io/docs/user-guide/connecting-to-applications-proxy/). One can use the
+[local kubectl proxy](https://kubernetes.io/docs/user-guide/accessing-the-cluster/#using-kubectl-proxy). One can use the
 authenticating proxy to communicate with the api server directly without passing credentials to spark-submit. The
 local proxy can be started by running:

From 813f02bc4267c746d270701b3ac559fb25917edd Mon Sep 17 00:00:00 2001
From: foxish
Date: Fri, 3 Mar 2017 14:57:47 -0800
Subject: [PATCH 2/2] address comments

---
 docs/running-on-kubernetes.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index b238befa48a28..6199d45fd845e 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -8,9 +8,9 @@ currently limited and not well-tested. This should not be used in production env
 
 ## Prerequisites
 
-* You must have a running Kubernetes cluster with access configured to it using [kubectl](https://kubernetes.io/docs/user-guide/prereqs/). If you do not already ahve a working Kubernetes cluster, you may setup a test cluster on your local machine using [minikube](https://kubernetes.io/docs/getting-started-guides/minikube/).
-* You must have appropriate permissions to create and list pods & nodes on the cluster. You can verify that you can list these resources by running `kubectl get nodes` and `kubectl get pods` which should give you a list of nodes and pods (if any) respectively.
-* You must have an uncompressed spark distribution with Kubernetes support, or build one from [sources](https://github.com/apache-spark-on-k8s/spark).
+* You must have a running Kubernetes cluster with access configured to it using [kubectl](https://kubernetes.io/docs/user-guide/prereqs/). If you do not already have a working Kubernetes cluster, you may set up a test cluster on your local machine using [minikube](https://kubernetes.io/docs/getting-started-guides/minikube/).
+* You must have appropriate permissions to create and list [pods](https://kubernetes.io/docs/user-guide/pods/), [nodes](https://kubernetes.io/docs/admin/node/) and [services](https://kubernetes.io/docs/user-guide/services/) in your cluster. You can verify that you can list these resources by running `kubectl get nodes`, `kubectl get pods` and `kubectl get svc`, which should give you a list of nodes, pods and services (if any), respectively.
+* You must have an extracted Spark distribution with Kubernetes support, or build one from [source](https://github.com/apache-spark-on-k8s/spark).
 
 ## Setting Up Docker Images
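As a quick sanity check for the permissions bullet added in these patches, the verification step amounts to something like the following (a minimal sketch, assuming `kubectl` is already configured against the target cluster):

```bash
# Confirm the cluster is reachable with the current kubeconfig
kubectl cluster-info

# Verify list permissions for the resources named in the prerequisites;
# each command should print a table (possibly empty for pods and services)
kubectl get nodes
kubectl get pods
kubectl get svc
```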
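The second hunk of PATCH 1/2 links to the kubectl proxy documentation; the submission path it describes looks roughly like the sketch below. The port is kubectl's documented default of 8001, and the `k8s://` master URL, example class, and jar path are illustrative assumptions rather than text from this patch:

```bash
# Start the local authenticating proxy (listens on 127.0.0.1:8001 by default)
kubectl proxy &

# Point spark-submit at the proxy so credentials are not passed to it directly
# (the class name and jar path below are placeholders)
spark-submit \
  --deploy-mode cluster \
  --master k8s://http://127.0.0.1:8001 \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples.jar
```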