This repository was archived by the owner on Jan 9, 2020. It is now read-only.
forked from apache/spark
Prep for first release #89
Closed
Conversation
- Don't hold the raw secret bytes
- Add CPU limits and requests
The build process fails ScalaStyle checks otherwise.
* Use tar and gzip to archive shipped jars.
  * Address comments
  * Move files to resolve merge
* Use alpine and java 8 for docker images.
  * Remove installation of vim and redundant comment
* Error messages when the driver container fails to start.
  * Fix messages a bit
  * Use timeout constant
  * Delete the pod if it fails for any reason (not just timeout)
  * Actually set submit succeeded
  * Fix typo
* Documentation for the current state of the world.
  * Adding navigation links from other pages
  * Address comments, add TODO for things that should be fixed
  * Address comments, mostly making images section clearer
  * Virtual runtime -> container runtime
* Development workflow documentation for the current state of the world. (#20)
  * Address comments.
  * Clarified code change and added ticket link
* Added service name as prefix to executor pods to be able to tell them apart from kubectl output
  * Addressed comments
* Add kubernetes profile to travis yml file
  * Fix long lines in CompressionUtils.scala
* Improved the example commands in running-on-k8s document.
  * Fixed more example commands.
  * Fixed typo.
* Support custom labels on the driver pod.
  * Add integration test and fix logic.
  * Fix tests
  * Fix minor formatting mistake
  * Reduce unnecessary diff
* A number of small tweaks to the MVP.
  - Master protocol defaults to https if not specified
  - Removed upload driver extra classpath functionality
  - Added ability to specify main app resource with container:// URI
  - Updated docs to reflect all of the above
  - Add examples to Docker images, mostly for integration testing but could be useful for easily getting started without shipping anything
  * Add example to documentation.
* Support setting the driver pod launching timeout, and increase the default value from 30s to 60s. The current value of 30s is kind of short for pulling the image from a public docker registry plus the container/JVM start time.
  * Use a better name for the default timeout.
* Use "extraTestArgLine" to pass extra options to scalatest, because the "argLine" option of scalatest is set in pom.xml and we can't overwrite it from the command line. Ref #37
  * Added a default value for extraTestArgLine
  * Use a better name.
  * Added a tip for this in the dev docs.
* Fixed k8s integration test
  - Enable spark ui explicitly for in-process submit
  - Fixed some broken assertions in integration tests
  - Fixed a scalastyle error in SparkDockerImageBuilder.scala
  - Log into target/integration-tests.log like other modules
  * Fixed line length.
  * CR
* Create README to better describe project purpose
  * Add links to usage guide and dev docs
  * Minor changes
…it jars (#30)
  * Revamp ports and service setup for the driver.
    - Expose the driver-submission service on NodePort and contact that as opposed to going through the API server proxy
    - Restrict the ports that are exposed on the service to only the driver submission service when uploading content and then only the Spark UI after the job has started
  * Move service creation down and more thorough error handling
  * Fix missed merge conflict
  * Add braces
  * Fix bad merge
  * Address comments and refactor run() more. Method nesting was getting confusing so pulled out the inner class and removed the extra method indirection from createDriverPod()
  * Remove unused method
* Support SSL configuration for the driver application submission (#49)
  * Support SSL when setting up the driver. The user can provide a keyStore to load onto the driver pod and the driver pod will use that keyStore to set up SSL on its server.
  * Clean up SSL secrets after finishing submission. We don't need to persist these after the pod has them mounted and is running already.
  * Fix compilation error
  * Revert image change
  * Address comments
  * Programmatically generate certificates for integration tests.
  * Address comments
  * Resolve merge conflicts
  * Fix bad merge
  * Remove unnecessary braces
  * Fix compiler error
* Extract constants and config into separate file. Launch => Submit.
  * Address comments
  * A small shorthand
  * Refactor more ThreadUtils
  * Fix scalastyle, use cached thread pool
  * Tiny Scala style change
* Retry the submit-application request to multiple nodes.
  * Fix doc style comment
  * Check node unschedulable, log retry failures
* Allow adding arbitrary files
  * Address comments and add documentation
* Introduce blocking submit to kubernetes by default. Two new configuration settings:
  - spark.kubernetes.submit.waitAppCompletion
  - spark.kubernetes.report.interval
  * Minor touchups
  * More succinct logging for pod state
  * Fix import order
  * Switch to watch-based logging
  * Spaces in comma-joined volumes, labels, and containers
  * Use CountDownLatch instead of SettableFuture
  * Match parallel ConfigBuilder style
  * Disable logging in fire-and-forget mode, which is enabled with spark.kubernetes.submit.waitAppCompletion=false (default: true)
  * Additional log line for when application is launched
  * Minor wording changes
  * More logging
  * Drop log to DEBUG
Since the example jobs are patched to never finish.
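The blocking-submit behavior described above (report pod state every interval until the application completes, governed by spark.kubernetes.submit.waitAppCompletion and spark.kubernetes.report.interval) can be sketched as a simple polling loop. This is a stand-alone illustration, not the real implementation: `get_pod_phase` is a hypothetical placeholder for the kubectl/API call, and the actual feature is watch-based inside spark-submit.

```shell
# Sketch of a blocking submit: print the pod phase once per interval and stop
# when a terminal phase is reached. `get_pod_phase` must be supplied by the
# caller (hypothetical stand-in for a Kubernetes API query).
wait_app_completion() {
  interval="$1"
  while :; do
    phase=$(get_pod_phase)
    echo "Application status: $phase"
    case "$phase" in
      Succeeded|Failed) break ;;
    esac
    sleep "$interval"
  done
}
```

With waitAppCompletion=false (fire-and-forget mode), the real submit skips this loop entirely and returns as soon as the driver pod is created.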
Author
Closing -- will redo this after code freeze. Note that changing the branch name and its base will require all in-progress PRs to adjust their destination branch.
Author
An update on this as well -- I think there's a use case for continuing to do development while a release is being stabilized (the time between code freeze and release of a new version). Likely we should have a …
Merged
Target state:

- The k8s work is carried on top of branch-2.1 (before the changes are in a 2.1.x release)
- The first release is tagged v2.1.0-kubernetes-0.1.0 on the branch-2.1-kubernetes branch
- Later releases are tagged v2.X.Y-kubernetes-0.1.1 or v2.X.Y-kubernetes-0.2.0 depending on the magnitude of the release, eventually reaching v2.X.Y-kubernetes-1.0.0
- When Spark 2.1.1 is released, merge the v2.1.1 tag into branch-2.1-kubernetes in a PR (with code review) and continue work in the new branch
- When Spark 2.2.0 is released, branch-2.2-kubernetes branches off the v2.2.0 tag and cherry picks (with git rebase) the k8s patchset onto the new branch
- Development happens in branch-2.1-kubernetes, not in branch-2.1
- Keeping both branch-2.2-kubernetes and branch-2.1-kubernetes branches allows us to support both Spark 2.1.x and 2.2.x if we choose (we're not yet committing to support multiple Spark versions)

This PR created via:

git checkout v2.1.0
git checkout -b branch-2.1-kubernetes
git checkout k8s-support-alternate-incremental
git checkout -b prep-for-alpha-release
git rebase --onto branch-2.1-kubernetes origin/master prep-for-alpha-release
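The final `git rebase --onto <newbase> <upstream> <branch>` step is what transplants the k8s patchset onto the freshly cut release branch. A minimal sketch of the same pattern in a throwaway repository (the file names and commit messages here are made up for illustration; the branch and tag names mirror the plan above):

```shell
# Demonstrate the rebase --onto pattern from the PR description in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci

# Simulate the upstream release tag.
echo base > base.txt
git add base.txt
git commit -q -m "base"
git tag v2.1.0

# Cut the release branch from the tag and give it a commit of its own.
git checkout -q -b branch-2.1-kubernetes v2.1.0
echo prep > prep.txt
git add prep.txt
git commit -q -m "release prep"

# A work-in-progress branch that also started from the tag.
git checkout -q -b prep-for-alpha-release v2.1.0
echo k8s > k8s.txt
git add k8s.txt
git commit -q -m "k8s patch"

# Transplant the commits in v2.1.0..prep-for-alpha-release onto the release branch,
# so the WIP branch now contains "release prep" beneath "k8s patch".
git rebase -q --onto branch-2.1-kubernetes v2.1.0 prep-for-alpha-release
git log --format=%s
```

The three arguments read as: replay the commits reachable from `prep-for-alpha-release` but not from `v2.1.0`, placing them on top of `branch-2.1-kubernetes`. In the actual PR, `origin/master` plays the role of the upstream bound.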