This repository was archived by the owner on Jan 9, 2020. It is now read-only.

Conversation

@mccheah commented Apr 6, 2017

Move packages around to split between v1 work and v2 work. This will make it easier to develop submission v2 incrementally without removing what we already have for v1.

@mccheah (Author) commented Apr 6, 2017

Note that this is merging into #212. @ash211 @foxish @erikerlandson

uploadedJarsBase64Contents: TarGzippedData,
uploadedFilesBase64Contents: TarGzippedData) extends SubmitRestProtocolRequest {
@JsonIgnore
override val messageType: String = s"kubernetes.v1.${Utils.getFormattedClassName(this)}"

Is this to support compatibility across a span of versions? Maybe a new REST service with an old launcher?

@mccheah (Author) replied:

This is necessary because SubmitRestProtocolRequest expects all of its subclasses to live in the package org.apache.spark.deploy.rest; deserialization fails otherwise. This is a workaround to keep the resolved package names consistent for now, until we remove this class entirely.
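To illustrate the workaround, here is a minimal sketch of the resolution problem being described. It assumes the base deserializer reconstructs a message's class name by prefixing org.apache.spark.deploy.rest onto messageType (an assumption based on mccheah's explanation above, not on the actual Spark source), and the class name KubernetesCreateSubmissionRequest is hypothetical:

```scala
// Hypothetical sketch: why messageType carries the "kubernetes.v1." prefix.
// Assumption: the base deserializer resolves message classes as
// s"org.apache.spark.deploy.rest.$messageType".
object MessageTypeResolution {
  val basePackage = "org.apache.spark.deploy.rest"

  // Mirrors the assumed resolution rule in the base protocol code.
  def resolveClassName(messageType: String): String =
    s"$basePackage.$messageType"

  def main(args: Array[String]): Unit = {
    // With only the simple class name, resolution lands in the wrong package
    // (the class actually lives under the kubernetes.v1 subpackage):
    println(resolveClassName("KubernetesCreateSubmissionRequest"))
    // => org.apache.spark.deploy.rest.KubernetesCreateSubmissionRequest

    // Embedding the subpackage into messageType, as the diff does with
    // s"kubernetes.v1.${Utils.getFormattedClassName(this)}", keeps the
    // resolved name correct:
    println(resolveClassName("kubernetes.v1.KubernetesCreateSubmissionRequest"))
    // => org.apache.spark.deploy.rest.kubernetes.v1.KubernetesCreateSubmissionRequest
  }
}
```

This keeps the v2 classes in their own subpackage while still satisfying a resolver that only knows about the v1 base package.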

@ash211 left a comment

LGTM though I'd rather merge the dependent PR before this one

@ash211 commented Apr 20, 2017

Will merge into branch-2.1-kubernetes later today after #212

@ash211 ash211 changed the base branch from submission-v2-file-server to branch-2.1-kubernetes April 21, 2017 06:16
…packages

 Conflicts:
	resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/rest/kubernetes/v2/ResourceStagingService.scala
	resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/rest/kubernetes/v2/ResourceStagingServiceImpl.scala
	resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/rest/kubernetes/v2/ResourceStagingServerSuite.scala
	resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/rest/kubernetes/v2/ResourceStagingServiceImplSuite.scala
@ash211 ash211 merged commit e24c4af into branch-2.1-kubernetes Apr 21, 2017
foxish pushed a commit that referenced this pull request Jul 24, 2017
* Staging server for receiving application dependencies.

* Move packages around to split between v1 work and v2 work

* Add unit test for file writing

* Remove unnecessary main

* Add back license header

* Minor fixes

* Fix integration test with renamed package for client. Fix scalastyle.

* Force json serialization to consider the different package.

* Revert extraneous log

* Fix scalastyle

* Remove getting credentials from the API

We still want to post them because in the future we can use these
credentials to monitor the API server and handle cleaning up the data
accordingly.

* Generalize to resource staging server outside of Spark

* Update code documentation

* Val instead of var

* Fix build

* Fix naming, remove unused import

* Move suites from integration test package to core

* Use TrieMap instead of locks

* Address comments

* Fix imports

* Change paths, use POST instead of PUT

* Use a resource identifier as well as a resource secret
ifilonenko pushed a commit to ifilonenko/spark that referenced this pull request Feb 26, 2019
puneetloya pushed a commit to puneetloya/spark that referenced this pull request Mar 11, 2019


3 participants