Conversation

@codeboyyong
Contributor

All the changes are in the "org.apache.spark.deploy.yarn" package:
1) Throw exceptions in ClientArguments and ClientBase instead of exiting directly.
2) In Client's main method, exit with code 1 if an exception is caught, otherwise exit with code 0.

After the fix, users who integrate the Spark YARN client into their own applications will no longer have the whole application terminated when an argument is wrong or when the run finishes.
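Roughly, the intended pattern is sketched below; the object and method names are placeholders for illustration, not the actual classes in "org.apache.spark.deploy.yarn":

```scala
// Sketch of the pattern only, with placeholder names: library code throws
// instead of calling System.exit, and only the command-line entry point
// turns a failure into a process exit code.
object YarnClientSketch {

  // Before this change, a bad argument would call System.exit(1) directly,
  // killing any JVM that embedded the client. Now it throws instead.
  def parseArgs(args: Array[String]): Seq[String] = {
    if (args.isEmpty) {
      throw new IllegalArgumentException("Usage: Client --jar <path> --class <mainClass>")
    }
    args.toSeq
  }

  // Only the command-line entry point decides the process exit code.
  def main(args: Array[String]): Unit = {
    try {
      val parsed = parseArgs(args)
      println(s"Submitting application with arguments: $parsed")
      sys.exit(0)
    } catch {
      case e: Exception =>
        Console.err.println(e.getMessage)
        sys.exit(1)
    }
  }
}
```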

@AmplabJenkins

Can one of the admins verify this patch?

Member

Remove this empty line.

@dbtsai
Member

dbtsai commented Apr 23, 2014

Jenkins, add to whitelist.

@pwendell
Contributor

Jenkins, test this please.

@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

@AmplabJenkins

Merged build finished. All automated tests passed.

@AmplabJenkins

All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14350/

@sryza
Contributor

sryza commented Apr 30, 2014

This change makes sense to me.

pwendell pushed a commit to pwendell/spark that referenced this pull request May 12, 2014
…sdefined

Replace checks against None with Option.isDefined and Option.isEmpty in Scala code.

Propose replacing the Scala check "Option != None" with Option.isDefined and "Option === None" with Option.isEmpty.

I think using a method call where possible, rather than an operator plus an argument, makes the Scala code easier to read and understand.

Passes compile and tests.
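A tiny self-contained example of the style change described in that commit (the value is made up for illustration):

```scala
object OptionStyleExample extends App {
  // Made-up value purely for illustration.
  val maybePort: Option[Int] = Some(8080)

  // Before: comparing the Option against None with an operator.
  if (maybePort != None) println("port is set")

  // After: using the Option API directly, as the commit above prefers.
  if (maybePort.isDefined) println("port is set")
  if (maybePort.isEmpty) println("port is not set")
}
```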
@tgravescs
Contributor

I think this change would be good too. We should also look into what our exit codes are and how they would fit into workflow managers like Oozie.

@dbtsai
Member

dbtsai commented Jun 5, 2014

This looks good to me.

However, we still have more System.exit calls in other deployment code; we probably want to review and fix them too. This can be a good first step!

@mengxr
Contributor

mengxr commented Jun 10, 2014

@codeboyyong It is not mergeable now. Do you mind merging in the master branch and also creating a separate PR for branch-0.9?

@codeboyyong
Contributor Author

I have merged in master now. Will do 0.9 soon.

Member

Please add a space after =

Member

Move the . to the new line.

@dbtsai
Member

dbtsai commented Jun 12, 2014

@mengxr Do you think it's in good shape now? This is the only issue blocking us from using vanilla Spark. Thanks.

Contributor

try {

… directly.

All the changes are in the "org.apache.spark.deploy.yarn" package:
1) Add a ClientException that carries an exitCode.
2) Throw exceptions in ClientArguments and ClientBase instead of exiting directly.
3) In Client's main method, catch the exception and exit with its exitCode.

After the fix, if users integrate the Spark YARN client into their applications,
the application will not exit when an argument is wrong or when the run finishes.
The exit only happens when running from the command line.
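A minimal sketch of that exception-with-exit-code pattern; everything apart from the ClientException/exitCode idea is a placeholder, not the actual Spark code:

```scala
// Sketch only: an exception type that carries the desired exit code, so that
// library code never calls System.exit itself.
class ClientException(message: String, val exitCode: Int) extends Exception(message)

object ClientSketch {

  // Argument validation throws instead of exiting, so embedding applications survive.
  def validate(args: Array[String]): Unit = {
    if (args.isEmpty) {
      throw new ClientException("Missing arguments", 1)
    }
  }

  // Only the command-line entry point translates the exception into an exit code.
  def main(args: Array[String]): Unit = {
    try {
      validate(args)
      // ... submit the YARN application here ...
      sys.exit(0)
    } catch {
      case e: ClientException =>
        Console.err.println(e.getMessage)
        sys.exit(e.exitCode)
    }
  }
}
```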
@codeboyyong
Contributor Author

@mengxr, I made the change based on your comments.

@mengxr
Contributor

mengxr commented Jun 13, 2014

Jenkins, add to whitelist.

@AmplabJenkins

Merged build triggered.

@mengxr
Contributor

mengxr commented Jun 13, 2014

Jenkins, test this please.

@AmplabJenkins

Merged build started.

@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

@AmplabJenkins

Merged build finished. All automated tests passed.

@AmplabJenkins

Merged build finished. All automated tests passed.

@AmplabJenkins

All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15745/

@AmplabJenkins

All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15744/

@asfgit asfgit closed this in f95ac68 Jun 13, 2014
@mengxr
Contributor

mengxr commented Jun 13, 2014

@codeboyyong I've merged this. Could you please make a patch for branch-0.9? Thanks!

@codeboyyong
Contributor Author

Sure, will do this over the weekend.


pdeyhim pushed a commit to pdeyhim/spark-1 that referenced this pull request Jun 25, 2014
… directly.

All the changes are in the "org.apache.spark.deploy.yarn" package:
    1) Throw exceptions in ClientArguments and ClientBase instead of exiting directly.
    2) In Client's main method, exit with code 1 if an exception is caught, otherwise exit with code 0.

After the fix, users who integrate the Spark YARN client into their own applications will no longer have the whole application terminated when an argument is wrong or when the run finishes.

Author: John Zhao <[email protected]>

Closes apache#490 from codeboyyong/jira_1516_systemexit_inyarnclient and squashes the following commits:

138cb48 [John Zhao] [SPARK-1516]Throw exception in yarn clinet instead of run system.exit directly. All the changes is in  the package of "org.apache.spark.deploy.yarn": 1) Add a ClientException with an exitCode 2) Throws exception in ClinetArguments and ClientBase instead of exit directly 3) in Client's main method, catch exception and exit with the exitCode.
@dbtsai dbtsai deleted the jira_1516_systemexit_inyarnclient branch August 11, 2014 22:35
xiliu82 pushed a commit to xiliu82/spark that referenced this pull request Sep 4, 2014
… directly.

All the changes are in the "org.apache.spark.deploy.yarn" package:
    1) Throw exceptions in ClientArguments and ClientBase instead of exiting directly.
    2) In Client's main method, exit with code 1 if an exception is caught, otherwise exit with code 0.

After the fix, users who integrate the Spark YARN client into their own applications will no longer have the whole application terminated when an argument is wrong or when the run finishes.

Author: John Zhao <[email protected]>

Closes apache#490 from codeboyyong/jira_1516_systemexit_inyarnclient and squashes the following commits:

138cb48 [John Zhao] [SPARK-1516]Throw exception in yarn clinet instead of run system.exit directly. All the changes is in  the package of "org.apache.spark.deploy.yarn": 1) Add a ClientException with an exitCode 2) Throws exception in ClinetArguments and ClientBase instead of exit directly 3) in Client's main method, catch exception and exit with the exitCode.
andrewor14 pushed a commit to andrewor14/spark that referenced this pull request Jan 8, 2015
…sdefined

Replace checks against None with Option.isDefined and Option.isEmpty in Scala code.

Propose replacing the Scala check "Option != None" with Option.isDefined and "Option === None" with Option.isEmpty.

I think using a method call where possible, rather than an operator plus an argument, makes the Scala code easier to read and understand.

Passes compile and tests.
(cherry picked from commit f16c21e)

Signed-off-by: Patrick Wendell <[email protected]>
yifeih pushed a commit to yifeih/spark that referenced this pull request Mar 5, 2019
bzhaoopenstack pushed a commit to bzhaoopenstack/spark that referenced this pull request Sep 11, 2019
Comment out the Bazel-related roles; we use a cloud image for the KinD test.
Previously the job failed because of the Bazel installation, and the uploaded
cloud image already includes a successfully built Bazel. Let's skip the Bazel
installation and get the job running first. We can refactor the Bazel
role later if required.

Related: theopenlab/openlab#230
turboFei pushed a commit to turboFei/spark that referenced this pull request Nov 6, 2025
… the lock by default for CTAS to reduce HDFS operations (apache#490)