
Conversation

@yaooqinn
Member

What changes were proposed in this pull request?

Remove "in cluster mode" from the description of spark.executor.memoryOverhead

Why are the changes needed?

Fix a correctness issue in the documentation.

Does this PR introduce any user-facing change?

Yes. With the corrected description, users will no longer be misled about when spark.executor.memoryOverhead applies.

How was this patch tested?

Passed the GitHub Actions documentation generation job.
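For context, the setting under discussion is configured like any other Spark property; the value below is only an illustrative example, not a recommendation (plain numbers are interpreted as MiB, per the description this PR corrects):

```
# spark-defaults.conf (example; applies in both client and cluster deploy modes)
spark.executor.memoryOverhead  512
```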

@SparkQA

SparkQA commented Nov 10, 2020

Test build #130847 has finished for PR 30311 at commit 3aa18e8.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Nov 10, 2020

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/35455/

@SparkQA

SparkQA commented Nov 10, 2020

Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/35455/

@yaooqinn
Member Author

retest this please

@SparkQA

SparkQA commented Nov 10, 2020

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/35479/

Member

@srowen srowen left a comment


Yes I believe you're correct.

@SparkQA

SparkQA commented Nov 10, 2020

Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/35479/

@SparkQA

SparkQA commented Nov 10, 2020

Test build #130872 has finished for PR 30311 at commit 3aa18e8.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@yaooqinn
Member Author

retest this please

@SparkQA

SparkQA commented Nov 11, 2020

Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/35505/

@SparkQA

SparkQA commented Nov 11, 2020

Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/35505/

@SparkQA

SparkQA commented Nov 11, 2020

Test build #130899 has finished for PR 30311 at commit 3aa18e8.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@yaooqinn
Member Author

cc @cloud-fan @maropu too~

```diff
 private[spark] val EXECUTOR_MEMORY_OVERHEAD = ConfigBuilder("spark.executor.memoryOverhead")
-  .doc("The amount of non-heap memory to be allocated per executor in cluster mode, " +
-    "in MiB unless otherwise specified.")
+  .doc("The amount of non-heap memory to be allocated per executor, in MiB unless otherwise" +
```
Member


Does this change mean that non-cluster deploy modes also respect this value?

Member Author


Yes, the `client` deploy mode respects this value too.
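For readers wondering what this setting defaults to when unset: Spark derives the overhead from the executor heap size as max(10% of executor memory, 384 MiB). The sketch below is illustrative only (the function name is made up; the 0.10 factor and 384 MiB floor match Spark's documented defaults around this release):

```python
def default_executor_memory_overhead_mib(executor_memory_mib: int) -> int:
    """Sketch of the default for spark.executor.memoryOverhead when unset:
    the larger of 10% of the executor heap and a 384 MiB floor."""
    MEMORY_OVERHEAD_FACTOR = 0.10   # default overhead fraction
    MEMORY_OVERHEAD_MIN_MIB = 384   # minimum overhead, in MiB
    return max(int(executor_memory_mib * MEMORY_OVERHEAD_FACTOR),
               MEMORY_OVERHEAD_MIN_MIB)

# A 1 GiB heap gets the 384 MiB floor; an 8 GiB heap gets 10%, i.e. 819 MiB.
print(default_executor_memory_overhead_mib(1024))   # 384
print(default_executor_memory_overhead_mib(8192))   # 819
```

Either way, the computed or configured overhead applies regardless of deploy mode, which is why the "in cluster mode" qualifier was misleading.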

@maropu maropu closed this in 4335af0 Nov 12, 2020
maropu pushed a commit that referenced this pull request Nov 12, 2020
### What changes were proposed in this pull request?

Remove "in cluster mode" from the description of `spark.executor.memoryOverhead`

### Why are the changes needed?

Fix a correctness issue in the documentation.

### Does this PR introduce _any_ user-facing change?

Yes. With the corrected description, users will no longer be misled about when `spark.executor.memoryOverhead` applies.

### How was this patch tested?

Passed the GitHub Actions documentation generation job.

Closes #30311 from yaooqinn/minordoc.

Authored-by: Kent Yao <[email protected]>
Signed-off-by: Takeshi Yamamuro <[email protected]>
(cherry picked from commit 4335af0)
Signed-off-by: Takeshi Yamamuro <[email protected]>
@maropu
Member

maropu commented Nov 12, 2020

Thanks! Merged to master/branch-3.0. Could you open a PR to backport this into branch-2.4?

@yaooqinn
Member Author

yea, thanks for merging

@yaooqinn
Member Author

@maropu the description of spark.executor.memoryOverhead is already correct in http://spark.apache.org/docs/2.4.7/configuration.html, so a backport isn't strictly necessary even though the in-code doc string is wrong there.
