[SPARK-7031][ThriftServer] let thrift server take SPARK_DAEMON_MEMORY and SPARK_DAEMON_JAVA_OPTS #5609
Conversation
Test build #30664 has finished for PR 5609 at commit
Test build #30674 has finished for PR 5609 at commit
Test build #30823 has finished for PR 5609 at commit
I'm probably missing basic stuff here but it jumps out at me that you have to special-case thrift server here. Is this really related to spark submit?
Yeah, I think so. If we take a look at start-thriftserver.sh, we will find that spark-daemon.sh uses spark-submit to submit org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.
That then goes through spark-class, and spark-class uses org.apache.spark.launcher.Main to resolve the args, which has:
if (className.equals("org.apache.spark.deploy.SparkSubmit")) {
  builder = new SparkSubmitCommandBuilder(args);
} else {
  builder = new SparkClassCommandBuilder(className, args);
}
So we start the Thrift Server via spark-submit, but at the same time the Thrift Server is a sort of daemon process, so it should take the daemon-related options too.
The alternative would be to make HiveThriftServer2 go through spark-class instead of spark-submit, but I think that would be a bigger change. While I'm not in love with the idea of peppering special cases inside SparkSubmitCommandBuilder, a single one doesn't sound terrible. If there are more in the future we can look at a better solution.
Totally agree with you.
BTW, I have tested on my cluster with the setting in spark-env.sh.
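The specific setting was elided above; as an illustration, the two daemon variables this PR wires up could be set in spark-env.sh like this (the values here are hypothetical, not the ones actually tested):

```shell
# spark-env.sh -- hypothetical values for illustration only
# Heap size for daemon processes, which this PR makes apply to HiveThriftServer2
export SPARK_DAEMON_MEMORY=2g
# Extra JVM options for daemon processes
export SPARK_DAEMON_JAVA_OPTS="-XX:+UseG1GC"
```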
Are these test changes needed? The tests have been passing without them.
No, they are not necessary; they were just left over while I was fixing the broken tests. I will revert them.
Jenkins, retest this please.
Jenkins, test this please.
Test build #728 has finished for PR 5609 at commit
@vanzin So although it is not the best idea, is it OK to go?
To avoid the copy & paste, I'd do:
String tsMemory = isThriftServer(mainClass) ? getenv("SPARK_DAEMON_MEMORY") : null;
memory = firstNonEmpty(tsMemory, ...);
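A self-contained sketch of that suggestion. The names isThriftServer and firstNonEmpty come from the comment above; everything else (the class, the resolveMemory wrapper, the default value) is hypothetical glue for illustration, not the actual SparkSubmitCommandBuilder code:

```java
// Hypothetical sketch of the suggested memory-resolution precedence:
// SPARK_DAEMON_MEMORY wins only when the main class is the thrift server.
public class Main {
    static final String THRIFT_SERVER_CLASS =
        "org.apache.spark.sql.hive.thriftserver.HiveThriftServer2";

    // Return the first candidate that is neither null nor empty.
    static String firstNonEmpty(String... candidates) {
        for (String c : candidates) {
            if (c != null && !c.isEmpty()) {
                return c;
            }
        }
        return null;
    }

    static boolean isThriftServer(String mainClass) {
        return THRIFT_SERVER_CLASS.equals(mainClass);
    }

    // daemonMemory stands in for getenv("SPARK_DAEMON_MEMORY");
    // driverMemory stands in for the usual driver-memory config chain.
    static String resolveMemory(String mainClass, String daemonMemory,
                                String driverMemory) {
        String tsMemory = isThriftServer(mainClass) ? daemonMemory : null;
        return firstNonEmpty(tsMemory, driverMemory, "1g"); // "1g" as a default
    }

    public static void main(String[] args) {
        // Thrift server: daemon memory takes precedence.
        System.out.println(resolveMemory(THRIFT_SERVER_CLASS, "2g", null)); // 2g
        // Any other app: daemon memory is ignored, falls through to the default.
        System.out.println(resolveMemory("org.example.App", "2g", null));   // 1g
    }
}
```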
Just a small nit, but otherwise looks good.
Jenkins, retest this please.
Test build #31373 has finished for PR 5609 at commit
Jenkins, retest this please.
Test build #31378 has finished for PR 5609 at commit
Looks like the failed test case is flaky. Jenkins, retest this please.
Test build #31384 has finished for PR 5609 at commit
A different test case failed. Jenkins, retest this please.
Test build #31389 has finished for PR 5609 at commit
Test build #31398 has finished for PR 5609 at commit
let thrift server take SPARK_DAEMON_MEMORY and SPARK_DAEMON_JAVA_OPTS

We should let the Thrift Server take these two parameters, as it is a daemon. And it is better to read driver-related configs as an app submitted by spark-submit.

https://issues.apache.org/jira/browse/SPARK-7031

Author: WangTaoTheTonic <[email protected]>

Closes apache#5609 from WangTaoTheTonic/SPARK-7031 and squashes the following commits:

8d3fc16 [WangTaoTheTonic] indent
035069b [WangTaoTheTonic] better code style
d3ddfb6 [WangTaoTheTonic] revert the unnecessary changes in suite
624e652 [WangTaoTheTonic] fix break tests
0565831 [WangTaoTheTonic] fix failed tests
4fb25ed [WangTaoTheTonic] let thrift server take SPARK_DAEMON_MEMORY and SPARK_DAEMON_JAVA_OPTS