[SPARK-27157][DOCS] Add Executor level metrics to monitoring docs #24090
Conversation
docs/monitoring.md (outdated)

    <td>Elapsed time the JVM spent in garbage collection summed in this Executor.
    The value is expressed in milliseconds.</td>
    </tr>
    </tr>
two </tr>?
removed
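For reference, a sketch of what the fixed table row in docs/monitoring.md could look like with the duplicate closing tag removed (the metric-name cell is a hypothetical placeholder, since the quoted diff does not show that column):

```html
<tr>
  <!-- metric name below is a placeholder for illustration only -->
  <td>jvmGCTime</td>
  <td>Elapsed time the JVM spent in garbage collection summed in this Executor.
  The value is expressed in milliseconds.</td>
</tr>
```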
This is probably OK, but are these metrics things that Spark generates, or are they generated automatically by Ganglia et al.? That is, do we need to document them, or can we point at existing external docs?
Test build #4620 has finished for PR 24090 at commit
This looks to me like a useful addition to the monitoring documentation.
@srowen They are generated by Spark, see
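As a minimal sketch of the point above (these metrics come from Spark itself, not from an external agent), the snippet below pulls the per-executor GC time out of a JSON executor listing such as the one served by Spark's monitoring REST endpoint (`/api/v1/applications/<app-id>/executors`). The field names (`id`, `totalGCTime`, `memoryUsed`) follow the `ExecutorSummary` shape described in monitoring.md; the sample payload values are made up for illustration.

```python
import json

# Made-up sample of an executor listing; only the field names are meant to
# mirror Spark's ExecutorSummary, the values are illustrative.
SAMPLE_EXECUTORS = json.loads("""
[
  {"id": "driver", "totalGCTime": 120,  "memoryUsed": 52428800},
  {"id": "1",      "totalGCTime": 3400, "memoryUsed": 104857600}
]
""")

def gc_time_by_executor(executors):
    """Map executor id -> elapsed JVM GC time in milliseconds."""
    return {e["id"]: e["totalGCTime"] for e in executors}

print(gc_time_by_executor(SAMPLE_EXECUTORS))
```

Against a live application the same parsing would apply to the response body of the executors endpoint.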
Thank you @LucaCanali, I've added the metrics collection condition.
Test build #4630 has finished for PR 24090 at commit
What changes were proposed in this pull request?
A sub-task of SPARK-23206: add Executor level metrics to the monitoring docs.
How was this patch tested?
Built the docs with jekyll.