Benchmark: Benchmark API #5406
Conversation
maybe bench/abort to match other action names?
Fixed
I like this change in general. I think it will make it easier to report and debug performance issues by just providing a reproducible benchmark!
By design, t-digest is supposed to return accurate values for the minimum and maximum (when the quantile is 0 or 1), so there should be no need to track them separately.
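For illustration, a minimal sketch of that property using Ted Dunning's t-digest Java library; the factory method and compression value are assumptions about that library's API, not code from this PR:

```java
import com.tdunning.math.stats.TDigest;

public class TDigestMinMaxDemo {
    public static void main(String[] args) {
        // createDigest and the compression value 100 are assumptions
        // about the t-digest library's factory API.
        TDigest digest = TDigest.createDigest(100);
        for (double latencyMillis : new double[] {12.0, 7.5, 31.2, 9.9}) {
            digest.add(latencyMillis);
        }
        // The extreme quantiles are exact by design, so no separate
        // min/max bookkeeping is needed.
        System.out.println("min = " + digest.quantile(0.0)); // 7.5
        System.out.println("max = " + digest.quantile(1.0)); // 31.2
    }
}
```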
It didn't occur to me to ask for the zeroth quantile. I'll change this to get rid of min/max.
Fixed
This change is quite large, so in order to move forward I would vote to fix the correctness issues that have been reported, but to postpone feature requests such as configurable percentiles so as not to delay this PR. Since this is a new API, I would also like to mark this feature as experimental in the documentation for now (like we did for the completion suggester), so that we still have the opportunity to break backward compatibility in future releases as we get more feedback on the usage of this feature.
Proper naming here will make the comment redundant:
n -> count
m -> mean
s -> variance
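For context, n/m/s are the conventional variable names in Welford's single-pass algorithm for a running mean and variance. A hedged sketch with the suggested descriptive names (the class and method names are illustrative, not this PR's actual code):

```java
// Single-pass running statistics (Welford's method) with descriptive
// names instead of the classic n/m/s. Illustrative only.
final class SinglePassStats {
    private long count;             // number of samples seen (was: n)
    private double mean;            // running mean (was: m)
    private double sumSquaredDiffs; // sum of squared deviations (was: s)

    void addValue(double value) {
        count++;
        double delta = value - mean;
        mean += delta / count;
        sumSquaredDiffs += delta * (value - mean);
    }

    double mean() {
        return mean;
    }

    double variance() {
        // Sample variance; zero until at least two values have been added.
        return count > 1 ? sumSquaredDiffs / (count - 1) : 0.0;
    }
}
```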
Fixed
@aleph-zero are you working on an update here? I'd love to take a look once you've pushed an updated commit.
@jpountz How can I mark this as experimental as you suggest? What is the convention for this? Just put a note in the benchmark docs?
@aleph-zero Yes, a note at the beginning of the reference documentation for the benchmark API. You can look at the percentiles documentation for an example.
Can we move this to the new writeDoubleArray methods?
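For reference, a rough sketch of that change, assuming Elasticsearch's StreamOutput.writeDoubleArray and StreamInput.readDoubleArray helpers; the surrounding class and field names are invented for illustration:

```java
import java.io.IOException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

// Illustrative only: the class and field are stand-ins, not the
// PR's actual code.
class PercentileHolder {
    double[] percentiles;

    // Before: a hand-rolled length-prefixed loop.
    //   out.writeVInt(percentiles.length);
    //   for (double p : percentiles) {
    //       out.writeDouble(p);
    //   }

    // After: the dedicated helper writes the length and the values.
    void writeTo(StreamOutput out) throws IOException {
        out.writeDoubleArray(percentiles);
    }

    void readFrom(StreamInput in) throws IOException {
        percentiles = in.readDoubleArray();
    }
}
```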
Fixed
I left a bunch of smaller comments, but I agree with @jpountz: let's get those comments fixed and then move it in!
LGTM +1 to squash and push |
Add an API endpoint at /_bench for submitting, listing, and aborting
search benchmarks. This API can be used for timing search requests,
subject to various user-defined settings.
Benchmark results provide summary and detailed statistics on such
values as min, max, and mean time. Values are reported per-node so that
it is easy to spot outliers. Slow requests are also reported.
Long-running benchmarks can be viewed with a GET request, or aborted
with a POST request.
Benchmark results are optionally stored in an index for subsequent
analysis.
Closes #5407