@aleph-zero
Contributor

Add an API endpoint at /_bench for submitting, listing, and aborting
search benchmarks. This API can be used for timing search requests,
subject to various user-defined settings.

Benchmark results provide summary and detailed statistics on such
values as min, max, and mean time. Values are reported per-node so that
it is easy to spot outliers. Slow requests are also reported.

Long-running benchmarks can be viewed with a GET request, or aborted
with a POST request.

Benchmark results are optionally stored in an index for subsequent
analysis.

Closes #5407
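
For illustration, a minimal sketch of driving the endpoints from a client, assuming a node on localhost:9200; the submission body and the exact abort path are hypothetical, since the request format isn't spelled out in this thread (only the /_bench path and the GET/POST verbs are):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse.BodyHandlers;

public class BenchApiSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Submit a benchmark. The JSON body here is hypothetical; the real
        // format is defined by the benchmark API, not by this sketch.
        HttpRequest submit = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_bench"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(
                        "{\"name\": \"my_benchmark\"}"))
                .build();
        System.out.println(client.send(submit, BodyHandlers.ofString()).body());

        // View long-running benchmarks with a GET request.
        HttpRequest list = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_bench"))
                .GET()
                .build();
        System.out.println(client.send(list, BodyHandlers.ofString()).body());

        // Abort a running benchmark with a POST request (path assumed,
        // following the bench/abort naming discussed below).
        HttpRequest abort = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_bench/abort/my_benchmark"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        System.out.println(client.send(abort, BodyHandlers.ofString()).body());
    }
}
```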

Contributor

maybe bench/abort to match other action names?

Contributor Author

Fixed

@jpountz
Contributor

jpountz commented Mar 14, 2014

I like this change in general. I think it will make it easier to report and debug performance issues by just providing a reproducible benchmark!

Contributor

By design, t-digest is supposed to return exact values for the minimum and maximum (when the quantile is 0 or 1), so there should be no need to track them separately.
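
A minimal sketch of that behavior, assuming the com.tdunning t-digest library (the class and factory names are that library's, not this PR's):

```java
import com.tdunning.math.stats.TDigest;

public class TDigestExtremes {
    public static void main(String[] args) {
        // 100 is a commonly used compression setting; higher values
        // trade memory for accuracy at interior quantiles.
        TDigest digest = TDigest.createMergingDigest(100);
        for (double millis : new double[] {12.0, 3.5, 99.1, 42.0, 7.7}) {
            digest.add(millis);
        }
        // The extreme centroids are kept exact, so quantiles 0 and 1
        // recover the true min and max without separate tracking.
        System.out.println("min    = " + digest.quantile(0.0)); // 3.5
        System.out.println("max    = " + digest.quantile(1.0)); // 99.1
        System.out.println("median = " + digest.quantile(0.5));
    }
}
```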

Contributor Author

It didn't occur to me to ask for the zero-th quantile. I'll change this to get rid of min/max.

Contributor Author

Fixed

@jpountz
Contributor

jpountz commented Mar 20, 2014

This change is quite large, so in order to move forward I would vote to fix the correctness issues that have been reported, but to postpone feature requests such as configurable percentiles so as not to delay this PR.

Since this is a new API, I would also like to mark this feature as experimental in the documentation for now (like we did for the completion suggester) so that we still have the opportunity to break backward compatibility in future releases as we get more feedback on the usage of this feature.

Contributor

Proper naming here will make the comment redundant:

n -> count
m -> mean
s -> variance
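
For context, n, m, and s are the traditional variable names from Welford's online variance algorithm; a minimal sketch with the suggested descriptive names (the class itself is hypothetical, not the PR's code):

```java
// Welford's online algorithm for a running mean and variance.
public class RunningStats {
    private long count;         // was: n
    private double mean;        // was: m
    private double sumSqDeltas; // was: s (sum of squared deviations;
                                // the variance is derived from it below)

    public void add(double value) {
        count++;
        double delta = value - mean;
        mean += delta / count;
        sumSqDeltas += delta * (value - mean);
    }

    public long count()  { return count; }
    public double mean() { return mean; }

    // Sample variance; returns 0 until there are at least two samples.
    public double variance() {
        return count > 1 ? sumSqDeltas / (count - 1) : 0.0;
    }
}
```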

Contributor Author

Fixed

@s1monw
Contributor

s1monw commented Mar 26, 2014

@aleph-zero are you working on an update here? I'd love to look at it once you've added an updated commit.

@aleph-zero
Contributor Author

@jpountz How can I mark this as experimental as you suggest? What is the convention for this? Just put a note in the benchmark docs?

@jpountz
Contributor

jpountz commented Apr 1, 2014

@aleph-zero Yes, a note at the beginning of the reference documentation for the benchmark API. You can look at the percentiles documentation for example.

Contributor

Can we move this to the new writeDoubleArray methods?
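
A sketch of the suggested change, assuming Elasticsearch's StreamOutput.writeDoubleArray and StreamInput.readDoubleArray helpers; the surrounding class and field are hypothetical:

```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

public class PercentilesHolder {
    private double[] percentiles;

    public void writeTo(StreamOutput out) throws IOException {
        // Instead of a hand-rolled length-prefixed loop, e.g.
        //   out.writeVInt(percentiles.length);
        //   for (double p : percentiles) { out.writeDouble(p); }
        // the dedicated helper does the same in one call.
        out.writeDoubleArray(percentiles);
    }

    public void readFrom(StreamInput in) throws IOException {
        percentiles = in.readDoubleArray();
    }
}
```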

Contributor Author

Fixed

@s1monw
Contributor

s1monw commented Apr 8, 2014

I left a bunch of smaller comments, but I agree with @jpountz: let's get those comments fixed and then move it in!

@s1monw
Contributor

s1monw commented Apr 9, 2014

LGTM +1 to squash and push
