
Set parameter search.max_buckets at request level #68504

@maosuhan

Description

Issue

In our production environment, a big query with high cardinality can take down the whole cluster. Since #57042, search.max_buckets only takes effect in the coordinator reduce phase, so at the shard level we can only rely on the circuit breaker to stop a running big query.
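
To illustrate (a hypothetical example; `logs`, `user_id`, and `url` are made-up index/field names), a nested terms aggregation over two high-cardinality fields can create millions of buckets on each shard long before the coordinator reduce phase runs:

```
GET /logs/_search
{
  "size": 0,
  "aggs": {
    "by_user": {
      "terms": { "field": "user_id", "size": 100000 },
      "aggs": {
        "by_url": {
          "terms": { "field": "url", "size": 1000 }
        }
      }
    }
  }
}
```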

When we hit the circuit breaker at the shard level, it takes a long time to exceed the memory limit, and in most cases the parent circuit breaker, which takes real memory into account, trips first. In our test it took 1 minute and 36 seconds to exceed the limit, with a huge impact on CPU and memory.

Our main concern with search.max_buckets is that we cannot calculate the size of each bucket, so we cannot estimate the memory footprint well.
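
For reference, the limit can currently only be changed cluster-wide through the dynamic cluster setting, which affects every search on the cluster rather than just the expensive one:

```
PUT /_cluster/settings
{
  "transient": {
    "search.max_buckets": 1000000
  }
}
```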

Feature

If search.max_buckets could be set as a request-level parameter, e.g. http://endpoint/index/_search?search_max_buckets=1000000, then users could control the limit on the client side, since they know how many terms and metrics each bucket contains and can adjust the value flexibly.
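
A minimal sketch of what the proposed request-level override could look like, reusing the hypothetical search_max_buckets parameter from the URL above (this parameter does not exist in Elasticsearch today):

```
# hypothetical parameter, not currently supported
GET /index/_search?search_max_buckets=1000000
{
  "size": 0,
  "aggs": {
    "by_user": { "terms": { "field": "user_id" } }
  }
}
```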

For example, a user who wants to break a big query within 10 seconds can set this to a small value, while a user who can tolerate a longer wait can raise it.
