[ML] fail inference processor more consistently on certain error types #81475
Conversation
Pinging @elastic/ml-core (Team:ML)
```diff
- if (unwrapped instanceof ElasticsearchStatusException) {
-     ElasticsearchStatusException ex = (ElasticsearchStatusException) unwrapped;
-     if (ex.status().equals(RestStatus.TOO_MANY_REQUESTS)) {
+ if (unwrapped instanceof ElasticsearchException ex) {
```
Java 17 🥳
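The comment above refers to pattern matching for `instanceof` (JEP 394, finalized in Java 16), which this code can use now that the codebase builds on Java 17: the type test and the cast collapse into a single expression with a typed binding. A minimal standalone sketch of the feature; the exception types below are illustrative, not the Elasticsearch classes:

```java
// Pattern matching for instanceof: the type test and the typed binding in one step.
public class PatternMatchDemo {
    public static void main(String[] args) {
        Object unwrapped = new IllegalStateException("boom");

        // Pre-Java-16 style: explicit test followed by a separate cast.
        if (unwrapped instanceof RuntimeException) {
            RuntimeException ex = (RuntimeException) unwrapped;
            System.out.println("old style: " + ex.getMessage());
        }

        // Java 16+ style: `ex` is bound, already typed, only if the test passes.
        if (unwrapped instanceof RuntimeException ex) {
            System.out.println("new style: " + ex.getMessage());
        }
    }
}
```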
```diff
- ElasticsearchStatusException ex = (ElasticsearchStatusException) unwrapped;
- if (ex.status().equals(RestStatus.TOO_MANY_REQUESTS)) {
+ if (unwrapped instanceof ElasticsearchException ex) {
+     if (FAILURE_STATUSES.contains(ex.status())) {
```
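To make the new check concrete, here is a hedged sketch of how a status-based failure set could be wired up. `FAILURE_STATUSES` appears in the diff above, but its contents and the surrounding types below are assumptions: simplified stand-ins for `ElasticsearchException` and `RestStatus`, not the real Elasticsearch source.

```java
import java.util.Set;

// Illustrative sketch only: which REST statuses should fail the ingest
// document outright instead of being recorded as a warning.
class InferenceFailureCheck {

    // Stand-in for org.elasticsearch.rest.RestStatus.
    enum RestStatus { TOO_MANY_REQUESTS, NOT_FOUND, CONFLICT, INTERNAL_SERVER_ERROR, OK }

    // Assumed contents; the real set lives in the inference processor code.
    static final Set<RestStatus> FAILURE_STATUSES = Set.of(
        RestStatus.TOO_MANY_REQUESTS,
        RestStatus.NOT_FOUND,
        RestStatus.CONFLICT,
        RestStatus.INTERNAL_SERVER_ERROR
    );

    static boolean shouldFail(Exception unwrapped) {
        // Pattern-matching instanceof binds `ex` only when the type test passes.
        if (unwrapped instanceof StatusException ex) {
            return FAILURE_STATUSES.contains(ex.status());
        }
        return false;
    }

    // Stand-in for ElasticsearchException, which exposes a REST status.
    static class StatusException extends RuntimeException {
        private final RestStatus status;
        StatusException(RestStatus status) { this.status = status; }
        RestStatus status() { return status; }
    }
}
```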
Looking at the changes in FillMaskProcessor, it makes more sense for the individual processors to decide which errors are exceptions and which are WarningInferenceResults. I think this logic can be removed, as the processor and the deployment manager know best what counts as an error.
@davidkyle this assumes that we catch and return a warning down at the task level. I can do that, but it's a larger refactor.
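To illustrate the alternative being discussed: pushing the decision down would mean each result processor (or the task layer) either throws for fatal conditions or hands back a warning-style result for recoverable, input-specific ones. A sketch of that shape with simplified stand-in types, not the real `InferenceResults`/`WarningInferenceResults` interfaces:

```java
// Illustrative only: a result processor deciding between a hard failure
// and a warning result, instead of a central status-based check.
interface InferenceResults {}

record WarningResult(String warning) implements InferenceResults {}
record NerResult(String entities) implements InferenceResults {}

class FillMaskStyleProcessor {
    InferenceResults processResult(String rawOutput) {
        if (rawOutput == null || rawOutput.isEmpty()) {
            // Recoverable, per-document problem: surface a warning on the
            // document rather than failing the whole pipeline.
            return new WarningResult("no predictions for the supplied text");
        }
        if (rawOutput.startsWith("ERROR")) {
            // Fatal condition (e.g. the native process died): fail the operation.
            throw new IllegalStateException("native inference process error: " + rawOutput);
        }
        return new NerResult(rawOutput);
    }
}
```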
davidkyle left a comment:
LGTM
…ference-failure-solidification
💔 Backport failed
You can use sqren/backport to backport manually.
[ML] fail inference processor more consistently on certain error types (elastic#81475)

This updates the following scenarios and causes NER/native inference to fail and not write a warning:

- missing vocabulary values
- missing model/deployment
- native process failed
- native process stopping
- request timed out
- misconfigured inference task update type
[ML] fail inference processor more consistently on certain error types (#81475) (#81546)

* [ML] fail inference processor more consistently on certain error types (#81475)

  This updates the following scenarios and causes NER/native inference to fail and not write a warning:
  - missing vocabulary values
  - missing model/deployment
  - native process failed
  - native process stopping
  - request timed out
  - misconfigured inference task update type

* fixing for backport
* fixing backport
* fixing backport