fix: prevent sub-batch 413's from blocking whole batch (#972)
* fix: prevent sub-batch 413's from infinitely retrying whole batch
The HTTP client breaks batches of actions into sub-batches of up to 20MB,
sends any single action larger than that as a batch-of-one, and zips the responses
together to emulate a single batch response from the Elasticsearch API.
When an individual HTTP request is rejected by Elasticsearch (or by an
intermediate proxy or load-balancer) with an HTTP/1.1 413, we now emulate
the error response instead of letting an exception abort the whole batch.
This ensures that only the offending events/actions are subject to retry logic.
Along the way, we improve logging at the `debug` level for sub-batches, and
emit clear `warn`-level logs with payload sizes when we hit HTTP 413 rejections.
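
A minimal Ruby sketch of this 413 handling and the `warn`-level logging, under assumed names (`send_sub_batch`, `BadResponseCodeError`, and the emulated response shape are illustrative, not the plugin's real internals):

```ruby
require 'logger'

LOGGER = Logger.new($stdout)

# Raised in this sketch when Elasticsearch (or a proxy/load-balancer)
# rejects a request with a non-2xx status.
class BadResponseCodeError < StandardError
  attr_reader :response_code
  def initialize(response_code)
    @response_code = response_code
    super("HTTP #{response_code}")
  end
end

# Send one sub-batch via the caller-supplied block. On HTTP 413 we emulate
# a per-item bulk error response so only these actions flow into the normal
# retry logic; any other failure still propagates as an exception.
def send_sub_batch(payload, action_count)
  yield payload                 # caller performs the actual HTTP _bulk call
rescue BadResponseCodeError => e
  raise unless e.response_code == 413
  LOGGER.warn("Bulk sub-batch rejected with HTTP 413; " \
              "payload bytes: #{payload.bytesize}, actions: #{action_count}")
  {
    "errors" => true,
    "items"  => Array.new(action_count) {
      { "index" => { "status" => 413, "error" => { "reason" => "payload too large" } } }
    }
  }
end
```

For example, `send_sub_batch(body, 2) { |p| raise BadResponseCodeError, 413 }` returns an emulated two-item error response rather than raising, so the rejected actions can be retried individually.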
* size batch by _decompressed_ payload size
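
A hedged sketch of what sizing by decompressed bytes can look like, assuming grouping happens before gzip compression (the method name and cap constant are illustrative):

```ruby
require 'zlib'

TARGET_BULK_BYTES = 20 * 1024 * 1024  # outbound sub-batch cap, pre-compression

# Group serialized bulk actions by their *uncompressed* byte size, then gzip
# each finished group. An action larger than the cap still travels alone.
def build_compressed_bodies(serialized_actions)
  bodies, current, current_bytes = [], +"", 0
  serialized_actions.each do |line|
    if current_bytes > 0 && current_bytes + line.bytesize > TARGET_BULK_BYTES
      bodies << Zlib.gzip(current)
      current, current_bytes = +"", 0
    end
    current << line
    current_bytes += line.bytesize      # decompressed size decides grouping
  end
  bodies << Zlib.gzip(current) unless current.empty?
  bodies
end
```

Grouping by the decompressed size keeps a well-compressed payload from slipping under the outbound cap while expanding past Elasticsearch's own limit once inflated.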
* tests: config elasticsearch to allow wildcard deletes
The default value of Elasticsearch's `action.destructive_requires_name` has
changed to `true` in elastic/elasticsearch#66908, which
causes our integration tests' wildcard deletes to fail. By specifying this
setting explicitly, we ensure the desired behaviour is selected.
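
As an illustration of the setting (not necessarily the exact change made to the test harness), the test cluster can opt back into wildcard deletes with:

```yaml
# elasticsearch.yml for the integration-test cluster (illustrative)
action.destructive_requires_name: false
```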
CHANGELOG.md: 5 additions & 0 deletions
@@ -1,3 +1,8 @@
+## 10.8.6
+- Fixed an issue where a single over-size event being rejected by Elasticsearch would cause the entire batch to be retried indefinitely. The oversize event will still be retried on its own and logging has been improved to include payload sizes in this situation [#972](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/972)
+- Fixed an issue with `http_compression => true` where a well-compressed payload could fit under our outbound 20MB limit but expand beyond Elasticsearch's 100MB limit, causing bulk failures. Bulk grouping is now determined entirely by the decompressed payload size [#823](https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/823)
+- Improved debug-level logging about bulk requests.
+
 ## 10.8.5
 - Feat: assert returned item count from _bulk [#997](https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/997)