@@ -129,21 +129,23 @@ public class Netty4HttpServerTransport extends AbstractLifecycleComponent implem
     public static Setting<Integer> SETTING_HTTP_NETTY_MAX_COMPOSITE_BUFFER_COMPONENTS =
         new Setting<>(SETTING_KEY_HTTP_NETTY_MAX_COMPOSITE_BUFFER_COMPONENTS, (s) -> {
             ByteSizeValue maxContentLength = SETTING_HTTP_MAX_CONTENT_LENGTH.get(s);
-            // Netty accumulates buffers containing data from all incoming network packets that make up one HTTP request in an instance of
-            // io.netty.buffer.CompositeByteBuf (think of it as a buffer of buffers). Once its capacity is reached, the buffer will iterate
-            // over its individual entries and put them into larger buffers (see io.netty.buffer.CompositeByteBuf#consolidateIfNeeded()
-            // for implementation details). We want to to resize that buffer because this leads to additional garbage on the heap and also
-            // increases the application's native memory footprint (as direct byte buffers hold their contents off-heap).
-            //
-            // With this setting we control the CompositeByteBuf's capacity (which is by default 1024, see
-            // io.netty.handler.codec.MessageAggregator#DEFAULT_MAX_COMPOSITEBUFFER_COMPONENTS). To determine a proper default capacity for
-            // that buffer, we need to consider that the upper bound for the size of HTTP requests is determined by `maxContentLength`. The
-            // number of buffers that are needed depend on how often Netty reads network packets which depends on the network type (MTU).
-            // We assume here that Elasticsearch receives HTTP requests via an Ethernet connection which has a MTU of 1500 bytes.
-            //
-            // Note that we are *not* pre-allocating any memory based on this setting but rather determine the CompositeByteBuf's capacity.
-            // The tradeoff is between less (but larger) buffers that are contained in the CompositeByteBuf and more (but smaller) buffers.
-            // With the default max content length of 100MB and a MTU of 1500 bytes we would allow 69905 entries.
+            /*
+             * Netty accumulates buffers containing data from all incoming network packets that make up one HTTP request in an instance of
+             * io.netty.buffer.CompositeByteBuf (think of it as a buffer of buffers). Once its capacity is reached, the buffer will iterate
+             * over its individual entries and put them into larger buffers (see io.netty.buffer.CompositeByteBuf#consolidateIfNeeded()
+             * for implementation details). We want to avoid resizing that buffer because it leads to additional garbage on the heap and also
+             * increases the application's native memory footprint (as direct byte buffers hold their contents off-heap).
+             *
+             * With this setting we control the CompositeByteBuf's capacity (which is by default 1024, see
+             * io.netty.handler.codec.MessageAggregator#DEFAULT_MAX_COMPOSITEBUFFER_COMPONENTS). To determine a proper default capacity for
+             * that buffer, we need to consider that the upper bound for the size of HTTP requests is determined by `maxContentLength`. The
+             * number of buffers that are needed depends on how often Netty reads network packets, which depends on the network type (MTU).
+             * We assume here that Elasticsearch receives HTTP requests via an Ethernet connection, which has an MTU of 1500 bytes.
+             *
+             * Note that we are *not* pre-allocating any memory based on this setting but rather determining the CompositeByteBuf's capacity.
+             * The tradeoff is between fewer (but larger) buffers contained in the CompositeByteBuf and more (but smaller) buffers.
+             * With the default max content length of 100MB and an MTU of 1500 bytes we would allow 69905 entries.
+             */
             long maxBufferComponentsEstimate = Math.round((double) (maxContentLength.getBytes() / MTU_ETHERNET.getBytes()));
             // clamp value to the allowed range
             long maxBufferComponents = Math.max(2, Math.min(maxBufferComponentsEstimate, Integer.MAX_VALUE));
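For reference, the arithmetic described in the new block comment can be reproduced in isolation. The standalone sketch below is not part of the commit; it hardcodes the default 100MB max content length and the 1500-byte Ethernet MTU (standing in for the SETTING_HTTP_MAX_CONTENT_LENGTH and MTU_ETHERNET values used above) and prints the component count after clamping it to the [2, Integer.MAX_VALUE] range, matching the 69905 figure mentioned in the comment.

public class CompositeBufferComponentsEstimate {
    public static void main(String[] args) {
        // Assumed defaults from the comment above: 100MB max content length, 1500-byte Ethernet MTU.
        long maxContentLengthBytes = 100L * 1024 * 1024; // 104857600 bytes
        long mtuBytes = 1500;

        // Roughly one buffer component per network packet, so about maxContentLength / MTU components are needed.
        long estimate = Math.round((double) maxContentLengthBytes / mtuBytes); // 69905

        // Clamp to the allowed range: at least 2 components, at most Integer.MAX_VALUE.
        long clamped = Math.max(2, Math.min(estimate, Integer.MAX_VALUE));

        System.out.println("max composite buffer components = " + (int) clamped); // prints 69905
    }
}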