Commit 091a3bc

jdamato-fsly authored and NipaLocal committed
docs: networking: Describe irq suspension
Describe irq suspension, the epoll ioctls, and the tradeoffs of using
different gro_flush_timeout values.

Signed-off-by: Joe Damato <[email protected]>
Co-developed-by: Martin Karsten <[email protected]>
Signed-off-by: Martin Karsten <[email protected]>
Reviewed-by: Bagas Sanjaya <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
1 parent 6e70519 commit 091a3bc

File tree

1 file changed: +130 -2 lines changed


Documentation/networking/napi.rst

Lines changed: 130 additions & 2 deletions
@@ -192,6 +192,28 @@ is reused to control the delay of the timer, while
``napi_defer_hard_irqs`` controls the number of consecutive empty polls
before NAPI gives up and goes back to using hardware IRQs.

The above parameters can also be set on a per-NAPI basis using netlink via
netdev-genl. This can be done programmatically in a user application or by
using a script included in the kernel source tree: ``tools/net/ynl/cli.py``.

For example, using the script:

.. code-block:: bash

  $ kernel-source/tools/net/ynl/cli.py \
            --spec Documentation/netlink/specs/netdev.yaml \
            --do napi-set \
            --json='{"id": 345,
                     "defer-hard-irqs": 111,
                     "gro-flush-timeout": 11111}'

Similarly, the parameter ``irq-suspend-timeout`` can be set using netlink
via netdev-genl. There is no global sysfs parameter for this value.

``irq_suspend_timeout`` is used to determine how long an application can
completely suspend IRQs. It is used in combination with SO_PREFER_BUSY_POLL,
which can be set on a per-epoll context basis with the ``EPIOCSPARAMS``
ioctl.
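
For example, a hypothetical invocation following the same pattern as above
(the NAPI ID and the timeout value, in nanoseconds, are illustrative):

.. code-block:: bash

  $ kernel-source/tools/net/ynl/cli.py \
            --spec Documentation/netlink/specs/netdev.yaml \
            --do napi-set \
            --json='{"id": 345,
                     "irq-suspend-timeout": 20000000}'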

.. _poll:

Busy polling
@@ -207,6 +229,46 @@ selected sockets or using the global ``net.core.busy_poll`` and
``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
also exists.

epoll-based busy polling
------------------------

It is possible to trigger packet processing directly from calls to
``epoll_wait``. In order to use this feature, a user application must ensure
all file descriptors which are added to an epoll context have the same NAPI ID.

If the application uses a dedicated acceptor thread, the application can obtain
the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
distribute that file descriptor to a worker thread. The worker thread would add
the file descriptor to its epoll context. This would ensure each worker thread
has an epoll context with FDs that have the same NAPI ID.
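
A minimal sketch of the acceptor side, using the standard ``getsockopt``
interface (error handling is reduced for brevity, and how the FD is handed to
a worker is application specific):

.. code-block:: c

  #include <sys/socket.h>

  /* Query the NAPI ID of an accepted connection so it can be routed to the
   * worker thread whose epoll context holds FDs with that NAPI ID. Returns 0
   * if the socket has not yet been processed by a NAPI context (for example,
   * loopback traffic).
   */
  static unsigned int get_incoming_napi_id(int fd)
  {
      unsigned int napi_id = 0;
      socklen_t len = sizeof(napi_id);

      if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len))
          return 0;

      return napi_id;
  }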

Alternatively, if the application uses SO_REUSEPORT, a bpf or ebpf program can
be inserted to distribute incoming connections to threads such that each
thread is only given incoming connections with the same NAPI ID. Care must be
taken to handle cases where a system may have multiple NICs.

In order to enable busy polling, there are two choices:

1. ``/proc/sys/net/core/busy_poll`` can be set with a time in microseconds to
   busy loop waiting for events. This is a system-wide setting and will cause
   all epoll-based applications to busy poll when they call epoll_wait. This
   may not be desirable as many applications may not have the need to busy
   poll.

2. Applications using recent kernels can issue an ioctl on the epoll context
   file descriptor to set (``EPIOCSPARAMS``) or get (``EPIOCGPARAMS``)
   ``struct epoll_params``, which user programs can define as follows:

.. code-block:: c

  struct epoll_params {
      uint32_t busy_poll_usecs;
      uint16_t busy_poll_budget;
      uint8_t prefer_busy_poll;

      /* pad the struct to a multiple of 64bits */
      uint8_t __pad;
  };
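
A sketch of applying these parameters to an epoll context follows. The chosen
values are illustrative only, and the ioctl request macros are written out
locally on the assumption that they match the kernel's UAPI
``linux/eventpoll.h``; recent libc headers may already provide them:

.. code-block:: c

  #include <stdint.h>
  #include <stdio.h>
  #include <sys/epoll.h>
  #include <sys/ioctl.h>

  struct epoll_params {
      uint32_t busy_poll_usecs;
      uint16_t busy_poll_budget;
      uint8_t prefer_busy_poll;
      uint8_t __pad;
  };

  /* Assumed to mirror the kernel UAPI definitions. */
  #define EPOLL_IOC_TYPE 0x8A
  #define EPIOCSPARAMS _IOW(EPOLL_IOC_TYPE, 0x01, struct epoll_params)

  int main(void)
  {
      struct epoll_params params = {
          .busy_poll_usecs = 64,   /* illustrative busy poll duration */
          .busy_poll_budget = 16,  /* illustrative packet budget */
          .prefer_busy_poll = 1,
      };
      int epfd = epoll_create1(0);

      if (epfd < 0 || ioctl(epfd, EPIOCSPARAMS, &params) < 0) {
          perror("EPIOCSPARAMS");
          return 1;
      }

      /* ... add sockets with epoll_ctl() and call epoll_wait() as usual ... */
      return 0;
  }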

IRQ mitigation
---------------

@@ -222,12 +284,78 @@ Such applications can pledge to the kernel that they will perform a busy
polling operation periodically, and the driver should keep the device IRQs
permanently masked. This mode is enabled by using the ``SO_PREFER_BUSY_POLL``
socket option. To avoid system misbehavior the pledge is revoked
if ``gro_flush_timeout`` passes without any busy poll call. For epoll-based
busy polling applications, the ``prefer_busy_poll`` field of ``struct
epoll_params`` can be set to 1 and the ``EPIOCSPARAMS`` ioctl can be issued to
enable this mode. See the above section for more details.

The NAPI budget for busy polling is lower than the default (which makes
sense given the low latency intention of normal busy polling). This is
not the case with IRQ mitigation, however, so the budget can be adjusted
with the ``SO_BUSY_POLL_BUDGET`` socket option. For epoll-based busy polling
applications, the ``busy_poll_budget`` field can be adjusted to the desired
value in ``struct epoll_params`` and set on a specific epoll context using
the ``EPIOCSPARAMS`` ioctl. See the above section for more details.
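
For plain (non-epoll) busy polling, both options can be set per socket with
``setsockopt``. A short sketch, assuming libc headers that expose these
``SOL_SOCKET`` options; the budget value is illustrative, and budgets above
the default NAPI poll weight (64) require CAP_NET_ADMIN:

.. code-block:: c

  #include <sys/socket.h>

  static int enable_busy_poll_on_socket(int fd)
  {
      int prefer = 1;  /* pledge periodic busy polling */
      int budget = 32; /* max packets per busy poll attempt */

      if (setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL,
                     &prefer, sizeof(prefer)) ||
          setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET,
                     &budget, sizeof(budget)))
          return -1;

      return 0;
  }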

It is important to note that choosing a large value for ``gro_flush_timeout``
will defer IRQs to allow for better batch processing, but will induce latency
when the system is not fully loaded. Choosing a small value for
``gro_flush_timeout`` can cause device IRQs and softirq processing to
interfere with the user application which is attempting to busy poll. This
value should be chosen carefully with these tradeoffs in mind. epoll-based
busy polling applications may be able to mitigate how much user processing
happens by choosing an appropriate value for ``maxevents``.

Users may want to consider an alternate approach, IRQ suspension, to help deal
with these tradeoffs.

IRQ suspension
--------------

IRQ suspension is a mechanism wherein device IRQs are masked while epoll
triggers NAPI packet processing.

While application calls to epoll_wait successfully retrieve events, the kernel
will defer the IRQ suspension timer. If the kernel does not retrieve any
events while busy polling (for example, because network traffic levels
subsided), IRQ suspension is disabled and the IRQ mitigation strategies
described above are engaged.

This allows users to balance CPU consumption with network processing
efficiency.

To use this mechanism:

1. The per-NAPI config parameter ``irq_suspend_timeout`` should be set to the
   maximum time (in nanoseconds) the application can have its IRQs
   suspended. This is done using netlink, as described above. This timeout
   serves as a safety mechanism to restart IRQ driver interrupt processing if
   the application has stalled. This value should be chosen so that it covers
   the amount of time the user application needs to process data from its
   call to epoll_wait, noting that applications can control how much data
   they retrieve by setting ``maxevents`` when calling epoll_wait.

2. The sysfs parameter or per-NAPI config parameters ``gro_flush_timeout``
   and ``napi_defer_hard_irqs`` can be set to low values. They will be used
   to defer IRQs after busy poll has found no data.

3. The ``prefer_busy_poll`` flag must be set to true. This can be done using
   the ``EPIOCSPARAMS`` ioctl as described above.

4. The application uses epoll as described above to trigger NAPI packet
   processing; a sketch of the resulting event loop follows this list.
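
A condensed sketch of such an event loop, reusing the ``struct epoll_params``
and ``EPIOCSPARAMS`` definitions from the earlier example (the per-NAPI
``irq_suspend_timeout``, ``gro_flush_timeout`` and ``napi_defer_hard_irqs``
values are assumed to have been configured via netlink beforehand; error
handling elided):

.. code-block:: c

  #define MAX_EVENTS 64

  static void worker_loop(int epfd, struct epoll_params *params)
  {
      struct epoll_event events[MAX_EVENTS];
      int n, i;

      params->prefer_busy_poll = 1; /* step 3: required for IRQ suspension */
      ioctl(epfd, EPIOCSPARAMS, params);

      for (;;) {
          /* While these calls return events, IRQs stay suspended and the
           * irq_suspend_timeout is re-armed; an empty poll drops back to
           * the gro_flush_timeout/napi_defer_hard_irqs mitigation. */
          n = epoll_wait(epfd, events, MAX_EVENTS, -1);
          for (i = 0; i < n; i++) {
              /* process events[i].data ... */
          }
      }
  }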

As mentioned above, as long as subsequent calls to epoll_wait return events to
userland, the ``irq_suspend_timeout`` is deferred and IRQs are disabled. This
allows the application to process data without interference.

Once a call to epoll_wait results in no events being found, IRQ suspension is
automatically disabled and the ``gro_flush_timeout`` and
``napi_defer_hard_irqs`` mitigation mechanisms take over.

It is expected that ``irq_suspend_timeout`` will be set to a value much larger
than ``gro_flush_timeout`` as ``irq_suspend_timeout`` should suspend IRQs for
the duration of one userland processing cycle.

.. _threaded: