
Commit 1900e30

robhancocksed authored and kuba-moo committed
net: macb: simplify/cleanup NAPI reschedule checking
Previously the macb_poll method was checking the RSR register after completing its RX receive work to see if additional packets had been received since IRQs were disabled, since this controller does not maintain the pending IRQ status across IRQ disable. It also had to double-check the register after re-enabling IRQs to detect packets received after the first check but before IRQs were re-enabled.

Using the RSR register for this purpose is problematic since it reflects the global device state rather than the per-queue state, so if packets are being received on multiple queues it may end up retriggering receive on a queue where the packets did not actually arrive, and not on the one where they did. This will also cause problems with an upcoming change to use NAPI for the TX path, where use of multiple queues is more likely.

Add a macb_rx_pending function to check the RX ring to see if more packets have arrived in the queue, and use that rather than the RSR register to decide whether NAPI should be rescheduled. By doing this, we can ignore the global RSR register entirely and thus save some device register accesses at the same time.

This also makes the previous first check for pending packets redundant, since it would be checking the RX ring state that was just checked in the receive work function. Therefore we can drop it and just check, after enabling interrupts, whether packets are already pending.

Signed-off-by: Robert Hancock <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
1 parent 11ecf34 commit 1900e30

File tree

1 file changed: +31 −34 lines

drivers/net/ethernet/cadence/macb_main.c

Lines changed: 31 additions & 34 deletions
@@ -1554,54 +1554,51 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
 	return received;
 }
 
+static bool macb_rx_pending(struct macb_queue *queue)
+{
+	struct macb *bp = queue->bp;
+	unsigned int entry;
+	struct macb_dma_desc *desc;
+
+	entry = macb_rx_ring_wrap(bp, queue->rx_tail);
+	desc = macb_rx_desc(queue, entry);
+
+	/* Make hw descriptor updates visible to CPU */
+	rmb();
+
+	return (desc->addr & MACB_BIT(RX_USED)) != 0;
+}
+
 static int macb_poll(struct napi_struct *napi, int budget)
 {
 	struct macb_queue *queue = container_of(napi, struct macb_queue, napi);
 	struct macb *bp = queue->bp;
 	int work_done;
-	u32 status;
 
-	status = macb_readl(bp, RSR);
-	macb_writel(bp, RSR, status);
+	work_done = bp->macbgem_ops.mog_rx(queue, napi, budget);
 
-	netdev_vdbg(bp->dev, "poll: status = %08lx, budget = %d\n",
-		    (unsigned long)status, budget);
+	netdev_vdbg(bp->dev, "poll: queue = %u, work_done = %d, budget = %d\n",
+		    (unsigned int)(queue - bp->queues), work_done, budget);
 
-	work_done = bp->macbgem_ops.mog_rx(queue, napi, budget);
-	if (work_done < budget) {
-		napi_complete_done(napi, work_done);
+	if (work_done < budget && napi_complete_done(napi, work_done)) {
+		queue_writel(queue, IER, bp->rx_intr_mask);
 
-		/* RSR bits only seem to propagate to raise interrupts when
-		 * interrupts are enabled at the time, so if bits are already
-		 * set due to packets received while interrupts were disabled,
+		/* Packet completions only seem to propagate to raise
+		 * interrupts when interrupts are enabled at the time, so if
+		 * packets were received while interrupts were disabled,
 		 * they will not cause another interrupt to be generated when
 		 * interrupts are re-enabled.
-		 * Check for this case here. This has been seen to happen
-		 * around 30% of the time under heavy network load.
+		 * Check for this case here to avoid losing a wakeup. This can
+		 * potentially race with the interrupt handler doing the same
+		 * actions if an interrupt is raised just after enabling them,
+		 * but this should be harmless.
 		 */
-		status = macb_readl(bp, RSR);
-		if (status) {
+		if (macb_rx_pending(queue)) {
+			queue_writel(queue, IDR, bp->rx_intr_mask);
 			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
 				queue_writel(queue, ISR, MACB_BIT(RCOMP));
-			napi_reschedule(napi);
-		} else {
-			queue_writel(queue, IER, bp->rx_intr_mask);
-
-			/* In rare cases, packets could have been received in
-			 * the window between the check above and re-enabling
-			 * interrupts. Therefore, a double-check is required
-			 * to avoid losing a wakeup. This can potentially race
-			 * with the interrupt handler doing the same actions
-			 * if an interrupt is raised just after enabling them,
-			 * but this should be harmless.
-			 */
-			status = macb_readl(bp, RSR);
-			if (unlikely(status)) {
-				queue_writel(queue, IDR, bp->rx_intr_mask);
-				if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-					queue_writel(queue, ISR, MACB_BIT(RCOMP));
-				napi_schedule(napi);
-			}
+			netdev_vdbg(bp->dev, "poll: packets pending, reschedule\n");
+			napi_schedule(napi);
 		}
 	}
