
Commit b537633

Taehee Yoo authored and kuba-moo committed
bnxt_en: update xdp_rxq_info in queue restart logic
When netdev_rx_queue_restart() restarts queues, the bnxt_en driver updates (creates and deletes) a page_pool, but it doesn't update xdp_rxq_info. So bnxt_rx_ring_info->page_pool points to the new page_pool, while bnxt_rx_ring_info->xdp_rxq is still connected to the old one.

The old page_pool is no longer used, so it should be deleted by page_pool_destroy(), but it isn't: the stale xdp_rxq_info still holds a reference count on it, and since the xdp_rxq_info is never updated, the old page_pool is never freed by the queue restart logic.

Before restarting 1 queue:
./tools/net/ynl/samples/page-pool
enp10s0f1np1[6] page pools: 4 (zombies: 0)
    refs: 8192 bytes: 33554432 (refs: 0 bytes: 0)
    recycling: 0.0% (alloc: 128:8048 recycle: 0:0)

After restarting 1 queue:
./tools/net/ynl/samples/page-pool
enp10s0f1np1[6] page pools: 5 (zombies: 0)
    refs: 10240 bytes: 41943040 (refs: 0 bytes: 0)
    recycling: 20.0% (alloc: 160:10080 recycle: 1920:128)

Before restarting queues, the interface has 4 page_pools. After restarting one queue, it has 5 page_pools, but it should still be 4. The reason is that the queue restart logic creates a new page_pool, and the old one is not deleted because the xdp_rxq_info update logic is absent.

Fixes: 2d694c2 ("bnxt_en: implement netdev_queue_mgmt_ops")
Signed-off-by: Taehee Yoo <[email protected]>
Reviewed-by: David Wei <[email protected]>
Reviewed-by: Somnath Kotur <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
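The leak described above is plain reference-count bookkeeping: page_pool_destroy() only really frees a pool once every holder, including a registered xdp_rxq_info, has dropped its reference. A minimal, compilable C model of that behavior (struct and function names here are illustrative stand-ins, not the kernel's actual page_pool API):

```c
#include <stddef.h>

/* Illustrative model only: not the kernel's page_pool implementation.
 * It shows why a stale xdp_rxq_info keeps the old pool alive. */
struct fake_pool {
	int user_cnt;	/* driver's ref + any registered xdp_rxq_info ref */
	int destroyed;	/* destroy was requested */
};

/* Drop one reference; return 1 only if the pool is actually gone. */
static int fake_pool_put(struct fake_pool *p)
{
	return --p->user_cnt == 0 && p->destroyed;
}

/* Restart path before the fix: only the driver's own ref is dropped,
 * so the stale xdp_rxq_info ref keeps the old pool alive (leak). */
static int restart_without_unreg(struct fake_pool *old)
{
	old->destroyed = 1;
	return fake_pool_put(old);	/* 0: a reference remains */
}

/* Restart path after the fix: unregistering the xdp_rxq_info drops
 * its reference first, so destroy really frees the pool. */
static int restart_with_unreg(struct fake_pool *old)
{
	fake_pool_put(old);		/* models xdp_rxq_info_unreg() */
	old->destroyed = 1;
	return fake_pool_put(old);	/* 1: pool is freed */
}
```

This mirrors what the page-pool sample output above shows: before the fix, each restart leaves one extra pool whose last reference is never dropped.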
1 parent f7578df commit b537633

File tree

  • drivers/net/ethernet/broadcom/bnxt

1 file changed (+17, -0)

drivers/net/ethernet/broadcom/bnxt/bnxt.c

Lines changed: 17 additions & 0 deletions

@@ -4052,6 +4052,7 @@ static void bnxt_reset_rx_ring_struct(struct bnxt *bp,
 
 	rxr->page_pool->p.napi = NULL;
 	rxr->page_pool = NULL;
+	memset(&rxr->xdp_rxq, 0, sizeof(struct xdp_rxq_info));
 
 	ring = &rxr->rx_ring_struct;
 	rmem = &ring->ring_mem;
@@ -15018,6 +15019,16 @@ static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
 	if (rc)
 		return rc;
 
+	rc = xdp_rxq_info_reg(&clone->xdp_rxq, bp->dev, idx, 0);
+	if (rc < 0)
+		goto err_page_pool_destroy;
+
+	rc = xdp_rxq_info_reg_mem_model(&clone->xdp_rxq,
+					MEM_TYPE_PAGE_POOL,
+					clone->page_pool);
+	if (rc)
+		goto err_rxq_info_unreg;
+
 	ring = &clone->rx_ring_struct;
 	rc = bnxt_alloc_ring(bp, &ring->ring_mem);
 	if (rc)
@@ -15047,6 +15058,9 @@ static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
 	bnxt_free_ring(bp, &clone->rx_agg_ring_struct.ring_mem);
 err_free_rx_ring:
 	bnxt_free_ring(bp, &clone->rx_ring_struct.ring_mem);
+err_rxq_info_unreg:
+	xdp_rxq_info_unreg(&clone->xdp_rxq);
+err_page_pool_destroy:
 	clone->page_pool->p.napi = NULL;
 	page_pool_destroy(clone->page_pool);
 	clone->page_pool = NULL;
@@ -15062,6 +15076,8 @@ static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
 	bnxt_free_one_rx_ring(bp, rxr);
 	bnxt_free_one_rx_agg_ring(bp, rxr);
 
+	xdp_rxq_info_unreg(&rxr->xdp_rxq);
+
 	page_pool_destroy(rxr->page_pool);
 	rxr->page_pool = NULL;
 
@@ -15145,6 +15161,7 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
 	rxr->rx_sw_agg_prod = clone->rx_sw_agg_prod;
 	rxr->rx_next_cons = clone->rx_next_cons;
 	rxr->page_pool = clone->page_pool;
+	rxr->xdp_rxq = clone->xdp_rxq;
 
 	bnxt_copy_rx_ring(bp, rxr, clone);
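The bnxt_queue_mem_alloc() hunk above follows the kernel's usual goto-unwind convention: each newly acquired resource gets its own error label, placed so that a failure at step N releases steps N-1 through 1 in reverse order. A compilable sketch of that control flow (the fail_at knob and the logging are purely illustrative; the labels mirror the patch):

```c
#include <string.h>

static char unwind_log[64];

/* Toy version of the patch's error handling in bnxt_queue_mem_alloc():
 * fail_at selects which setup step "fails" (0 = none). A failure at a
 * later step unwinds every earlier step, in reverse order. */
static int setup_queue(int fail_at)
{
	int rc;

	unwind_log[0] = '\0';

	rc = (fail_at == 1) ? -1 : 0;	/* models page pool creation */
	if (rc)
		return rc;		/* nothing acquired yet */

	rc = (fail_at == 2) ? -1 : 0;	/* models xdp_rxq_info_reg() */
	if (rc)
		goto err_page_pool_destroy;

	rc = (fail_at == 3) ? -1 : 0;	/* models reg_mem_model() */
	if (rc)
		goto err_rxq_info_unreg;

	return 0;

err_rxq_info_unreg:
	strcat(unwind_log, "xdp_unreg ");	/* xdp_rxq_info_unreg() */
err_page_pool_destroy:
	strcat(unwind_log, "pool_destroy ");	/* page_pool_destroy() */
	return rc;
}
```

The fallthrough between labels is the point of the pattern: a mem-model registration failure unregisters the rxq info and then destroys the pool, while an earlier registration failure skips straight to destroying the pool.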