
Commit e7ba676

firosuse authored and Jeff Kirsher committed
ixgbe: sync the first fragment unconditionally
In a Xen environment with xen-swiotlb enabled, the ixgbe driver may allocate a page as the DMA buffer for the first fragment that is not suitable for xen-swiotlb to perform DMA operations on. Xen-swiotlb then has to allocate an internal bounce page for the DMA, and the data must be synced from that internal page back to the page ixgbe hands to the upper network stack.

However, since commit f3213d9 ("ixgbe: Update driver to make use of DMA attributes in Rx path"), the unmap is performed with DMA_ATTR_SKIP_CPU_SYNC, so that sync never happens and the upper network stack can receive an incomplete packet: the linear data of the first fragment (between skb->head and skb->end) is invalid. We therefore have to copy the data from the internal xen-swiotlb page to the page ixgbe passes up by performing the sync explicitly.

More details from Alexander Duyck: specifically, since we map the frame with DMA_ATTR_SKIP_CPU_SYNC we have to unmap with that as well. As a result, a sync is not performed on unmap and must be done manually since we skipped it for the first frag. As such, we need to always sync before possibly performing a page unmap operation.

Fixes: f3213d9 ("ixgbe: Update driver to make use of DMA attributes in Rx path")
Signed-off-by: Firo Yang <[email protected]>
Reviewed-by: Alexander Duyck <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
1 parent c19d034 commit e7ba676
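
A minimal sketch of the contract the commit message describes, using the generic kernel DMA API; the function name, arguments, and the assumption that a full page was mapped are hypothetical, not ixgbe code:

    #include <linux/dma-mapping.h>

    /*
     * Sketch only: a buffer mapped with DMA_ATTR_SKIP_CPU_SYNC gets no
     * CPU sync on map or unmap, so the driver must sync manually before
     * the CPU reads the data or the page is unmapped.
     */
    static void rx_consume_buffer(struct device *dev, dma_addr_t dma,
    			      unsigned int len)
    {
    	/*
    	 * Make the device-written data visible to the CPU. On a
    	 * bounce-buffering backend such as xen-swiotlb, this copies
    	 * the data out of the internal bounce page.
    	 */
    	dma_sync_single_range_for_cpu(dev, dma, 0, len, DMA_FROM_DEVICE);

    	/* Only after the sync is it safe to unmap with the sync skipped. */
    	dma_unmap_page_attrs(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE,
    			     DMA_ATTR_SKIP_CPU_SYNC);
    }

This is exactly the ordering the patch below restores: sync first, unmap (if the page was released) second.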

File tree

1 file changed (+9, -7 lines)

drivers/net/ethernet/intel/ixgbe/ixgbe_main.c

Lines changed: 9 additions & 7 deletions
@@ -1825,13 +1825,7 @@ static void ixgbe_pull_tail(struct ixgbe_ring *rx_ring,
 static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
 				struct sk_buff *skb)
 {
-	/* if the page was released unmap it, else just sync our portion */
-	if (unlikely(IXGBE_CB(skb)->page_released)) {
-		dma_unmap_page_attrs(rx_ring->dev, IXGBE_CB(skb)->dma,
-				     ixgbe_rx_pg_size(rx_ring),
-				     DMA_FROM_DEVICE,
-				     IXGBE_RX_DMA_ATTR);
-	} else if (ring_uses_build_skb(rx_ring)) {
+	if (ring_uses_build_skb(rx_ring)) {
 		unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
 
 		dma_sync_single_range_for_cpu(rx_ring->dev,
@@ -1848,6 +1842,14 @@ static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
 				      skb_frag_size(frag),
 				      DMA_FROM_DEVICE);
 	}
+
+	/* If the page was released, just unmap it. */
+	if (unlikely(IXGBE_CB(skb)->page_released)) {
+		dma_unmap_page_attrs(rx_ring->dev, IXGBE_CB(skb)->dma,
+				     ixgbe_rx_pg_size(rx_ring),
+				     DMA_FROM_DEVICE,
+				     IXGBE_RX_DMA_ATTR);
+	}
 }
 
 /**
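
For context, the Rx buffer this function handles is originally mapped with the same attribute set; the snippet below is a from-memory sketch of that map call in ixgbe_alloc_mapped_page() (unchanged by this patch), so treat the exact argument list as an assumption rather than a verbatim quote:

    /*
     * Assumed sketch of the matching map call: IXGBE_RX_DMA_ATTR
     * includes DMA_ATTR_SKIP_CPU_SYNC, so every CPU sync in the Rx
     * path must be issued explicitly, as the fix above now guarantees
     * happens before any unmap.
     */
    dma = dma_map_page_attrs(rx_ring->dev, page, 0,
    			 ixgbe_rx_pg_size(rx_ring),
    			 DMA_FROM_DEVICE,
    			 IXGBE_RX_DMA_ATTR);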
