
Commit 36e3b94

baileyforrest authored and kuba-moo committed
gve: Fix an edge case for TSO skb validity check
The NIC requires each TSO segment to not span more than 10 descriptors. The NIC further requires each descriptor to not exceed 16KB - 1 (GVE_TX_MAX_BUF_SIZE_DQO).

The descriptors for an skb are generated by gve_tx_add_skb_no_copy_dqo() for the DQO RDA queue format. gve_tx_add_skb_no_copy_dqo() loops through each skb frag and generates a descriptor for the entire frag if the frag size is not greater than GVE_TX_MAX_BUF_SIZE_DQO. If the frag size is greater than GVE_TX_MAX_BUF_SIZE_DQO, it is split into descriptor(s) of size GVE_TX_MAX_BUF_SIZE_DQO and a descriptor is generated for the remainder (frag size % GVE_TX_MAX_BUF_SIZE_DQO).

gve_can_send_tso() checks whether the descriptors thus generated for an skb would meet the requirement that each TSO segment span no more than 10 descriptors. However, the current code misses an edge case in which a TSO segment spans multiple descriptors within a large frag. This change fixes that edge case.

gve_can_send_tso() relies on the assumption that the max gso size (9728) is less than GVE_TX_MAX_BUF_SIZE_DQO, and therefore within an skb fragment a TSO segment can never span more than 2 descriptors.

Fixes: a57e5de ("gve: DQO: Add TX path")
Signed-off-by: Praveen Kaligineedi <[email protected]>
Signed-off-by: Bailey Forrest <[email protected]>
Reviewed-by: Jeroen de Borst <[email protected]>
Cc: [email protected]
Reviewed-by: Willem de Bruijn <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
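To make the splitting arithmetic concrete, here is a minimal user-space sketch (not driver code; the 32 KiB frag size is a hypothetical example) of how a frag larger than GVE_TX_MAX_BUF_SIZE_DQO is cut into descriptors, as the commit message describes:

#include <stdio.h>

#define GVE_TX_MAX_BUF_SIZE_DQO ((16 * 1024) - 1)	/* 16383 */

int main(void)
{
	/* Hypothetical 32 KiB frag; full-size descriptors are emitted
	 * first, then one descriptor for the remainder.
	 */
	int frag_size = 32 * 1024;
	int remain = frag_size;

	while (remain > GVE_TX_MAX_BUF_SIZE_DQO) {
		printf("descriptor: %d bytes\n", GVE_TX_MAX_BUF_SIZE_DQO);
		remain -= GVE_TX_MAX_BUF_SIZE_DQO;
	}
	if (remain)
		printf("descriptor: %d bytes\n", remain);	/* 2 bytes */

	return 0;
}

With these numbers the trailing descriptor holds only 2 bytes, so a TSO segment whose leftover bytes land there also occupies the tail of the preceding 16383-byte descriptor: the two-descriptor case the old check missed.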
1 parent b537633 commit 36e3b94

File tree

1 file changed (+21, -1)


drivers/net/ethernet/google/gve/gve_tx_dqo.c

Lines changed: 21 additions & 1 deletion
@@ -866,22 +866,42 @@ static bool gve_can_send_tso(const struct sk_buff *skb)
 	const int header_len = skb_tcp_all_headers(skb);
 	const int gso_size = shinfo->gso_size;
 	int cur_seg_num_bufs;
+	int prev_frag_size;
 	int cur_seg_size;
 	int i;
 
 	cur_seg_size = skb_headlen(skb) - header_len;
+	prev_frag_size = skb_headlen(skb);
 	cur_seg_num_bufs = cur_seg_size > 0;
 
 	for (i = 0; i < shinfo->nr_frags; i++) {
 		if (cur_seg_size >= gso_size) {
 			cur_seg_size %= gso_size;
 			cur_seg_num_bufs = cur_seg_size > 0;
+
+			if (prev_frag_size > GVE_TX_MAX_BUF_SIZE_DQO) {
+				int prev_frag_remain = prev_frag_size %
+					GVE_TX_MAX_BUF_SIZE_DQO;
+
+				/* If the last descriptor of the previous frag
+				 * is less than cur_seg_size, the segment will
+				 * span two descriptors in the previous frag.
+				 * Since max gso size (9728) is less than
+				 * GVE_TX_MAX_BUF_SIZE_DQO, it is impossible
+				 * for the segment to span more than two
+				 * descriptors.
+				 */
+				if (prev_frag_remain &&
+				    cur_seg_size > prev_frag_remain)
+					cur_seg_num_bufs++;
+			}
 		}
 
 		if (unlikely(++cur_seg_num_bufs > max_bufs_per_seg))
 			return false;
 
-		cur_seg_size += skb_frag_size(&shinfo->frags[i]);
+		prev_frag_size = skb_frag_size(&shinfo->frags[i]);
+		cur_seg_size += prev_frag_size;
 	}
 
 	return true;
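For illustration, the fixed counting rule can be exercised in isolation. This is a hedged sketch, not driver code; seg_bufs_in_prev_frag() is a hypothetical helper that mirrors the logic added above:

#include <stdio.h>

#define GVE_TX_MAX_BUF_SIZE_DQO ((16 * 1024) - 1)

/* How many buffers do the leftover cur_seg_size bytes of a segment
 * occupy at the tail of the previous frag of prev_frag_size bytes?
 */
static int seg_bufs_in_prev_frag(int cur_seg_size, int prev_frag_size)
{
	int bufs = cur_seg_size > 0;	/* at least one if any bytes remain */

	if (prev_frag_size > GVE_TX_MAX_BUF_SIZE_DQO) {
		int prev_frag_remain = prev_frag_size %
				       GVE_TX_MAX_BUF_SIZE_DQO;

		/* Those bytes straddle the remainder descriptor and the
		 * end of the preceding max-size descriptor.
		 */
		if (prev_frag_remain && cur_seg_size > prev_frag_remain)
			bufs++;
	}
	return bufs;
}

int main(void)
{
	/* A 32 KiB frag leaves a 2-byte remainder descriptor; a segment
	 * with 100 leftover bytes spans two descriptors, not one.
	 */
	printf("%d\n", seg_bufs_in_prev_frag(100, 32 * 1024));	/* 2 */
	return 0;
}

With prev_frag_size = 32768, prev_frag_remain is 2, so a segment with 100 leftover bytes is counted as two buffers where the old code counted one.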
