
Commit 39b02d6

vladimiroltean authored and davem330 committed
net/sched: taprio: don't segment unnecessarily
Improve commit 497cc00 ("taprio: Handle short intervals and large packets") to only perform segmentation when skb->len exceeds what taprio_dequeue() expects.

In practice, this will make the biggest difference when a traffic class gate is always open in the schedule. This is because the max_frm_len will be U32_MAX, and such large skb->len values as Kurt reported will be sent just fine unsegmented.

What I don't seem to know how to handle is how to make sure that the segmented skbs themselves are smaller than the maximum frame size given by the current queueMaxSDU[tc]. Nonetheless, we still need to drop those, otherwise the Qdisc will hang.

Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Kurt Kanzenbach <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
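For context, a minimal sketch of the per-traffic-class check that this patch moves into taprio_enqueue(). The helper name and its (sch, skb) signature come from the diff below; the internal field and mapping (max_frm_len[], qdisc_priv(), netdev_get_prio_tc_map()) are assumptions based on the commit message, not a verbatim copy of the file:

static bool taprio_skb_exceeds_queue_max_sdu(struct Qdisc *sch,
                                             struct sk_buff *skb)
{
        struct taprio_sched *q = qdisc_priv(sch);
        struct net_device *dev = qdisc_dev(sch);
        int tc = netdev_get_prio_tc_map(dev, skb->priority);

        /* Assumed layout: max_frm_len[tc] is U32_MAX when the gate for
         * this traffic class is always open, so such skbs pass through
         * unsegmented; otherwise it is derived from queueMaxSDU[tc].
         */
        return skb->len > q->max_frm_len[tc];
}

With this check hoisted to the top of taprio_enqueue(), GSO skbs that already fit are enqueued directly, and segmentation (or a drop) only happens for packets that actually exceed the limit.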
1 parent 2d5e807 commit 39b02d6

File tree

1 file changed: +20 −11 lines


net/sched/sch_taprio.c

Lines changed: 20 additions & 11 deletions
@@ -566,9 +566,6 @@ static int taprio_enqueue_one(struct sk_buff *skb, struct Qdisc *sch,
 		return qdisc_drop(skb, sch, to_free);
 	}
 
-	if (taprio_skb_exceeds_queue_max_sdu(sch, skb))
-		return qdisc_drop(skb, sch, to_free);
-
 	qdisc_qstats_backlog_inc(sch, skb);
 	sch->q.qlen++;
 
@@ -593,7 +590,14 @@ static int taprio_enqueue_segmented(struct sk_buff *skb, struct Qdisc *sch,
 		qdisc_skb_cb(segs)->pkt_len = segs->len;
 		slen += segs->len;
 
-		ret = taprio_enqueue_one(segs, sch, child, to_free);
+		/* FIXME: we should be segmenting to a smaller size
+		 * rather than dropping these
+		 */
+		if (taprio_skb_exceeds_queue_max_sdu(sch, segs))
+			ret = qdisc_drop(segs, sch, to_free);
+		else
+			ret = taprio_enqueue_one(segs, sch, child, to_free);
+
 		if (ret != NET_XMIT_SUCCESS) {
 			if (net_xmit_drop_count(ret))
 				qdisc_qstats_drop(sch);
@@ -625,13 +629,18 @@ static int taprio_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	if (unlikely(!child))
 		return qdisc_drop(skb, sch, to_free);
 
-	/* Large packets might not be transmitted when the transmission duration
-	 * exceeds any configured interval. Therefore, segment the skb into
-	 * smaller chunks. Drivers with full offload are expected to handle
-	 * this in hardware.
-	 */
-	if (skb_is_gso(skb))
-		return taprio_enqueue_segmented(skb, sch, child, to_free);
+	if (taprio_skb_exceeds_queue_max_sdu(sch, skb)) {
+		/* Large packets might not be transmitted when the transmission
+		 * duration exceeds any configured interval. Therefore, segment
+		 * the skb into smaller chunks. Drivers with full offload are
+		 * expected to handle this in hardware.
+		 */
+		if (skb_is_gso(skb))
+			return taprio_enqueue_segmented(skb, sch, child,
+							to_free);
+
+		return qdisc_drop(skb, sch, to_free);
+	}
 
 	return taprio_enqueue_one(skb, sch, child, to_free);
 }
