
Commit 7cd806a

htejun authored and axboe committed
iocost: improve nr_lagging handling
Some IOs may span multiple periods. As latencies are collected on completion,
the in-between periods won't register them and may incorrectly decide to
increase vrate. nr_lagging tracks these IOs to avoid those situations.
Currently, whenever there are IOs spanning from the previous period,
busy_level is reset to 0 if negative, thus suppressing vrate increases.

This has the following two problems.

* When latency target percentiles aren't set, vrate adjustment should only
  be governed by queue depth depletion; however, the current code keeps
  nr_lagging active, which pulls in latency results and can keep vrate down
  unexpectedly.

* When the lagging condition is detected, it resets the entire negative
  busy_level. This turned out to be far too aggressive on some devices which
  sometimes experience extended latencies on a small subset of commands. In
  addition, a lagging IO will be accounted as a latency target miss on
  completion anyway, so resetting busy_level amplifies its impact
  unnecessarily.

This patch fixes the above two problems by disabling nr_lagging counting
when latency target percentiles aren't set, and by blocking vrate increases
when there are lagging IOs while leaving busy_level as-is.

Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
1 parent 25d41e4 commit 7cd806a

File tree

1 file changed: +11 −8 lines changed


block/blk-iocost.c

Lines changed: 11 additions & 8 deletions
@@ -1407,7 +1407,8 @@ static void ioc_timer_fn(struct timer_list *timer)
 		 * comparing vdone against period start. If lagging behind
 		 * IOs from past periods, don't increase vrate.
 		 */
-		if (!atomic_read(&iocg_to_blkg(iocg)->use_delay) &&
+		if ((ppm_rthr != MILLION || ppm_wthr != MILLION) &&
+		    !atomic_read(&iocg_to_blkg(iocg)->use_delay) &&
 		    time_after64(vtime, vdone) &&
 		    time_after64(vtime, now.vnow -
 				 MAX_LAGGING_PERIODS * period_vtime) &&
@@ -1537,21 +1538,23 @@ static void ioc_timer_fn(struct timer_list *timer)
 		   missed_ppm[WRITE] > ppm_wthr) {
 		ioc->busy_level = max(ioc->busy_level, 0);
 		ioc->busy_level++;
-	} else if (nr_lagging) {
-		ioc->busy_level = max(ioc->busy_level, 0);
-	} else if (nr_shortages && !nr_surpluses &&
-		   rq_wait_pct <= RQ_WAIT_BUSY_PCT * UNBUSY_THR_PCT / 100 &&
+	} else if (rq_wait_pct <= RQ_WAIT_BUSY_PCT * UNBUSY_THR_PCT / 100 &&
 		   missed_ppm[READ] <= ppm_rthr * UNBUSY_THR_PCT / 100 &&
 		   missed_ppm[WRITE] <= ppm_wthr * UNBUSY_THR_PCT / 100) {
-		ioc->busy_level = min(ioc->busy_level, 0);
-		ioc->busy_level--;
+		/* take action iff there is contention */
+		if (nr_shortages && !nr_lagging) {
+			ioc->busy_level = min(ioc->busy_level, 0);
+			/* redistribute surpluses first */
+			if (!nr_surpluses)
+				ioc->busy_level--;
+		}
 	} else {
 		ioc->busy_level = 0;
 	}
 
 	ioc->busy_level = clamp(ioc->busy_level, -1000, 1000);
 
-	if (ioc->busy_level) {
+	if (ioc->busy_level > 0 || (ioc->busy_level < 0 && !nr_lagging)) {
 		u64 vrate = atomic64_read(&ioc->vtime_rate);
 		u64 vrate_min = ioc->vrate_min, vrate_max = ioc->vrate_max;
