
Commit 66567fc

Ben Segall (bsegall@google.com) authored and Ingo Molnar committed
sched/fair: Don't push cfs_bandwidth slack timers forward
When a cfs_rq sleeps and returns its quota, we delay for 5ms before waking any throttled cfs_rqs to coalesce with other cfs_rqs going to sleep, as this has to be done outside of the rq lock we hold.

The current code waits for 5ms without any sleeps, instead of waiting for 5ms from the first sleep, which can delay the unthrottle more than we want. Switch this around so that we can't push this forward forever.

This requires an extra flag rather than using hrtimer_active, since we need to start a new timer if the current one is in the process of finishing.

Signed-off-by: Ben Segall <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Xunlei Pang <[email protected]>
Acked-by: Phil Auld <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
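For intuition, here is a minimal userspace sketch (my illustration, not kernel code; the 3ms quota-return cadence, the event array, and the main() harness are all hypothetical) contrasting the old push-forward behaviour with the arm-once behaviour this patch introduces via the slack_started flag:

/*
 * Models when the slack timer fires if every quota return restarts it
 * ("push forward") versus arming it only once per pending unthrottle.
 */
#include <stdio.h>

#define SLACK_PERIOD_MS 5	/* cfs_bandwidth_slack_period is 5ms */

int main(void)
{
	/* cfs_rqs returning quota every 3ms; could continue forever */
	int quota_returns_ms[] = { 0, 3, 6, 9, 12 };
	int n = sizeof(quota_returns_ms) / sizeof(quota_returns_ms[0]);
	int push_forward_fire = -1, arm_once_fire = -1;
	int slack_started = 0;

	for (int i = 0; i < n; i++) {
		int t = quota_returns_ms[i];

		/* old behaviour: every return re-arms the timer */
		push_forward_fire = t + SLACK_PERIOD_MS;

		/* new behaviour: only the first return arms it */
		if (!slack_started) {
			slack_started = 1;
			arm_once_fire = t + SLACK_PERIOD_MS;
		}
	}
	printf("re-arm on every return: fires at %dms (keeps moving)\n",
	       push_forward_fire);
	printf("arm once (slack_started): fires at %dms\n", arm_once_fire);
	return 0;
}

With this cadence the old policy fires at 17ms and would never fire at all under a steady stream of returns, while the new policy fires at 5ms from the first return.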
1 parent aacedf2 commit 66567fc

File tree: 2 files changed (+11, −4 lines)


kernel/sched/fair.c

Lines changed: 7 additions & 0 deletions
@@ -4729,6 +4729,11 @@ static void start_cfs_slack_bandwidth(struct cfs_bandwidth *cfs_b)
 	if (runtime_refresh_within(cfs_b, min_left))
 		return;
 
+	/* don't push forwards an existing deferred unthrottle */
+	if (cfs_b->slack_started)
+		return;
+	cfs_b->slack_started = true;
+
 	hrtimer_start(&cfs_b->slack_timer,
 			ns_to_ktime(cfs_bandwidth_slack_period),
 			HRTIMER_MODE_REL);
@@ -4782,6 +4787,7 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
 
 	/* confirm we're still not at a refresh boundary */
 	raw_spin_lock_irqsave(&cfs_b->lock, flags);
+	cfs_b->slack_started = false;
 	if (cfs_b->distribute_running) {
 		raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
 		return;
@@ -4945,6 +4951,7 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 	hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
 	cfs_b->slack_timer.function = sched_cfs_slack_timer;
 	cfs_b->distribute_running = 0;
+	cfs_b->slack_started = false;
 }
 
 static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq)
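The ordering in do_sched_cfs_slack_timer() above, clearing slack_started under cfs_b->lock before anything else, is what makes the flag safe where hrtimer_active() is not: the hrtimer can still look active while its callback is finishing, and a quota return in that window must be able to arm a fresh timer. A rough userspace analogue of the protocol (a pthread mutex stands in for cfs_b->lock; all names here are my invention, not kernel API):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool slack_started;

/* Analogue of start_cfs_slack_bandwidth(): arm at most one
 * deferred unthrottle at a time. */
static bool maybe_arm_slack_timer(void)
{
	bool armed = false;

	pthread_mutex_lock(&lock);
	if (!slack_started) {	/* don't push forward a pending one */
		slack_started = true;
		armed = true;	/* caller would hrtimer_start() here */
	}
	pthread_mutex_unlock(&lock);
	return armed;
}

/* Analogue of do_sched_cfs_slack_timer(): clear the flag first, so a
 * quota return racing with the tail of this callback can arm a fresh
 * timer instead of being swallowed by an hrtimer_active() check. */
static void slack_timer_fired(void)
{
	pthread_mutex_lock(&lock);
	slack_started = false;
	pthread_mutex_unlock(&lock);
	/* ... distribute runtime to throttled runqueues ... */
}

int main(void)
{
	printf("first arm: %d\n", maybe_arm_slack_timer());	/* 1 */
	printf("second arm: %d\n", maybe_arm_slack_timer());	/* 0 */
	slack_timer_fired();
	printf("after fire: %d\n", maybe_arm_slack_timer());	/* 1 */
	return 0;
}

Running it prints 1, 0, 1: a second arm while one is pending is refused, and arming works again once the callback has cleared the flag.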

kernel/sched/sched.h

Lines changed: 4 additions & 4 deletions
@@ -338,8 +338,10 @@ struct cfs_bandwidth {
 	u64			runtime_expires;
 	int			expires_seq;
 
-	short			idle;
-	short			period_active;
+	u8			idle;
+	u8			period_active;
+	u8			distribute_running;
+	u8			slack_started;
 	struct hrtimer		period_timer;
 	struct hrtimer		slack_timer;
 	struct list_head	throttled_cfs_rq;
@@ -348,8 +350,6 @@ struct cfs_bandwidth {
 	int			nr_periods;
 	int			nr_throttled;
 	u64			throttled_time;
-
-	bool			distribute_running;
 #endif
 };
 
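One reason this struct change is cheap: the two short flags already occupied four bytes, so narrowing them to u8 makes room for the two new flags in the same space, while the separate bool distribute_running (and the blank-line gap around it) goes away entirely. A tiny stand-alone sketch (uint8_t as a userspace stand-in for the kernel's u8; the kernel struct's exact layout depends on its surrounding fields, so this is illustrative only):

#include <stdint.h>
#include <stdio.h>

struct before { short idle; short period_active; };
struct after  { uint8_t idle, period_active,
			distribute_running, slack_started; };

int main(void)
{
	/* both print 4: four u8 flags pack into the old shorts' bytes */
	printf("before: %zu bytes\n", sizeof(struct before));
	printf("after:  %zu bytes\n", sizeof(struct after));
	return 0;
}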
