Commit 120455c

Author: Peter Zijlstra
sched: Fix hotplug vs CPU bandwidth control
Since we now migrate tasks away before DYING, we should also move bandwidth unthrottle; otherwise we can gain tasks from unthrottle after we expect all tasks to be gone already.

Also, it looks like the RT balancers don't respect cpu_active() and instead rely in part on rq->online; complete this. This too requires we do set_rq_offline() earlier to match the cpu_active() semantics. (The bigger patch is to convert RT to cpu_active() entirely.)

Since set_rq_online() is called from sched_cpu_activate(), place set_rq_offline() in sched_cpu_deactivate().

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
Reviewed-by: Daniel Bristot de Oliveira <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Parent: 1cf12e0

File tree: 3 files changed, +12 −6 lines


kernel/sched/core.c

Lines changed: 10 additions & 4 deletions

@@ -6977,6 +6977,8 @@ int sched_cpu_activate(unsigned int cpu)
 
 int sched_cpu_deactivate(unsigned int cpu)
 {
+	struct rq *rq = cpu_rq(cpu);
+	struct rq_flags rf;
 	int ret;
 
 	set_cpu_active(cpu, false);
@@ -6991,6 +6993,14 @@ int sched_cpu_deactivate(unsigned int cpu)
 
 	balance_push_set(cpu, true);
 
+	rq_lock_irqsave(rq, &rf);
+	if (rq->rd) {
+		update_rq_clock(rq);
+		BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
+		set_rq_offline(rq);
+	}
+	rq_unlock_irqrestore(rq, &rf);
+
 #ifdef CONFIG_SCHED_SMT
 	/*
 	 * When going down, decrement the number of cores with SMT present.
@@ -7072,10 +7082,6 @@ int sched_cpu_dying(unsigned int cpu)
 	sched_tick_stop(cpu);
 
 	rq_lock_irqsave(rq, &rf);
-	if (rq->rd) {
-		BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
-		set_rq_offline(rq);
-	}
 	BUG_ON(rq->nr_running != 1);
 	rq_unlock_irqrestore(rq, &rf);
 
kernel/sched/deadline.c

Lines changed: 1 addition & 1 deletion

@@ -543,7 +543,7 @@ static int push_dl_task(struct rq *rq);
 
 static inline bool need_pull_dl_task(struct rq *rq, struct task_struct *prev)
 {
-	return dl_task(prev);
+	return rq->online && dl_task(prev);
 }
 
 static DEFINE_PER_CPU(struct callback_head, dl_push_head);

kernel/sched/rt.c

Lines changed: 1 addition & 1 deletion

@@ -265,7 +265,7 @@ static void pull_rt_task(struct rq *this_rq);
 
 static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
 {
 	/* Try to pull RT tasks here if we lower this rq's prio */
-	return rq->rt.highest_prio.curr > prev->prio;
+	return rq->online && rq->rt.highest_prio.curr > prev->prio;
 }
 
 static inline int rt_overloaded(struct rq *rq)
