
Commit c7719e7

Authored by ftang1, committed by KAGA-KOKO
x86/tsc: Add a timer to make sure TSC_adjust is always checked
The TSC_ADJUST register is checked every time a CPU enters idle state, but Thomas
Gleixner mentioned there is still a caveat that a system won't enter idle [1],
either because it's too busy or configured purposely to not enter idle.

Set up a periodic timer (every 10 minutes) to make sure the check happens on a
regular basis.

[1] https://lore.kernel.org/lkml/[email protected]/

Fixes: 6e3cd95 ("x86/hpet: Use another crystalball to evaluate HPET usability")
Requested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Feng Tang <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
1 parent 52d0b8b commit c7719e7

File tree

1 file changed

+41
-0
lines changed


arch/x86/kernel/tsc_sync.c

Lines changed: 41 additions & 0 deletions
@@ -30,6 +30,7 @@ struct tsc_adjust {
 };
 
 static DEFINE_PER_CPU(struct tsc_adjust, tsc_adjust);
+static struct timer_list tsc_sync_check_timer;
 
 /*
  * TSC's on different sockets may be reset asynchronously.
@@ -77,6 +78,46 @@ void tsc_verify_tsc_adjust(bool resume)
 	}
 }
 
+/*
+ * Normally the tsc_sync will be checked every time system enters idle
+ * state, but there is still caveat that a system won't enter idle,
+ * either because it's too busy or configured purposely to not enter
+ * idle.
+ *
+ * So setup a periodic timer (every 10 minutes) to make sure the check
+ * is always on.
+ */
+
+#define SYNC_CHECK_INTERVAL	(HZ * 600)
+
+static void tsc_sync_check_timer_fn(struct timer_list *unused)
+{
+	int next_cpu;
+
+	tsc_verify_tsc_adjust(false);
+
+	/* Run the check for all onlined CPUs in turn */
+	next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask);
+	if (next_cpu >= nr_cpu_ids)
+		next_cpu = cpumask_first(cpu_online_mask);
+
+	tsc_sync_check_timer.expires += SYNC_CHECK_INTERVAL;
+	add_timer_on(&tsc_sync_check_timer, next_cpu);
+}
+
+static int __init start_sync_check_timer(void)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_TSC_ADJUST) || tsc_clocksource_reliable)
+		return 0;
+
+	timer_setup(&tsc_sync_check_timer, tsc_sync_check_timer_fn, 0);
+	tsc_sync_check_timer.expires = jiffies + SYNC_CHECK_INTERVAL;
+	add_timer(&tsc_sync_check_timer);
+
+	return 0;
+}
+late_initcall(start_sync_check_timer);
+
 static void tsc_sanitize_first_cpu(struct tsc_adjust *cur, s64 bootval,
 				   unsigned int cpu, bool bootcpu)
 {
