Commit 0598e4f

ftrace: Add use of synchronize_rcu_tasks() with dynamic trampolines
The function tracer needs to be more careful than other subsystems when it comes to freeing data, especially if that data is actually executable code. When a single function is traced, a trampoline can be dynamically allocated which is called to jump to the function trace callback. When the callback is no longer needed, the dynamically allocated trampoline needs to be freed. This is where the issues arise: the dynamically allocated trampoline must not be used again.

As function tracing can trace all subsystems, including subsystems that are used to serialize aspects of freeing (namely RCU), it must take extra care when doing the freeing.

Before synchronize_rcu_tasks() was around, there was no way for the function tracer to know that nothing was using the dynamically allocated trampoline when CONFIG_PREEMPT was enabled. That's because a task could be preempted indefinitely while sitting on the trampoline. Now with synchronize_rcu_tasks(), the tracer can wait until all tasks have either voluntarily scheduled (and are therefore not on the trampoline) or entered userspace (likewise not on the trampoline). Then it is safe to free the trampoline even with CONFIG_PREEMPT set.

Acked-by: "Paul E. McKenney" <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Parent: 696ced4

2 files changed, +20 −25 lines
kernel/trace/Kconfig — 2 additions, 1 deletion

@@ -134,7 +134,8 @@ config FUNCTION_TRACER
 	select KALLSYMS
 	select GENERIC_TRACER
 	select CONTEXT_SWITCH_TRACER
-	select GLOB
+	select GLOB
+	select TASKS_RCU if PREEMPT
 	help
 	  Enable the kernel to trace every kernel function. This is done
 	  by using a compiler feature to insert a small, 5-byte No-Operation
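The new `select TASKS_RCU if PREEMPT` line uses Kconfig's conditional select: the selecting option forces the selected symbol on, but only when the condition is also satisfied. A hypothetical fragment (all symbol names invented) illustrating the semantics:

```kconfig
config EXAMPLE_TRACER
	bool "Example tracer"
	# Force EXAMPLE_DEP on, but only when EXAMPLE_COND is also enabled;
	# with EXAMPLE_COND disabled, EXAMPLE_DEP keeps its own value.
	select EXAMPLE_DEP if EXAMPLE_COND
```

Here that means a FUNCTION_TRACER kernel only pulls in the TASKS_RCU infrastructure when the kernel is preemptible, the one configuration where it is needed.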

kernel/trace/ftrace.c — 18 additions, 24 deletions

@@ -2808,18 +2808,28 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
 	 * callers are done before leaving this function.
 	 * The same goes for freeing the per_cpu data of the per_cpu
 	 * ops.
-	 *
-	 * Again, normal synchronize_sched() is not good enough.
-	 * We need to do a hard force of sched synchronization.
-	 * This is because we use preempt_disable() to do RCU, but
-	 * the function tracers can be called where RCU is not watching
-	 * (like before user_exit()). We can not rely on the RCU
-	 * infrastructure to do the synchronization, thus we must do it
-	 * ourselves.
 	 */
 	if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_PER_CPU)) {
+		/*
+		 * We need to do a hard force of sched synchronization.
+		 * This is because we use preempt_disable() to do RCU, but
+		 * the function tracers can be called where RCU is not watching
+		 * (like before user_exit()). We can not rely on the RCU
+		 * infrastructure to do the synchronization, thus we must do it
+		 * ourselves.
+		 */
 		schedule_on_each_cpu(ftrace_sync);

+		/*
+		 * When the kernel is preeptive, tasks can be preempted
+		 * while on a ftrace trampoline. Just scheduling a task on
+		 * a CPU is not good enough to flush them. Calling
+		 * synchornize_rcu_tasks() will wait for those tasks to
+		 * execute and either schedule voluntarily or enter user space.
+		 */
+		if (IS_ENABLED(CONFIG_PREEMPT))
+			synchronize_rcu_tasks();
+
 		arch_ftrace_trampoline_free(ops);

 		if (ops->flags & FTRACE_OPS_FL_PER_CPU)
@@ -5366,22 +5376,6 @@ void __weak arch_ftrace_update_trampoline(struct ftrace_ops *ops)

 static void ftrace_update_trampoline(struct ftrace_ops *ops)
 {
-
-	/*
-	 * Currently there's no safe way to free a trampoline when the kernel
-	 * is configured with PREEMPT. That is because a task could be preempted
-	 * when it jumped to the trampoline, it may be preempted for a long time
-	 * depending on the system load, and currently there's no way to know
-	 * when it will be off the trampoline. If the trampoline is freed
-	 * too early, when the task runs again, it will be executing on freed
-	 * memory and crash.
-	 */
-#ifdef CONFIG_PREEMPT
-	/* Currently, only non dynamic ops can have a trampoline */
-	if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
-		return;
-#endif
-
 	arch_ftrace_update_trampoline(ops);
 }
