
Commit 28e92f9

Merge branch 'core-rcu-2021.07.04' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu
Pull RCU updates from Paul McKenney:

 - Bitmap parsing support for "all" as an alias for all bits

 - Documentation updates

 - Miscellaneous fixes, including some that overlap into mm and lockdep

 - kvfree_rcu() updates

 - mem_dump_obj() updates, with acks from one of the slab-allocator
   maintainers

 - RCU NOCB CPU updates, including limited deoffloading

 - SRCU updates

 - Tasks-RCU updates

 - Torture-test updates

* 'core-rcu-2021.07.04' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (78 commits)
  tasks-rcu: Make show_rcu_tasks_gp_kthreads() be static inline
  rcu-tasks: Make ksoftirqd provide RCU Tasks quiescent states
  rcu: Add missing __releases() annotation
  rcu: Remove obsolete rcu_read_unlock() deadlock commentary
  rcu: Improve comments describing RCU read-side critical sections
  rcu: Create an unrcu_pointer() to remove __rcu from a pointer
  srcu: Early test SRCU polling start
  rcu: Fix various typos in comments
  rcu/nocb: Unify timers
  rcu/nocb: Prepare for fine-grained deferred wakeup
  rcu/nocb: Only cancel nocb timer if not polling
  rcu/nocb: Delete bypass_timer upon nocb_gp wakeup
  rcu/nocb: Cancel nocb_timer upon nocb_gp wakeup
  rcu/nocb: Allow de-offloading rdp leader
  rcu/nocb: Directly call __wake_nocb_gp() from bypass timer
  rcu: Don't penalize priority boosting when there is nothing to boost
  rcu: Point to documentation of ordering guarantees
  rcu: Make rcu_gp_cleanup() be noinline for tracing
  rcu: Restrict RCU_STRICT_GRACE_PERIOD to at most four CPUs
  rcu: Make show_rcu_gp_kthreads() dump rcu_node structures blocking GP
  ...
2 parents da803f8 + 641faf1 commit 28e92f9


49 files changed, 1252 insertions(+), 577 deletions(-)

Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst

Lines changed: 3 additions & 3 deletions

@@ -21,7 +21,7 @@ Any code that happens after the end of a given RCU grace period is guaranteed
 to see the effects of all accesses prior to the beginning of that grace
 period that are within RCU read-side critical sections.
 Similarly, any code that happens before the beginning of a given RCU grace
-period is guaranteed to see the effects of all accesses following the end
+period is guaranteed to not see the effects of all accesses following the end
 of that grace period that are within RCU read-side critical sections.
 
 Note well that RCU-sched read-side critical sections include any region

@@ -339,14 +339,14 @@ The diagram below shows the path of ordering if the leftmost
 leftmost ``rcu_node`` structure offlines its last CPU and if the next
 ``rcu_node`` structure has no online CPUs).
 
-.. kernel-figure:: TreeRCU-gp-init-1.svg
+.. kernel-figure:: TreeRCU-gp-init-2.svg
 
 The final ``rcu_gp_init()`` pass through the ``rcu_node`` tree traverses
 breadth-first, setting each ``rcu_node`` structure's ``->gp_seq`` field
 to the newly advanced value from the ``rcu_state`` structure, as shown
 in the following diagram.
 
-.. kernel-figure:: TreeRCU-gp-init-1.svg
+.. kernel-figure:: TreeRCU-gp-init-3.svg
 
 This change will also cause each CPU's next call to
 ``__note_gp_changes()`` to notice that a new grace period has started,

Documentation/admin-guide/kernel-parameters.rst

Lines changed: 5 additions & 0 deletions

@@ -76,6 +76,11 @@ to change, such as less cores in the CPU list, then N and any ranges using N
 will also change. Use the same on a small 4 core system, and "16-N" becomes
 "16-3" and now the same boot input will be flagged as invalid (start > end).
 
+The special case-tolerant group name "all" has a meaning of selecting all CPUs,
+so that "nohz_full=all" is the equivalent of "nohz_full=0-N".
+
+The semantics of "N" and "all" is supported on a level of bitmaps and holds for
+all users of bitmap_parse().
 
 This document may not be entirely up to date and comprehensive. The command
 "modinfo -p ${modulename}" shows a current list of all parameters of a loadable

Documentation/admin-guide/kernel-parameters.txt

Lines changed: 5 additions & 0 deletions

@@ -4354,6 +4354,11 @@
 			whole algorithm to behave better in low memory
 			condition.
 
+	rcutree.rcu_delay_page_cache_fill_msec= [KNL]
+			Set the page-cache refill delay (in milliseconds)
+			in response to low-memory conditions. The range
+			of permitted values is in the range 0:100000.
+
 	rcutree.jiffies_till_first_fqs= [KNL]
 			Set delay from grace-period initialization to
 			first attempt to force quiescent states.
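As with other rcutree module parameters, this can be set on the kernel command line; for example, a half-second refill delay (value purely illustrative) would be requested with:

    rcutree.rcu_delay_page_cache_fill_msec=500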

include/linux/rcupdate.h

Lines changed: 36 additions & 36 deletions

@@ -315,7 +315,7 @@ static inline int rcu_read_lock_any_held(void)
 #define RCU_LOCKDEP_WARN(c, s) \
 	do { \
 		static bool __section(".data.unlikely") __warned; \
-		if (debug_lockdep_rcu_enabled() && !__warned && (c)) { \
+		if ((c) && debug_lockdep_rcu_enabled() && !__warned) { \
 			__warned = true; \
 			lockdep_rcu_suspicious(__FILE__, __LINE__, s); \
 		} \
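The reordering puts the caller's condition (c) first, so lockdep state is consulted only when the condition actually holds. Usage is unchanged; a hedged sketch of the common pattern (function name illustrative):

    static void example_requires_rcu(void)
    {
            /* Complain (once) if called outside any RCU reader while
             * lockdep is enabled. */
            RCU_LOCKDEP_WARN(!rcu_read_lock_any_held(),
                             "example_requires_rcu() called outside RCU read-side critical section");
    }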
@@ -373,7 +373,7 @@ static inline void rcu_preempt_sleep_check(void) { }
 #define unrcu_pointer(p) \
 ({ \
 	typeof(*p) *_________p1 = (typeof(*p) *__force)(p); \
-	rcu_check_sparse(p, __rcu); \
+	rcu_check_sparse(p, __rcu); \
 	((typeof(*p) __force __kernel *)(_________p1)); \
 })
 
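unrcu_pointer() strips the __rcu sparse annotation when a pointer extracted from an __rcu variable no longer needs marking, for example after it has been atomically removed from reader view. A hedged sketch with illustrative names; the caller must still wait a grace period before freeing:

    struct foo {
            int a;
    };
    static struct foo __rcu *foo_ptr;

    static struct foo *foo_take(void)
    {
            /* xchg() returns the old __rcu-annotated value; strip the
             * annotation for the caller, who must still use (say)
             * synchronize_rcu() before freeing the object. */
            return unrcu_pointer(xchg(&foo_ptr, NULL));
    }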

@@ -532,7 +532,12 @@ do { \
  * @p: The pointer to read, prior to dereferencing
  * @c: The conditions under which the dereference will take place
  *
- * This is the RCU-bh counterpart to rcu_dereference_check().
+ * This is the RCU-bh counterpart to rcu_dereference_check(). However,
+ * please note that starting in v5.0 kernels, vanilla RCU grace periods
+ * wait for local_bh_disable() regions of code in addition to regions of
+ * code demarked by rcu_read_lock() and rcu_read_unlock(). This means
+ * that synchronize_rcu(), call_rcu, and friends all take not only
+ * rcu_read_lock() but also rcu_read_lock_bh() into account.
  */
 #define rcu_dereference_bh_check(p, c) \
 	__rcu_dereference_check((p), (c) || rcu_read_lock_bh_held(), __rcu)

@@ -543,6 +548,11 @@ do { \
  * @c: The conditions under which the dereference will take place
  *
  * This is the RCU-sched counterpart to rcu_dereference_check().
+ * However, please note that starting in v5.0 kernels, vanilla RCU grace
+ * periods wait for preempt_disable() regions of code in addition to
+ * regions of code demarked by rcu_read_lock() and rcu_read_unlock().
+ * This means that synchronize_rcu(), call_rcu, and friends all take not
+ * only rcu_read_lock() but also rcu_read_lock_sched() into account.
  */
 #define rcu_dereference_sched_check(p, c) \
 	__rcu_dereference_check((p), (c) || rcu_read_lock_sched_held(), \
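Both new comments make the same point for their respective flavors. In practice the _check variants document a lookup that may be called either from a reader or with the update-side lock held; a hedged sketch with illustrative names:

    struct cfg {
            int val;
    };
    static struct cfg __rcu *cur_cfg;
    static DEFINE_SPINLOCK(cfg_lock);

    /* Legal under rcu_read_lock_bh() or with cfg_lock held. */
    static struct cfg *cfg_get(void)
    {
            return rcu_dereference_bh_check(cur_cfg,
                                            lockdep_is_held(&cfg_lock));
    }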
@@ -634,6 +644,12 @@ do { \
  * sections, invocation of the corresponding RCU callback is deferred
  * until after the all the other CPUs exit their critical sections.
  *
+ * In v5.0 and later kernels, synchronize_rcu() and call_rcu() also
+ * wait for regions of code with preemption disabled, including regions of
+ * code with interrupts or softirqs disabled. In pre-v5.0 kernels, which
+ * define synchronize_sched(), only code enclosed within rcu_read_lock()
+ * and rcu_read_unlock() are guaranteed to be waited for.
+ *
  * Note, however, that RCU callbacks are permitted to run concurrently
  * with new RCU read-side critical sections. One way that this can happen
  * is via the following sequence of events: (1) CPU 0 enters an RCU
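Concretely, in v5.0 and later an updater need not care which kind of region the reader used. A hedged sketch (names illustrative):

    static int shared_state;

    static void reader(void)
    {
            preempt_disable();
            /* From synchronize_rcu()'s viewpoint (v5.0+), this is an
             * RCU read-side critical section. */
            pr_info("state = %d\n", READ_ONCE(shared_state));
            preempt_enable();
    }

    static void updater(void)
    {
            WRITE_ONCE(shared_state, 1);
            synchronize_rcu();      /* also waits for reader()'s region */
    }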
@@ -686,33 +702,12 @@ static __always_inline void rcu_read_lock(void)
 /**
  * rcu_read_unlock() - marks the end of an RCU read-side critical section.
  *
- * In most situations, rcu_read_unlock() is immune from deadlock.
- * However, in kernels built with CONFIG_RCU_BOOST, rcu_read_unlock()
- * is responsible for deboosting, which it does via rt_mutex_unlock().
- * Unfortunately, this function acquires the scheduler's runqueue and
- * priority-inheritance spinlocks. This means that deadlock could result
- * if the caller of rcu_read_unlock() already holds one of these locks or
- * any lock that is ever acquired while holding them.
- *
- * That said, RCU readers are never priority boosted unless they were
- * preempted. Therefore, one way to avoid deadlock is to make sure
- * that preemption never happens within any RCU read-side critical
- * section whose outermost rcu_read_unlock() is called with one of
- * rt_mutex_unlock()'s locks held. Such preemption can be avoided in
- * a number of ways, for example, by invoking preempt_disable() before
- * critical section's outermost rcu_read_lock().
- *
- * Given that the set of locks acquired by rt_mutex_unlock() might change
- * at any time, a somewhat more future-proofed approach is to make sure
- * that that preemption never happens within any RCU read-side critical
- * section whose outermost rcu_read_unlock() is called with irqs disabled.
- * This approach relies on the fact that rt_mutex_unlock() currently only
- * acquires irq-disabled locks.
- *
- * The second of these two approaches is best in most situations,
- * however, the first approach can also be useful, at least to those
- * developers willing to keep abreast of the set of locks acquired by
- * rt_mutex_unlock().
+ * In almost all situations, rcu_read_unlock() is immune from deadlock.
+ * In recent kernels that have consolidated synchronize_sched() and
+ * synchronize_rcu_bh() into synchronize_rcu(), this deadlock immunity
+ * also extends to the scheduler's runqueue and priority-inheritance
+ * spinlocks, courtesy of the quiescent-state deferral that is carried
+ * out when rcu_read_unlock() is invoked with interrupts disabled.
  *
  * See rcu_read_lock() for more information.
  */
@@ -728,9 +723,11 @@ static inline void rcu_read_unlock(void)
 /**
  * rcu_read_lock_bh() - mark the beginning of an RCU-bh critical section
  *
- * This is equivalent of rcu_read_lock(), but also disables softirqs.
- * Note that anything else that disables softirqs can also serve as
- * an RCU read-side critical section.
+ * This is equivalent to rcu_read_lock(), but also disables softirqs.
+ * Note that anything else that disables softirqs can also serve as an RCU
+ * read-side critical section. However, please note that this equivalence
+ * applies only to v5.0 and later. Before v5.0, rcu_read_lock() and
+ * rcu_read_lock_bh() were unrelated.
  *
  * Note that rcu_read_lock_bh() and the matching rcu_read_unlock_bh()
  * must occur in the same context, for example, it is illegal to invoke
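A hedged sketch of a reader using this flavor, reusing the illustrative struct cfg from the earlier sketch; in v5.0 and later, a plain local_bh_disable()/local_bh_enable() pair would mark an equivalent critical section:

    static int cfg_read_val(void)
    {
            struct cfg *p;
            int val = -1;

            rcu_read_lock_bh();
            p = rcu_dereference_bh(cur_cfg);
            if (p)
                    val = p->val;
            rcu_read_unlock_bh();
            return val;
    }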
@@ -763,9 +760,12 @@ static inline void rcu_read_unlock_bh(void)
 /**
  * rcu_read_lock_sched() - mark the beginning of a RCU-sched critical section
  *
- * This is equivalent of rcu_read_lock(), but disables preemption.
- * Read-side critical sections can also be introduced by anything else
- * that disables preemption, including local_irq_disable() and friends.
+ * This is equivalent to rcu_read_lock(), but also disables preemption.
+ * Read-side critical sections can also be introduced by anything else that
+ * disables preemption, including local_irq_disable() and friends. However,
+ * please note that the equivalence to rcu_read_lock() applies only to
+ * v5.0 and later. Before v5.0, rcu_read_lock() and rcu_read_lock_sched()
+ * were unrelated.
  *
  * Note that rcu_read_lock_sched() and the matching rcu_read_unlock_sched()
  * must occur in the same context, for example, it is illegal to invoke
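And, as the comment says, anything that disables preemption marks such a section; a hedged sketch using interrupt disabling instead of rcu_read_lock_sched(), again with the illustrative struct cfg:

    static int cfg_read_val_irqoff(void)
    {
            struct cfg *p;
            unsigned long flags;
            int val = -1;

            local_irq_save(flags);          /* implies no preemption */
            p = rcu_dereference_sched(cur_cfg);
            if (p)
                    val = p->val;
            local_irq_restore(flags);
            return val;
    }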

include/linux/rcutiny.h

Lines changed: 0 additions & 1 deletion

@@ -86,7 +86,6 @@ static inline void rcu_irq_enter(void) { }
 static inline void rcu_irq_exit_irqson(void) { }
 static inline void rcu_irq_enter_irqson(void) { }
 static inline void rcu_irq_exit(void) { }
-static inline void rcu_irq_exit_preempt(void) { }
 static inline void rcu_irq_exit_check_preempt(void) { }
 #define rcu_is_idle_cpu(cpu) \
 	(is_idle_task(current) && !in_nmi() && !in_irq() && !in_serving_softirq())

include/linux/rcutree.h

Lines changed: 0 additions & 1 deletion

@@ -49,7 +49,6 @@ void rcu_idle_enter(void);
 void rcu_idle_exit(void);
 void rcu_irq_enter(void);
 void rcu_irq_exit(void);
-void rcu_irq_exit_preempt(void);
 void rcu_irq_enter_irqson(void);
 void rcu_irq_exit_irqson(void);
 bool rcu_is_idle_cpu(int cpu);

include/linux/srcu.h

Lines changed: 6 additions & 0 deletions

@@ -64,6 +64,12 @@ unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
 unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
 bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
 
+#ifdef CONFIG_SRCU
+void srcu_init(void);
+#else /* #ifdef CONFIG_SRCU */
+static inline void srcu_init(void) { }
+#endif /* #else #ifdef CONFIG_SRCU */
+
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 
 /**
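The polling interfaces visible in the context lines pair naturally with the now always-declared srcu_init(). A hedged sketch of grace-period polling on a statically allocated srcu_struct, which per this series' "srcu: Early test SRCU polling start" is usable even early in boot (names illustrative):

    DEFINE_STATIC_SRCU(my_srcu);

    static unsigned long my_cookie;

    static void my_snapshot(void)
    {
            /* Readers already inside my_srcu must finish before the
             * grace period named by this cookie can complete. */
            my_cookie = start_poll_synchronize_srcu(&my_srcu);
    }

    static void my_check(void)
    {
            if (poll_state_synchronize_srcu(&my_srcu, my_cookie))
                    pr_info("SRCU grace period has elapsed\n");
    }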

include/linux/srcutree.h

Lines changed: 0 additions & 2 deletions

@@ -82,9 +82,7 @@ struct srcu_struct {
 						/* callback for the barrier */
 						/* operation. */
 	struct delayed_work work;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map dep_map;
-#endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 };
 
 /* Values for state variable (bottom bits of ->srcu_gp_seq). */

include/linux/timer.h

Lines changed: 0 additions & 2 deletions

@@ -192,8 +192,6 @@ extern int try_to_del_timer_sync(struct timer_list *timer);
 
 #define del_singleshot_timer_sync(t) del_timer_sync(t)
 
-extern bool timer_curr_running(struct timer_list *timer);
-
 extern void init_timers(void);
 struct hrtimer;
 extern enum hrtimer_restart it_real_fn(struct hrtimer *);

include/trace/events/rcu.h

Lines changed: 1 addition & 0 deletions

@@ -278,6 +278,7 @@ TRACE_EVENT_RCU(rcu_exp_funnel_lock,
  * "WakeNot": Don't wake rcuo kthread.
  * "WakeNotPoll": Don't wake rcuo kthread because it is polling.
  * "WakeOvfIsDeferred": Wake rcuo kthread later, CB list is huge.
+ * "WakeBypassIsDeferred": Wake rcuo kthread later, bypass list is contended.
  * "WokeEmpty": rcuo CB kthread woke to find empty list.
  */
 TRACE_EVENT_RCU(rcu_nocb_wake,
