| author | Boqun Feng <[email protected]> | 2025-03-26 18:08:30 +0000 |
|---|---|---|
| committer | Ingo Molnar <[email protected]> | 2025-03-27 07:23:17 +0000 |
| commit | 495f53d5cca0f939eaed9dca90b67e7e6fb0e30c (patch) | |
| tree | bb5e9244eb8ba7afadae74043ce06d1c719e18f7 /Documentation/rust/coding-guidelines.rst | |
| parent | lockdep: Fix wait context check on softirq for PREEMPT_RT (diff) | |
locking/lockdep: Decrease nr_unused_locks if lock unused in zap_class()
Currently, when a lock class is allocated, nr_unused_locks is increased by
1, and it is decreased by 1 in mark_lock() once the class is used for the
first time. However, one scenario is missed: a lock class may be zapped
without ever having been used. This can result in a situation where
nr_unused_locks != 0 even though no unused lock class remains in the
system, and on `cat /proc/lockdep_stats` a WARN_ON() is triggered in a
CONFIG_DEBUG_LOCKDEP=y kernel:
[...] DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused)
[...] WARNING: CPU: 41 PID: 1121 at kernel/locking/lockdep_proc.c:283 lockdep_stats_show+0xba9/0xbd0
As a result, lockdep is disabled from that point on.
Therefore, nr_unused_locks needs to be accounted correctly at
zap_class() time.
Signed-off-by: Boqun Feng <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Waiman Long <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
