| field | value | date |
|---|---|---|
| author | Oleg Nesterov <[email protected]> | 2024-01-22 15:50:50 +0000 |
| committer | Andrew Morton <[email protected]> | 2024-02-08 05:20:32 +0000 |
| commit | daa694e4137571b4ebec330f9a9b4d54aa8b8089 | |
| tree | c5455af304fd2066d7b44e9ca6279aa6f7520a32 /fs/proc/array.c | |
| parent | mm: hugetlb pages should not be reserved by shmat() if SHM_NORESERVE | |
getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand()
Patch series "getrusage: use sig->stats_lock", v2.
This patch (of 2):
thread_group_cputime() does its own locking, so we can safely move
thread_group_cputime_adjusted(), which does another for_each_thread loop,
outside of the ->siglock-protected section.
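To illustrate the shape of the change, here is a heavily simplified sketch of
getrusage() in kernel/sys.c. This is not the literal diff: the RUSAGE_THREAD
and RUSAGE_CHILDREN handling, most rusage fields, and the maxrss/mm accounting
are omitted.

```c
/* Simplified sketch of getrusage() after this patch; not the literal diff. */
void getrusage(struct task_struct *p, int who, struct rusage *r)
{
	u64 tgutime, tgstime, utime = 0, stime = 0;
	unsigned long flags;

	memset(r, 0, sizeof(*r));

	if (!lock_task_sighand(p, &flags))
		return;

	/* ... accumulate only the counters that really need ->siglock ... */

	unlock_task_sighand(p, &flags);

	/*
	 * Moved out of the locked section by this patch:
	 * thread_group_cputime_adjusted() walks every thread and does its
	 * own locking, so holding ->siglock across it only lengthened the
	 * critical section.
	 */
	thread_group_cputime_adjusted(p, &tgutime, &tgstime);
	utime += tgutime;
	stime += tgstime;

	r->ru_utime = ns_to_kernel_old_timeval(utime);
	r->ru_stime = ns_to_kernel_old_timeval(stime);
}
```

The point is the placement: the per-thread walk inside
thread_group_cputime_adjusted() no longer extends the ->siglock critical
section.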
This is also preparation for the next patch, which changes getrusage() to use
stats_lock instead of siglock; thread_group_cputime() takes the same lock.
With the current implementation the recursive read_seqbegin_or_lock() is
fine: thread_group_cputime() can't enter the slow mode if the caller already
holds stats_lock, because a writer would need that same lock to change the
sequence count. Still, this looks safer and performs better.
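For reference, thread_group_cputime() protects its own for_each_thread loop
with sig->stats_lock via the read_seqbegin_or_lock() pattern. A condensed
sketch, close to but trimmed from kernel/sched/cputime.c:

```c
/* Condensed sketch of thread_group_cputime(); trimmed, not verbatim. */
void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
{
	struct signal_struct *sig = tsk->signal;
	struct task_struct *t;
	unsigned int seq, nextseq;
	unsigned long flags;
	u64 utime, stime;

	rcu_read_lock();
	/* Attempt a lockless read on the first round. */
	nextseq = 0;
	do {
		seq = nextseq;
		flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);
		times->utime = sig->utime;
		times->stime = sig->stime;
		times->sum_exec_runtime = sig->sum_sched_runtime;

		for_each_thread(tsk, t) {
			task_cputime(t, &utime, &stime);
			times->utime += utime;
			times->stime += stime;
			times->sum_exec_runtime += read_sum_exec_runtime(t);
		}
		/* If the lockless read raced with a writer, lock next time. */
		nextseq = 1;
	} while (need_seqretry(&sig->stats_lock, seq));
	done_seqretry_irqrestore(&sig->stats_lock, seq, flags);
	rcu_read_unlock();
}
```

If the caller already holds stats_lock (i.e. it is in the slow path of its own
read_seqbegin_or_lock()), no writer can advance the sequence count, so
need_seqretry() fails on the first, lockless pass and this nested reader never
reaches the locked slow mode.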
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Oleg Nesterov <[email protected]>
Reported-by: Dylan Hatch <[email protected]>
Tested-by: Dylan Hatch <[email protected]>
Cc: Eric W. Biederman <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Diffstat (limited to 'fs/proc/array.c')
0 files changed, 0 insertions, 0 deletions
