| author | Davidlohr Bueso <[email protected]> | 2020-09-07 01:33:26 +0000 |
|---|---|---|
| committer | Steven Rostedt (VMware) <[email protected]> | 2020-09-22 01:06:02 +0000 |
| commit | 40d14da383670db21a09e63d52db8dee9b77741e (patch) | |
| tree | 4913385409f94966a7932c13a3c09f21c0109f2d /fs/proc/array.c | |
| parent | tracing: remove a pointless assignment (diff) | |
| download | kernel-40d14da383670db21a09e63d52db8dee9b77741e.tar.gz kernel-40d14da383670db21a09e63d52db8dee9b77741e.zip | |
fgraph: Convert ret_stack tasklist scanning to rcu
It seems that alloc_retstack_tasklist() can also take a lockless
approach to scanning the tasklist, instead of using the big global
tasklist_lock. For this we also kill another deprecated and RCU-unsafe
tsk->thread_group user, replacing it with for_each_process_thread(),
which maintains the same semantics.
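
For illustration, a minimal sketch of the conversion pattern follows, assuming the general shape of alloc_retstack_tasklist() in kernel/trace/fgraph.c; the allocation/cleanup details and the exact per-task initialization are approximated rather than copied from the actual diff:

```c
/*
 * Hedged sketch, not the literal kernel change: the scan previously ran
 * under read_lock(&tasklist_lock) with the deprecated
 * do_each_thread()/while_each_thread() pair; after the conversion it
 * runs under RCU with for_each_process_thread().
 */
static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
{
	int start = 0, end = FTRACE_RETSTACK_ALLOC_SIZE;	/* 32 */
	struct task_struct *g, *t;

	/* ... pre-allocate ret_stack_list[0..end-1] up front ... */

	rcu_read_lock();			/* was: read_lock(&tasklist_lock) */
	for_each_process_thread(g, t) {		/* was: do_each_thread(g, t) */
		if (start == end)
			goto unlock;		/* out of pre-allocated stacks */
		if (t->ret_stack == NULL) {
			/* initialize per-task state, then publish the stack */
			atomic_set(&t->trace_overrun, 0);
			smp_wmb();
			t->ret_stack = ret_stack_list[start++];
		}
	}
unlock:
	rcu_read_unlock();			/* was: read_unlock(&tasklist_lock) */

	/* ... free any ret_stack_list entries that were not handed out ... */
	return 0;
}
```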
Here, tasklist_lock does not protect anything other than the list
against concurrent fork/exit. And considering that the whole scan
is capped by FTRACE_RETSTACK_ALLOC_SIZE (32), it should not be a
problem to have a potentially stale, yet stable, list. The task cannot
go away either, so we don't risk racing with ftrace_graph_exit_task(),
which clears the retstack.
The tsk->ret_stack management is not protected by tasklist_lock; it is
serialized by the corresponding publish/subscribe barriers against a
concurrent ftrace_push_return_trace(). In addition, this plays nicer
with cachelines by avoiding two atomic ops in the uncontended case.
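
The publish/subscribe pairing referred to above looks roughly like the following sketch: the writer side is the allocation loop shown earlier, and the reader side is how it might appear in ftrace_push_return_trace(). The specific fields and the return code are assumptions for illustration, not quoted from the source:

```c
/*
 * Writer (allocation path): fully initialize the per-task state, then
 * publish the pointer behind a write barrier.
 */
atomic_set(&t->trace_overrun, 0);
smp_wmb();				/* publish: init before pointer store */
t->ret_stack = ret_stack_list[start++];

/*
 * Reader (ftrace_push_return_trace(), sketched): subscribe by testing
 * the pointer first, then order subsequent loads behind a read barrier
 * that pairs with the smp_wmb() above.
 */
if (!current->ret_stack)
	return -EBUSY;			/* graph tracing not set up for this task */
smp_rmb();				/* pairs with the writer's smp_wmb() */
/* ... safe to use current->ret_stack and related state from here ... */
```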
Link: https://lkml.kernel.org/r/[email protected]
Acked-by: Oleg Nesterov <[email protected]>
Signed-off-by: Davidlohr Bueso <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
