| author | Yong Zhang <[email protected]> | 2010-05-04 06:16:48 +0000 |
|---|---|---|
| committer | Ingo Molnar <[email protected]> | 2010-05-07 09:27:26 +0000 |
| commit | 4726f2a617ebd868a4fdeb5679613b897e5f1676 | |
| tree | c9eea44c66f98123802d99aad5b3cce93626eda8 /lib/debugobjects.c | |
| parent | lockdep: No need to disable preemption in debug atomic ops | |
lockdep: Reduce stack_trace usage
When calling check_prevs_add(), if all validations pass,
add_lock_to_list() adds the new lock to the dependency tree and
allocates a stack_trace for each list_entry.
But at this point we are always on the same stack, so the stack_trace
for each list_entry has the same value. This is redundant and eats
up a lot of memory, which can trigger the warning on a low
MAX_STACK_TRACE_ENTRIES.
Use one copy of the stack_trace instead.
V2: As suggested by Peter Zijlstra, move save_trace() from
check_prevs_add() to check_prev_add().
Add tracking for trylock dependencies, which are also redundant.
Signed-off-by: Yong Zhang <[email protected]>
Cc: David S. Miller <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
