author     Peter Zijlstra <[email protected]>   2019-10-01 09:18:37 +0000
committer  Ingo Molnar <[email protected]>         2019-11-13 07:01:30 +0000
commit     ff51ff84d82aea5a889b85f2b9fb3aa2b8691668 (patch)
tree       2f5e8e6ff1c9dd57599318f82cad15298c2841b0
parentRemove VirtualBox guest shared folders filesystem (diff)
sched/core: Avoid spurious lock dependencies
While seemingly harmless, __sched_fork() does hrtimer_init(), which,
when DEBUG_OBJECTS, can end up doing allocations.

This then results in the following lock order:

  rq->lock
    zone->lock.rlock
      batched_entropy_u64.lock

which in turn causes deadlocks when we do wakeups while holding that
batched_entropy lock -- as the random code does.

Solve this by moving __sched_fork() out from under rq->lock. This is
safe because nothing there relies on rq->lock, as also evident from the
other __sched_fork() callsite.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Fixes: b7d5dc21072c ("random: add a spinlock_t to struct batched_entropy")
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Diffstat (limited to 'drivers/gpu/drm/amd/amdgpu/amdgpu_test.c')
0 files changed, 0 insertions, 0 deletions