path: root/drivers/gpu/drm/omapdrm/omap_gem.c
author     Mel Gorman <[email protected]>  2017-02-24 22:56:32 +0000
committer  Linus Torvalds <[email protected]>  2017-02-25 01:46:54 +0000
commit     0ccce3b924212e121503619df97cc0f17189b77b (patch)
tree       8f365e995db4d0dd9cc0735750376c8866f279ba /drivers/gpu/drm/omapdrm/omap_gem.c
parent     mm, page_alloc: split alloc_pages_nodemask() (diff)
mm, page_alloc: drain per-cpu pages from workqueue context
The per-cpu page allocator can be drained immediately via drain_all_pages(), which sends IPIs to every CPU. In the next patch, the per-cpu allocator will only be used for interrupt-safe allocations, which prevents draining it from IPI context. This patch uses workqueues to drain the per-cpu lists instead.

This is slower, but no slowdown during intensive reclaim was measured, and the paths that use drain_all_pages() are not that sensitive to performance. This is particularly true as the path would only be triggered when reclaim is failing. It also makes some sense to avoid storming a machine with IPIs when it's under memory pressure. Arguably, it should be further adjusted so that only one caller at a time is draining pages, but that is beyond the scope of the current patch.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Mel Gorman <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Hillf Danton <[email protected]>
Cc: Jesper Dangaard Brouer <[email protected]>
Cc: Tejun Heo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
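The workqueue-based drain described above can be sketched roughly as follows. This is a simplified, hypothetical kernel-style sketch, not the literal patch: the identifiers pcpu_drain and drain_local_pages_wq are illustrative, and locking (e.g. serializing concurrent callers) is omitted.

```c
/* Sketch only: one work item per CPU, queued on that CPU's workqueue
 * pool instead of sending it an IPI. Assumes the usual kernel
 * workqueue and per-cpu primitives. */
static DEFINE_PER_CPU(struct work_struct, pcpu_drain);

/* Work callback: runs on the target CPU and drains its own
 * per-cpu page lists. */
static void drain_local_pages_wq(struct work_struct *work)
{
	drain_local_pages(NULL);
}

void drain_all_pages_sketch(void)
{
	int cpu;

	/* Queue a drain on every online CPU... */
	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(&pcpu_drain, cpu);

		INIT_WORK(work, drain_local_pages_wq);
		schedule_work_on(cpu, work);
	}

	/* ...then wait for all of them to finish, so the caller sees
	 * fully drained per-cpu lists on return. */
	for_each_online_cpu(cpu)
		flush_work(per_cpu_ptr(&pcpu_drain, cpu));
}
```

Because work items sleep rather than run in IPI context, this path is usable even when the per-cpu lists may only be touched from interrupt-safe allocation contexts, at the cost of scheduling latency that the reclaim-failure path can tolerate.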
Diffstat (limited to 'drivers/gpu/drm/omapdrm/omap_gem.c')
0 files changed, 0 insertions, 0 deletions