path: root/Documentation/filesystems/caching/netfs-api.txt
author	Vlastimil Babka <[email protected]>	2017-09-06 23:20:51 +0000
committer	Linus Torvalds <[email protected]>	2017-09-07 00:27:26 +0000
commit	10903027948d768d9639b31e9a555802e2dabafc (patch)
tree	33a15c25e6bc0f687dcce1b9e7bf8675c7eaa524	/Documentation/filesystems/caching/netfs-api.txt
parent	mm, page_ext: periodically reschedule during page_ext_init() (diff)
download	kernel-10903027948d768d9639b31e9a555802e2dabafc.tar.gz
	kernel-10903027948d768d9639b31e9a555802e2dabafc.zip
mm, page_owner: don't grab zone->lock for init_pages_in_zone()
init_pages_in_zone() is run under zone->lock, which means a long lock
time and disabled interrupts on large machines.  This is currently not
an issue since it runs early in boot, but a later patch will change
that.

However, like other pfn scanners, we don't actually need zone->lock
even when other cpus are running.  The only potentially dangerous
operation here is reading bogus buddy page owner due to race, and we
already know how to handle that.  The worst that can happen is that we
skip some early allocated pages, which should not affect the debugging
power of page_owner noticeably.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Vinayak Menon <[email protected]>
Cc: zhong jiang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
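[Illustrative sketch, not the actual patch: the general shape of a
race-tolerant pfn scanner that walks a zone without taking zone->lock,
as the commit message describes.  The function name scan_zone_pages()
and the exact checks are hypothetical; a stale PageBuddy() result here
only causes some early allocated pages to be skipped.]

	static void scan_zone_pages(struct zone *zone)
	{
		unsigned long pfn = zone->zone_start_pfn;
		unsigned long end_pfn = zone_end_pfn(zone);

		for (; pfn < end_pfn; pfn++) {
			struct page *page;

			if (!pfn_valid(pfn))
				continue;

			page = pfn_to_page(pfn);

			/* Overlapping ranges may contain pages of another zone. */
			if (page_zone(page) != zone)
				continue;

			/*
			 * Without zone->lock this test can race with the page
			 * allocator; the worst case is skipping a page, which
			 * is acceptable for a debugging facility.
			 */
			if (PageBuddy(page) || PageReserved(page))
				continue;

			/* ... record page_owner information for this page ... */
		}
	}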
Diffstat (limited to 'Documentation/filesystems/caching/netfs-api.txt')
0 files changed, 0 insertions, 0 deletions