path: root/mm/page_ext.c
author	Yosry Ahmed <[email protected]>	2024-06-11 02:45:15 +0000
committer	Andrew Morton <[email protected]>	2024-07-04 02:30:08 +0000
commit	2d4d2b1cfb85cc07f6d5619acb882d8b11e55cf4 (patch)
tree	15a2db9e05347bdf1a51e71f3ebbb0e6812d21c8 /mm/page_ext.c
parent	mm: zswap: rename is_zswap_enabled() to zswap_is_enabled() (diff)
download	kernel-2d4d2b1cfb85cc07f6d5619acb882d8b11e55cf4.tar.gz
download	kernel-2d4d2b1cfb85cc07f6d5619acb882d8b11e55cf4.zip
mm: zswap: add zswap_never_enabled()
Add zswap_never_enabled() to skip the xarray lookup in zswap_load() if zswap was never enabled on the system. It is implemented using static branches for efficiency, as enabling zswap should be a rare event. This could shave some cycles off zswap_load() when CONFIG_ZSWAP is used but zswap is never enabled.

However, the real motivation behind this patch is two-fold:

- Incoming large folio swapin work will need to fall back to order-0 folios if zswap was ever enabled, because any part of the folio could be in zswap, until proper handling of large folios with zswap is added.

- A warning and recovery attempt will be added in a following change in case the above was not done correctly. Zswap will fail the read if the folio is large and it was ever enabled.

Expose zswap_never_enabled() in the header for the swapin work to use it later.

[[email protected]: expose zswap_never_enabled() in the header]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yosry Ahmed <[email protected]>
Reviewed-by: Nhat Pham <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Chengming Zhou <[email protected]>
Cc: Chris Li <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
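The pattern described above can be sketched in plain userspace C. The real kernel code uses a static key (static branch) so the never-enabled fast path costs essentially a patched-out jump; here an ordinary boolean flag stands in for the static key, and zswap_setup() and the zswap_load() body are simplified stand-ins, not the actual kernel implementation:

```c
#include <stdbool.h>

/* Stand-in for the kernel's static key: set once when zswap is
 * first enabled, and never cleared afterwards. */
static bool zswap_ever_enabled = false;

/* True only if zswap has never been turned on since boot. */
static bool zswap_never_enabled(void)
{
	return !zswap_ever_enabled;
}

/* Hypothetical enable path: flips the one-way flag. */
static void zswap_setup(void)
{
	zswap_ever_enabled = true;
}

/* Simplified load path: if zswap was never enabled, no folio can
 * possibly be in zswap, so skip the xarray lookup entirely. */
static bool zswap_load(void)
{
	if (zswap_never_enabled())
		return false;	/* nothing to look up */
	/* ... xarray lookup would happen here ... */
	return true;
}
```

The one-way nature of the flag is what makes the shortcut safe: once zswap has ever been enabled, any folio could have pages in it, so the lookup can never be skipped again.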
Diffstat (limited to 'mm/page_ext.c')
0 files changed, 0 insertions, 0 deletions