author     Zi Yan <[email protected]>            2025-03-14 22:21:13 +0000
committer  Andrew Morton <[email protected]>     2025-03-18 05:07:01 +0000
commit     d53c78fffe7ad364397c693522ceb4d152c2aacd (patch)
tree       f7a56bf9d6bb2865d18daee0a395d273c6dee70d /mm/sparse.c
parent     mm/filemap: use xas_try_split() in __filemap_add_folio() (diff)
download   kernel-d53c78fffe7ad364397c693522ceb4d152c2aacd.tar.gz
           kernel-d53c78fffe7ad364397c693522ceb4d152c2aacd.zip
mm/shmem: use xas_try_split() in shmem_split_large_entry()
During shmem_split_large_entry(), a large swap entry covers n slots and an order-0 folio needs to be inserted.

Instead of splitting all n slots, only the one slot covered by the folio needs to be split; the remaining n-1 shadow entries can be retained with orders ranging from 0 to n-1. This method only requires (n / XA_CHUNK_SHIFT) new xa_nodes instead of (n % XA_CHUNK_SHIFT) * (n / XA_CHUNK_SHIFT) new xa_nodes, compared to the original xas_split_alloc() + xas_split() approach. For example, to split an order-9 large swap entry (assuming XA_CHUNK_SHIFT is 6), 1 xa_node is needed instead of 8.

xas_try_split_min_order() is used to reduce the number of calls to xas_try_split() during the split.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Zi Yan <[email protected]>
Reviewed-by: Baolin Wang <[email protected]>
Tested-by: Baolin Wang <[email protected]>
Cc: Baolin Wang <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Kairui Song <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Miaohe Lin <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Kefeng Wang <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
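To illustrate the loop pattern the message describes, here is a minimal sketch of splitting a multi-order XArray entry down to a target order one xas_try_split() call at a time, using xas_try_split_min_order() to pick each step. The function name split_down_to_order(), its locking, and its error handling are illustrative assumptions, not the actual shmem_split_large_entry() change.

    #include <linux/xarray.h>

    /*
     * Illustrative sketch only: walk the multi-order entry at @index down
     * from @cur_order to @target_order, one xas_try_split() call per step.
     * xas_try_split_min_order() gives the lowest order a single call can
     * reach, which keeps both the number of calls and the number of newly
     * allocated xa_nodes small.
     */
    static int split_down_to_order(struct xarray *xa, unsigned long index,
                                   unsigned int cur_order,
                                   unsigned int target_order)
    {
            XA_STATE(xas, xa, index);

            while (cur_order > target_order) {
                    /* Lowest order reachable in one step, but not below target. */
                    unsigned int new_order = xas_try_split_min_order(cur_order);
                    void *entry;

                    if (new_order < target_order)
                            new_order = target_order;

                    xas_lock_irq(&xas);
                    xas_set_order(&xas, index, new_order);
                    entry = xas_load(&xas);
                    if (entry)
                            /* Splits the order-@cur_order entry, allocating at
                             * most one new xa_node. */
                            xas_try_split(&xas, entry, cur_order);
                    xas_unlock_irq(&xas);

                    if (xas_error(&xas))
                            return xas_error(&xas);

                    cur_order = new_order;
            }
            return 0;
    }

Stepping via xas_try_split_min_order() rather than decrementing the order by one each iteration is what keeps the call count low: each step drops straight to the next XA_CHUNK_SHIFT boundary, so an order-9 entry (with XA_CHUNK_SHIFT of 6) reaches order 0 in very few calls.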
Diffstat (limited to 'mm/sparse.c')
0 files changed, 0 insertions, 0 deletions