| author | Sidhartha Kumar <[email protected]> | 2025-05-28 19:20:13 +0000 |
|---|---|---|
| committer | Andrew Morton <[email protected]> | 2025-07-10 05:41:54 +0000 |
| commit | cdf48aa83279d4369ec6195f716468950c4440ca (patch) | |
| tree | 654af5d92620431fd21a17b76d59ca682ea1537b /mm/hugetlb.c | |
| parent | tools/testing/selftests: add VMA merge tests for KSM merge (diff) | |
mm/hugetlb: convert hugetlb_change_protection() to folios
The for loop inside hugetlb_change_protection() increments by the huge
page size:

```c
psize = huge_page_size(h);
for (; address < end; address += psize)
```

so we are operating on the head page of each huge page between address and
end. We can safely convert the struct page usage to struct folio.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Sidhartha Kumar <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Sidhartha Kumar <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Diffstat (limited to 'mm/hugetlb.c')
| -rw-r--r-- | mm/hugetlb.c | 4 |
1 file changed, 2 insertions, 2 deletions
```diff
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9dc95eac558c..7a7df0b2a561 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7166,11 +7166,11 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 			/* Nothing to do. */
 		} else if (unlikely(is_hugetlb_entry_migration(pte))) {
 			swp_entry_t entry = pte_to_swp_entry(pte);
-			struct page *page = pfn_swap_entry_to_page(entry);
+			struct folio *folio = pfn_swap_entry_folio(entry);
 			pte_t newpte = pte;

 			if (is_writable_migration_entry(entry)) {
-				if (PageAnon(page))
+				if (folio_test_anon(folio))
					entry = make_readable_exclusive_migration_entry(
							swp_offset(entry));
				else
```
