| author | Fan Ni <[email protected]> | 2025-05-05 18:22:43 +0000 |
|---|---|---|
| committer | Andrew Morton <[email protected]> | 2025-05-28 02:38:26 +0000 |
| commit | 7f4b6065d9a842721a04632fc219aa453d1b2f5c | |
| tree | 187d461e642e3653c03eac7b704fe1f5c6ffd240 /mm/hugetlb.c | |
| parent | mm/hugetlb: refactor unmap_hugepage_range() to take folio instead of page | |
mm/hugetlb: refactor __unmap_hugepage_range() to take folio instead of page
The function __unmap_hugepage_range() has two kinds of users:
1) unmap_hugepage_range(), which passes in the head page of a folio.
Since unmap_hugepage_range() already takes a folio and there is no other
use of the folio in that function, it is natural for
__unmap_hugepage_range() to take a folio as well.
2) All other callers, which pass in a NULL pointer.
In both cases a folio can be passed in, so refactor
__unmap_hugepage_range() to take a folio (see the sketch below).
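To illustrate, a minimal sketch of the two caller styles after the refactor. This is not part of the patch; the variables tlb, vma, start, end and zap_flags are assumed to be set up as in the callers' contexts in mm/hugetlb.c:

```c
/*
 * Minimal sketch, not part of the patch: the two caller styles after
 * the refactor. tlb, vma, start, end and zap_flags are assumed to be
 * set up as in the callers' contexts in mm/hugetlb.c.
 */

/* 1) unmap_hugepage_range() now passes its folio straight through: */
__unmap_hugepage_range(&tlb, vma, start, end, folio, zap_flags);

/*
 * 2) Range-based callers pass NULL, meaning "unmap the whole range"
 *    rather than one specific folio:
 */
__unmap_hugepage_range(&tlb, vma, start, end, NULL, zap_flags);
```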
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Fan Ni <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Sidhartha Kumar <[email protected]>
Cc: "Vishal Moola (Oracle)" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Diffstat (limited to 'mm/hugetlb.c')
| -rw-r--r-- | mm/hugetlb.c | 18 |
1 file changed, 9 insertions, 9 deletions
```diff
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c339ffe05556..443b75e116cf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5840,7 +5840,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 
 void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			    unsigned long start, unsigned long end,
-			    struct page *ref_page, zap_flags_t zap_flags)
+			    struct folio *folio, zap_flags_t zap_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address;
@@ -5913,12 +5913,12 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 		page = pte_page(pte);
 		/*
-		 * If a reference page is supplied, it is because a specific
-		 * page is being unmapped, not a range.  Ensure the page we
-		 * are about to unmap is the actual page of interest.
+		 * If a folio is supplied, it is because a specific
+		 * folio is being unmapped, not a range.  Ensure the folio we
+		 * are about to unmap is the actual folio of interest.
 		 */
-		if (ref_page) {
-			if (page != ref_page) {
+		if (folio) {
+			if (page_folio(page) != folio) {
 				spin_unlock(ptl);
 				continue;
 			}
@@ -5982,9 +5982,9 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		tlb_remove_page_size(tlb, page, huge_page_size(h));
 		/*
-		 * Bail out after unmapping reference page if supplied
+		 * If we were instructed to unmap a specific folio, we're done.
 		 */
-		if (ref_page)
+		if (folio)
 			break;
 	}
 	tlb_end_vma(tlb, vma);
 
@@ -6059,7 +6059,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 	tlb_gather_mmu(&tlb, vma->vm_mm);
 
 	__unmap_hugepage_range(&tlb, vma, start, end,
-			       &folio->page, zap_flags);
+			       folio, zap_flags);
 
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_finish_mmu(&tlb);
```
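A note on why the new check is equivalent: for a hugetlb mapping, pte_page(pte) yields the head page of the huge folio, so the old "page != ref_page" comparison (where callers passed the folio's head page) tests the same thing as comparing folios directly. A condensed sketch of the new check from the hunk above:

```c
/*
 * Condensed from the hunk above. page_folio() resolves any page to
 * its containing folio; for hugetlb, pte_page(pte) is the folio's
 * head page, so this matches the old "page != ref_page" test.
 */
if (folio && page_folio(page) != folio) {
	spin_unlock(ptl);	/* not the folio of interest, skip it */
	continue;
}
```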
