| field | value | date |
|---|---|---|
| author | Peter Xu <[email protected]> | 2023-06-28 21:53:06 +0000 |
| committer | Andrew Morton <[email protected]> | 2023-08-18 17:12:03 +0000 |
| commit | ffe1e7861211aafe12977a3ed2f11bb6fe1e77ea | |
| tree | 2efe71d6b9efa7f95ce6e96ee1956316ebfe2bab | |
| parent | mm/hugetlb: add page_mask for hugetlb_follow_page_mask() | |
mm/gup: cleanup next_page handling
The only path that doesn't use the generic "**pages" handling is the gate vma.
Make it use the same path, and move the next_page label up so that it also
covers the "**pages" handling. This prepares for THP handling of "**pages".
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Lorenzo Stoakes <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: James Houghton <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Yang Shi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
mm/gup.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
```diff
@@ -1207,7 +1207,7 @@ static long __get_user_pages(struct mm_struct *mm,
 		if (!vma && in_gate_area(mm, start)) {
 			ret = get_gate_page(mm, start & PAGE_MASK,
 					gup_flags, &vma,
-					pages ? &pages[i] : NULL);
+					pages ? &page : NULL);
 			if (ret)
 				goto out;
 			ctx.page_mask = 0;
@@ -1277,19 +1277,18 @@ retry:
 				ret = PTR_ERR(page);
 				goto out;
 			}
-
-			goto next_page;
 		} else if (IS_ERR(page)) {
 			ret = PTR_ERR(page);
 			goto out;
 		}
+next_page:
 		if (pages) {
 			pages[i] = page;
 			flush_anon_page(vma, page, start);
 			flush_dcache_page(page);
 			ctx.page_mask = 0;
 		}
-next_page:
+
 		page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
 		if (page_increm > nr_pages)
 			page_increm = nr_pages;
```
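The shared tail also contains the page_increm computation, 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask), which is what the THP preparation is about: with a non-zero page_mask, one loop iteration can advance past all remaining base pages of a huge page. A small worked check of that formula, assuming 4 KiB base pages and a 2 MiB THP (so page_mask = 511):

```c
#include <stdio.h>

#define PAGE_SHIFT 12	/* assume 4 KiB base pages */

int main(void)
{
	unsigned long page_mask = 511;	/* 2 MiB THP = 512 base pages */
	/* start points at the 5th base page inside the huge page */
	unsigned long start = (1UL << 21) + 4 * (1UL << PAGE_SHIFT);
	unsigned long page_increm = 1 + (~(start >> PAGE_SHIFT) & page_mask);

	/* pages left from start to the end of the huge page: 512 - 4 = 508 */
	printf("page_increm = %lu\n", page_increm);	/* prints 508 */
	return 0;
}
```

The low nine bits of start >> PAGE_SHIFT are 4, so the complement masked by page_mask is 507 and page_increm is 508: exactly the base pages remaining in the huge page, capped against nr_pages in the hunk above.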
