| author | Mike Kravetz <[email protected]> | 2023-01-04 00:27:32 +0000 |
|---|---|---|
| committer | Andrew Morton <[email protected]> | 2023-01-19 01:12:55 +0000 |
| commit | e9adcfecf572fcfaa9f8525904cf49c709974f73 | |
| tree | a80baa77042b0cafac34b8953541966b60794b6f /net/ipv4/tcp.c | |
| parent | mm/kasan: simplify and refine kasan_cache code | |
mm: remove zap_page_range and create zap_vma_pages
zap_page_range was originally designed to unmap pages within an address
range that could span multiple vmas. While working on [1], it was
discovered that all callers of zap_page_range pass a range entirely within
a single vma. In addition, the mmu notification call within
zap_page_range does not correctly handle ranges that span multiple vmas. When
crossing a vma boundary, a new mmu_notifier_range_init/end call pair with
the new vma should be made.
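For reference, the two interfaces involved are declared roughly as follows (prototypes as of this series; zap_page_range_single() takes a struct zap_details pointer, and callers that want everything in the range unmapped pass NULL):

```c
/* Removed by this patch: nominally usable across multiple vmas, but
 * the mmu notifier bracketing described above was only set up once,
 * for the first vma in the range.
 */
void zap_page_range(struct vm_area_struct *vma, unsigned long start,
                    unsigned long size);

/* Single-vma replacement: a NULL details pointer means "no filtering",
 * i.e. unmap every page in [address, address + size).
 */
void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
                           unsigned long size, struct zap_details *details);
```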
Instead of fixing zap_page_range, do the following:
- Create a new routine zap_vma_pages() that will remove all pages within
the passed vma. Most users of zap_page_range pass the entire vma and
can use this new routine (see the sketch after this list).
- For callers of zap_page_range not passing the entire vma, instead call
zap_page_range_single().
- Remove zap_page_range.
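A minimal sketch of the new helper, assuming it is a thin wrapper that hands the vma's full range to zap_page_range_single() (consistent with the description above; the exact placement and linkage in the patch may differ):

```c
/* Sketch: remove all user pages mapped by @vma.  Passing NULL
 * zap_details unmaps everything; the range never crosses a vma
 * boundary, so a single notifier init/end pair is correct.
 */
static inline void zap_vma_pages(struct vm_area_struct *vma)
{
        zap_page_range_single(vma, vma->vm_start,
                              vma->vm_end - vma->vm_start, NULL);
}
```

A caller that previously wrote zap_page_range(vma, vma->vm_start, vma->vm_end - vma->vm_start) then collapses to zap_vma_pages(vma).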
[1] https://lore.kernel.org/linux-mm/[email protected]/
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mike Kravetz <[email protected]>
Suggested-by: Peter Xu <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Peter Xu <[email protected]>
Acked-by: Heiko Carstens <[email protected]> [s390]
Reviewed-by: Christoph Hellwig <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Christian Brauner <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nadav Amit <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Diffstat (limited to 'net/ipv4/tcp.c')
| -rw-r--r-- | net/ipv4/tcp.c | 7 |
1 file changed, 4 insertions(+), 3 deletions(-)
```diff
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index c567d5e8053e..f713c0422f0f 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2092,7 +2092,7 @@ static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
 		maybe_zap_len = total_bytes_to_map -	/* All bytes to map */
 				*length +		/* Mapped or pending */
 				(pages_remaining * PAGE_SIZE); /* Failed map. */
-		zap_page_range(vma, *address, maybe_zap_len);
+		zap_page_range_single(vma, *address, maybe_zap_len, NULL);
 		err = 0;
 	}
 
@@ -2100,7 +2100,7 @@ static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
 		unsigned long leftover_pages = pages_remaining;
 		int bytes_mapped;
 
-		/* We called zap_page_range, try to reinsert. */
+		/* We called zap_page_range_single, try to reinsert. */
 		err = vm_insert_pages(vma, *address, pending_pages,
 				      &pages_remaining);
 
@@ -2234,7 +2234,8 @@ static int tcp_zerocopy_receive(struct sock *sk,
 	total_bytes_to_map = avail_len & ~(PAGE_SIZE - 1);
 	if (total_bytes_to_map) {
 		if (!(zc->flags & TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT))
-			zap_page_range(vma, address, total_bytes_to_map);
+			zap_page_range_single(vma, address, total_bytes_to_map,
+					      NULL);
 		zc->length = total_bytes_to_map;
 		zc->recv_skip_hint = 0;
 	} else {
```
