| author | Jakub Kicinski <[email protected]> | 2025-04-14 23:30:35 +0000 |
|---|---|---|
| committer | Jakub Kicinski <[email protected]> | 2025-04-14 23:30:36 +0000 |
| commit | 63ce43f2d7da1f863f43fb1bcc9422466887dc6c (patch) | |
| tree | f45d18f63f5071e158df2f9cee8d6f1c7a4ca17a /net/core/skbuff.c | |
| parent | Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/... (diff) | |
| parent | page_pool: Track DMA-mapped pages and unmap them when destroying the pool (diff) | |
Merge branch 'fix-late-dma-unmap-crash-for-page-pool'
Toke Høiland-Jørgensen says:
====================
Fix late DMA unmap crash for page pool
This series fixes the late dma_unmap crash for page pool, first reported
by Yonglong Liu in [0]. It is an alternative approach to the one
submitted by Yunsheng Lin, most recently in [1]. The first commit just
wraps some tests in a helper function, in preparation for the main change
in patch 2. See the commit message of patch 2 for the details.
[0] https://lore.kernel.org/[email protected]
[1] https://lore.kernel.org/[email protected]
v8: https://lore.kernel.org/[email protected]
v7: https://lore.kernel.org/[email protected]
v6: https://lore.kernel.org/[email protected]
v5: https://lore.kernel.org/[email protected]
v4: https://lore.kernel.org/[email protected]
v3: https://lore.kernel.org/[email protected]
v2: https://lore.kernel.org/[email protected]
v1: https://lore.kernel.org/[email protected]
====================
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
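The title of the second parent commit summarizes the actual fix: the pool keeps track of every page it has DMA-mapped and unmaps whatever is still outstanding when the pool itself is destroyed, so a page that straggles back after the driver has torn down its device no longer triggers a late dma_unmap against a dead device. Below is a simplified sketch of that idea, not the merged code: the `dma_mapped` xarray field and both `sketch_*` helpers are illustrative assumptions.

```c
#include <linux/xarray.h>
#include <linux/dma-mapping.h>
#include <net/page_pool/helpers.h>

/* Sketch only: record each DMA-mapped page so the pool can unmap
 * stragglers at destroy time. Names (dma_mapped, sketch_*) are
 * assumptions for illustration, not the merged kernel code.
 */
static int sketch_track_mapping(struct page_pool *pool, struct page *page)
{
	u32 id;

	/* Remember the mapping; the id would be stashed in the page's
	 * pp_magic bits so a late return can erase its own entry.
	 */
	return xa_alloc(&pool->dma_mapped, &id, page,
			XA_LIMIT(1, U32_MAX), GFP_ATOMIC);
}

static void sketch_unmap_leftovers(struct page_pool *pool)
{
	struct page *page;
	unsigned long id;

	/* Anything still present here was never returned through the
	 * normal path: unmap it while pool->p.dev is still valid.
	 */
	xa_for_each(&pool->dma_mapped, id, page)
		dma_unmap_page_attrs(pool->p.dev,
				     page_pool_get_dma_addr(page),
				     PAGE_SIZE << pool->p.order,
				     pool->p.dma_dir,
				     DMA_ATTR_SKIP_CPU_SYNC);
	xa_destroy(&pool->dma_mapped);
}
```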
Diffstat (limited to 'net/core/skbuff.c')
| -rw-r--r-- | net/core/skbuff.c | 16 |
1 file changed, 2 insertions, 14 deletions
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 6cbf77bc61fc..74a2d886a35b 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -893,11 +893,6 @@ static void skb_clone_fraglist(struct sk_buff *skb)
 		skb_get(list);
 }
 
-static bool is_pp_netmem(netmem_ref netmem)
-{
-	return (netmem_get_pp_magic(netmem) & ~0x3UL) == PP_SIGNATURE;
-}
-
 int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
 		    unsigned int headroom)
 {
@@ -995,14 +990,7 @@ bool napi_pp_put_page(netmem_ref netmem)
 {
 	netmem = netmem_compound_head(netmem);
 
-	/* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
-	 * in order to preserve any existing bits, such as bit 0 for the
-	 * head page of compound page and bit 1 for pfmemalloc page, so
-	 * mask those bits for freeing side when doing below checking,
-	 * and page_is_pfmemalloc() is checked in __page_pool_put_page()
-	 * to avoid recycling the pfmemalloc page.
-	 */
-	if (unlikely(!is_pp_netmem(netmem)))
+	if (unlikely(!netmem_is_pp(netmem)))
 		return false;
 
 	page_pool_put_full_netmem(netmem_get_pp(netmem), netmem, false);
@@ -1042,7 +1030,7 @@ static int skb_pp_frag_ref(struct sk_buff *skb)
 
 	for (i = 0; i < shinfo->nr_frags; i++) {
 		head_netmem = netmem_compound_head(shinfo->frags[i].netmem);
-		if (likely(is_pp_netmem(head_netmem)))
+		if (likely(netmem_is_pp(head_netmem)))
 			page_pool_ref_netmem(head_netmem);
 		else
 			page_ref_inc(netmem_to_page(head_netmem));
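On the skbuff.c side the check itself is untouched; the merge simply replaces the file-local is_pp_netmem() with a shared netmem_is_pp() helper that page_pool code can use as well. Reconstructed from the removed lines above, the consolidated helper would look roughly like this (a sketch: the series puts it in a shared netmem header, and the DMA-tracking patch later reworks the pp_magic layout, so the exact mask in-tree may differ):

```c
/* Sketch of the shared helper replacing skbuff.c's is_pp_netmem(),
 * reconstructed from the removed lines above; the in-tree version may
 * differ once DMA-index bits are folded into pp_magic.
 */
static inline bool netmem_is_pp(netmem_ref netmem)
{
	/* pp_magic is OR'ed with PP_SIGNATURE at allocation time, and
	 * bits 0-1 can carry compound-head/pfmemalloc state, so mask
	 * those off before comparing; pfmemalloc pages themselves are
	 * rejected later in __page_pool_put_page().
	 */
	return (netmem_get_pp_magic(netmem) & ~0x3UL) == PP_SIGNATURE;
}
```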
