commit 8ab79ed50cf10f338465c296012500de1081646f
Author:    Mina Almasry <[email protected]>     2024-09-10 17:14:49 +0000
Committer: Jakub Kicinski <[email protected]>  2024-09-12 03:44:31 +0000
Tree:      5cefb830ff8e266f2423b7ad6e35826c5da4e812 /net/core/devmem.c
Parent:    netdev: netdevice devmem allocator
page_pool: devmem support
Convert netmem to be a union of struct page and struct netmem. Overload
the LSB of struct netmem* to indicate that it's a net_iov, otherwise
it's a page.
Currently these entries in struct page are rented by the page_pool and
used exclusively by the net stack:
	struct {
		unsigned long pp_magic;
		struct page_pool *pp;
		unsigned long _pp_mapping_pad;
		unsigned long dma_addr;
		atomic_long_t pp_ref_count;
	};
Mirror these (and only these) entries into struct net_iov and implement
netmem helpers that can access these common fields regardless of
whether the underlying type is page or net_iov.
In netmem helpers that delegate to mm APIs, implement checks for
net_iov, to ensure a net_iov is never passed to the mm stack.
Signed-off-by: Mina Almasry <[email protected]>
Reviewed-by: Pavel Begunkov <[email protected]>
Acked-by: Jakub Kicinski <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
Diffstat (limited to 'net/core/devmem.c')
 net/core/devmem.c | 7 +++++++
 1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 9beb03763dc9..7efeb602cf45 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -18,6 +18,7 @@
 #include <trace/events/page_pool.h>
 
 #include "devmem.h"
+#include "page_pool_priv.h"
 
 /* Device memory support */
@@ -82,6 +83,10 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
 	index = offset / PAGE_SIZE;
 	niov = &owner->niovs[index];
 
+	niov->pp_magic = 0;
+	niov->pp = NULL;
+	atomic_long_set(&niov->pp_ref_count, 0);
+
 	return niov;
 }
@@ -269,6 +274,8 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 	for (i = 0; i < owner->num_niovs; i++) {
 		niov = &owner->niovs[i];
 		niov->owner = owner;
+		page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
+					      net_devmem_get_dma_addr(niov));
 	}
 
 	virtual += len;
