| author | Leon Romanovsky <[email protected]> | 2022-11-11 09:35:12 +0000 |
|---|---|---|
| committer | Leon Romanovsky <[email protected]> | 2022-11-11 09:35:12 +0000 |
| commit | 1ec5617432abc3efeec36c4e584a700f6c7e46f9 (patch) | |
| tree | fc5830462eb3afb740f9ed48052c1a46ff85f2c1 /mm/page_alloc.c | |
| parent | RDMA/rxe: Replace pr_xxx by rxe_dbg_xxx in rxe_mmap.c (diff) | |
| parent | net: mana: Define data structures for protection domain and memory registration (diff) | |
Merge branch 'mana-shared-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Long Li says:
====================
Introduce Microsoft Azure Network Adapter (MANA) RDMA driver [netdev prep]
The first 11 patches modify the MANA Ethernet driver to support the RDMA driver.
* 'mana-shared-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
net: mana: Define data structures for protection domain and memory registration
net: mana: Define data structures for allocating doorbell page from GDMA
net: mana: Define and process GDMA response code GDMA_STATUS_MORE_ENTRIES
net: mana: Define max values for SGL entries
net: mana: Move header files to a common location
net: mana: Record port number in netdev
net: mana: Export Work Queue functions for use by RDMA driver
net: mana: Set the DMA device max segment size
net: mana: Handle vport sharing between devices
net: mana: Record the physical address for doorbell page region
net: mana: Add support for auxiliary device
====================
Link: https://lore.kernel.org/all/[email protected]/
Signed-off-by: Leon Romanovsky <[email protected]>
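The last patch in the series above, "net: mana: Add support for auxiliary device", relies on the kernel's auxiliary bus so the Ethernet driver can publish a device that the RDMA driver later binds to. Below is a minimal, hedged sketch of that producer-side pattern; the names (`my_eth_add_rdma_adev`, the `"rdma"` device name, the `parent` argument) are illustrative stand-ins, not the actual MANA identifiers.

```c
/*
 * Illustrative sketch only, not the MANA code: an Ethernet driver
 * publishes an auxiliary device on the auxiliary bus so a companion
 * RDMA driver can bind to it later.
 */
#include <linux/auxiliary_bus.h>
#include <linux/slab.h>

static void my_eth_adev_release(struct device *dev)
{
	struct auxiliary_device *adev =
		container_of(dev, struct auxiliary_device, dev);

	kfree(adev);
}

static int my_eth_add_rdma_adev(struct device *parent)
{
	struct auxiliary_device *adev;
	int ret;

	adev = kzalloc(sizeof(*adev), GFP_KERNEL);
	if (!adev)
		return -ENOMEM;

	adev->name = "rdma";		/* matched as "<modname>.rdma" */
	adev->id = 0;
	adev->dev.parent = parent;
	adev->dev.release = my_eth_adev_release;

	ret = auxiliary_device_init(adev);
	if (ret) {
		kfree(adev);
		return ret;
	}

	ret = auxiliary_device_add(adev);
	if (ret) {
		/* uninit drops the reference; release() frees the memory */
		auxiliary_device_uninit(adev);
		return ret;
	}
	return 0;
}
```

The consumer side would then register a `struct auxiliary_driver` whose `id_table` matches `"<modname>.rdma"` and receive this device in its probe callback.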
Diffstat (limited to 'mm/page_alloc.c')
| -rw-r--r-- | mm/page_alloc.c | 19 |
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e20ade858e71..218b28ee49ed 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -807,6 +807,7 @@ static void prep_compound_tail(struct page *head, int tail_idx)
 
 	p->mapping = TAIL_MAPPING;
 	set_compound_head(p, head);
+	set_page_private(p, 0);
 }
 
 void prep_compound_page(struct page *page, unsigned int order)
@@ -5784,14 +5785,18 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 		size_t size)
 {
 	if (addr) {
-		unsigned long alloc_end = addr + (PAGE_SIZE << order);
-		unsigned long used = addr + PAGE_ALIGN(size);
+		unsigned long nr = DIV_ROUND_UP(size, PAGE_SIZE);
+		struct page *page = virt_to_page((void *)addr);
+		struct page *last = page + nr;
 
-		split_page(virt_to_page((void *)addr), order);
-		while (used < alloc_end) {
-			free_page(used);
-			used += PAGE_SIZE;
-		}
+		split_page_owner(page, 1 << order);
+		split_page_memcg(page, 1 << order);
+		while (page < --last)
+			set_page_refcounted(last);
+
+		last = page + (1UL << order);
+		for (page += nr; page < last; page++)
+			__free_pages_ok(page, 0, FPI_TO_TAIL);
 	}
 	return (void *)addr;
 }
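The second hunk reworks make_alloc_exact() so that, instead of calling split_page() and then free_page() on byte addresses, it keeps only the DIV_ROUND_UP(size, PAGE_SIZE) pages the caller needs and returns the surplus tail pages of the order-sized block to the allocator via __free_pages_ok(..., FPI_TO_TAIL). A small userspace sketch of that page accounting follows; PAGE_SIZE and the sample order/size values are assumptions chosen for illustration, not kernel code.

```c
/*
 * Userspace illustration (not kernel code) of the accounting done by the
 * reworked make_alloc_exact(): an order-N allocation holds 1 << N pages,
 * the caller only needs DIV_ROUND_UP(size, PAGE_SIZE) of them, and the
 * surplus tail pages are handed back to the allocator.
 * PAGE_SIZE is assumed to be 4 KiB for this example.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int order = 3;                    /* 8-page (32 KiB) block */
	unsigned long size = 5 * PAGE_SIZE + 123;  /* caller wants ~5.03 pages */

	unsigned long total = 1UL << order;
	unsigned long nr = DIV_ROUND_UP(size, PAGE_SIZE);

	printf("order-%u block: %lu pages\n", order, total);
	printf("pages kept for the caller: %lu\n", nr);
	printf("surplus pages freed back (tail of the block): %lu\n",
	       total - nr);
	return 0;
}
```

For an order-3 (8-page) block and a request just over 5 pages, the sketch reports 6 pages kept and 2 pages freed back.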
