| author | Mike Rapoport (Microsoft) <[email protected]> | 2025-05-09 07:46:21 +0000 |
|---|---|---|
| committer | Andrew Morton <[email protected]> | 2025-05-13 06:50:39 +0000 |
| commit | b8a8f96a6dce527ad316184ff1e20f238ed413d8 (patch) | |
| tree | e1fe70993168e9b4503732158dcee00d088a3c02 /mm/mm_init.c | |
| parent | memblock: add support for scratch memory (diff) | |
memblock: introduce memmap_init_kho_scratch()
With deferred initialization of struct page, it will be necessary to
initialize the memory map for KHO scratch regions early.
Add a memmap_init_kho_scratch() helper that will allow such initialization
in upcoming patches.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mike Rapoport (Microsoft) <[email protected]>
Signed-off-by: Changyuan Lyu <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anthony Yznaga <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Ashish Kalra <[email protected]>
Cc: Ben Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Woodhouse <[email protected]>
Cc: Eric Biederman <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: James Gowans <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Krzysztof Kozlowski <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Pasha Tatashin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Pratyush Yadav <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Saravana Kannan <[email protected]>
Cc: Stanislav Kinsburskii <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Thomas Lendacky <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Diffstat (limited to 'mm/mm_init.c')
| -rw-r--r-- | mm/mm_init.c | 11 |
1 files changed, 8 insertions, 3 deletions
diff --git a/mm/mm_init.c b/mm/mm_init.c
index c275ae561b6f..62d7f551b295 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -743,7 +743,7 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 
-static void __meminit init_deferred_page(unsigned long pfn, int nid)
+static void __meminit __init_deferred_page(unsigned long pfn, int nid)
 {
 	if (early_page_initialised(pfn, nid))
 		return;
@@ -763,11 +763,16 @@ static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 
-static inline void init_deferred_page(unsigned long pfn, int nid)
+static inline void __init_deferred_page(unsigned long pfn, int nid)
 {
 }
 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
 
+void __meminit init_deferred_page(unsigned long pfn, int nid)
+{
+	__init_deferred_page(pfn, nid);
+}
+
 /*
  * Initialised pages do not have PageReserved set. This function is
  * called for each range allocated by the bootmem allocator and
@@ -784,7 +789,7 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
 	if (pfn_valid(start_pfn)) {
 		struct page *page = pfn_to_page(start_pfn);
 
-		init_deferred_page(start_pfn, nid);
+		__init_deferred_page(start_pfn, nid);
 
 		/*
 		 * no need for atomic set_bit because the struct
