| author | David Woodhouse <[email protected]> | 2025-04-23 13:33:43 +0000 |
|---|---|---|
| committer | Andrew Morton <[email protected]> | 2025-05-13 06:50:44 +0000 |
| commit | 31cf0dd94509eb61e7242e217aea9604621f6b6d | |
| tree | 2b13e87fa2d79653d29ddbf371ad06afe7c05873 /mm/mm_init.c | |
| parent | mm: use for_each_valid_pfn() in memory_hotplug | |
mm/mm_init: use for_each_valid_pfn() in init_unavailable_range()
Currently, memmap_init() initializes pfn_hole with 0 instead of
ARCH_PFN_OFFSET. init_unavailable_range() then starts iterating each
page from the page at address zero up to the first available page, but
it does nothing for pages below ARCH_PFN_OFFSET because pfn_valid()
fails for them.
If ARCH_PFN_OFFSET is very large (e.g., something like 2^64-2GiB if the
kernel is used as a library and loaded at a very high address), the
pointless iteration over the pages below ARCH_PFN_OFFSET takes a very
long time, and the kernel appears to hang at boot.
Use for_each_valid_pfn() to skip the pointless iterations.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: David Woodhouse <[email protected]>
Reported-by: Ruihan Li <[email protected]>
Suggested-by: Mike Rapoport <[email protected]>
Reviewed-by: Mike Rapoport (Microsoft) <[email protected]>
Tested-by: Ruihan Li <[email protected]>
Tested-by: Lorenzo Stoakes <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Diffstat (limited to 'mm/mm_init.c')
| -rw-r--r-- | mm/mm_init.c | 6 |
1 file changed, 1 insertion(+), 5 deletions(-)
```diff
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 7191703a5820..1c5444e188f8 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -851,11 +851,7 @@ static void __init init_unavailable_range(unsigned long spfn,
 	unsigned long pfn;
 	u64 pgcnt = 0;
 
-	for (pfn = spfn; pfn < epfn; pfn++) {
-		if (!pfn_valid(pageblock_start_pfn(pfn))) {
-			pfn = pageblock_end_pfn(pfn) - 1;
-			continue;
-		}
+	for_each_valid_pfn(pfn, spfn, epfn) {
 		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
 		__SetPageReserved(pfn_to_page(pfn));
 		pgcnt++;
```
