| author | David Hildenbrand <[email protected]> | 2024-06-07 09:09:37 +0000 |
|---|---|---|
| committer | Andrew Morton <[email protected]> | 2024-07-04 02:30:18 +0000 |
| commit | 503b158fc30f203a1854c87183ca3467c6466001 (patch) | |
| tree | ff936e821b2fe004f79968dddadbfe56655789bb /mm/mm_init.c | |
| parent | mm: pass meminit_context to __free_pages_core() (diff) | |
mm/memory_hotplug: initialize memmap of !ZONE_DEVICE with PageOffline() instead of PageReserved()
We currently initialize the memmap such that PG_reserved is set and the
refcount of the page is 1. In virtio-mem code, we have to manually clear
that PG_reserved flag to make memory offlining with partially hotplugged
memory blocks possible: has_unmovable_pages() would otherwise bail out on
such pages.
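For context, a paraphrased sketch of why PG_reserved gets in the way: the unmovable-page scan used by offlining (has_unmovable_pages()) treats any reserved page as unmovable and aborts. The helper name and exact structure below are illustrative, not the verbatim kernel source:

```c
/* Paraphrased shape of the unmovable-page scan; not the actual code. */
static struct page *scan_for_unmovable(unsigned long start_pfn,
				       unsigned long end_pfn)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		struct page *page = pfn_to_page(pfn);

		/* A reserved page counts as unmovable: bail out. */
		if (PageReserved(page))
			return page;

		/* PageOffline() pages can instead be recognized and skipped. */
		if (PageOffline(page))
			continue;
	}
	return NULL;
}
```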
We want to avoid PG_reserved where possible and move to typed pages
instead. Moreover, we want to further enlighten memory offlining code
about PG_offline: offline pages in an online memory section. One example
is handling managed page count adjustments in a cleaner way during memory
offlining.
So let's initialize the pages with PG_offline instead of PG_reserved.
generic_online_page()->__free_pages_core() will now clear that flag before
handing that memory to the buddy.
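A simplified sketch of the shape this takes (the context argument comes from the parent patch, "mm: pass meminit_context to __free_pages_core()"; details of the real mm/page_alloc.c implementation may differ):

```c
/* Simplified sketch; not the exact __free_pages_core() implementation. */
void __free_pages_core(struct page *page, unsigned int order,
		       enum meminit_context context)
{
	unsigned int loop, nr_pages = 1 << order;
	struct page *p = page;

	if (context == MEMINIT_HOTPLUG) {
		/*
		 * Hotplugged memmaps were initialized PageOffline() with a
		 * refcount of 1: clear the flag and drop the reference
		 * before handing the pages to the buddy.
		 */
		for (loop = 0; loop < nr_pages; loop++, p++) {
			__ClearPageOffline(p);
			set_page_count(p, 0);
		}
	} else {
		/* Early-boot pages are still initialized PG_reserved. */
		for (loop = 0; loop < nr_pages; loop++, p++) {
			__ClearPageReserved(p);
			set_page_count(p, 0);
		}
	}

	/* ...then hand the now-flag-free range to the buddy allocator... */
}
```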
Note that the page refcount is still 1 and would forbid offlining of such
memory except when special care is taken during GOING_OFFLINE, as currently
only implemented by virtio-mem.
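An illustration of that special care (a hypothetical helper modeled on virtio-mem's GOING_OFFLINE handling; the name and structure are assumptions):

```c
/*
 * Hypothetical sketch: during the MEM_GOING_OFFLINE notifier, a driver
 * that knows its PageOffline() pages still hold one reference can drop
 * that reference so offlining can proceed.
 */
static void drop_offline_page_refs(unsigned long pfn, unsigned long nr_pages)
{
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		struct page *page = pfn_to_page(pfn + i);

		if (PageOffline(page)) {
			/* Refcount 1 -> 0; the page no longer blocks offlining. */
			page_ref_dec(page);
		}
	}
}
```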
With this change, we can now get non-PageReserved() pages in the XEN
balloon list. From what I can tell, that can already happen via
decrease_reservation(), so that should be fine.
HV-balloon should not really observe a change: partially onlined memory
blocks still cannot get surprise-offlined, because the refcount of these
PageOffline() pages is 1.
Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
hotplugged pages are now PageOffline() instead of PageReserved() before
they are handed over to the buddy.
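As an illustration of the kind of driver-side adjustment involved (a hypothetical helper, not the actual virtio-mem/HV-balloon/XEN-balloon diff):

```c
/*
 * Hypothetical illustration: driver code that used to detect
 * "hotplugged but not yet exposed to the buddy" via PageReserved()
 * must now test PageOffline() instead.
 */
static bool page_not_onlined_yet(struct page *page)
{
	/* Old world: such pages were marked PG_reserved. */
	/* return PageReserved(page); */

	/* New world: they are typed as offline pages. */
	return PageOffline(page);
}
```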
We'll leave the ZONE_DEVICE case alone for now.
Note that self-hosted vmemmap pages will no longer be marked as
reserved. This matches ordinary vmemmap pages allocated from the buddy
during memory hotplug. Now, really only vmemmap pages allocated from
memblock during early boot will be marked reserved. Existing
PageReserved() checks seem to be handling all relevant cases correctly
even after this change.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: David Hildenbrand <[email protected]>
Acked-by: Oscar Salvador <[email protected]> [generic memory-hotplug bits]
Cc: Alexander Potapenko <[email protected]>
Cc: Dexuan Cui <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Eugenio Pérez <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: Jason Wang <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: "K. Y. Srinivasan" <[email protected]>
Cc: Marco Elver <[email protected]>
Cc: Michael S. Tsirkin <[email protected]>
Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Oleksandr Tyshchenko <[email protected]>
Cc: Stefano Stabellini <[email protected]>
Cc: Wei Liu <[email protected]>
Cc: Xuan Zhuo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Diffstat (limited to 'mm/mm_init.c')
| -rw-r--r-- | mm/mm_init.c | 10 |
1 file changed, 8 insertions(+), 2 deletions(-)
```diff
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 03874f624b32..c4bd97d3697f 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -893,8 +893,14 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 		page = pfn_to_page(pfn);
 		__init_single_page(page, pfn, zone, nid);
-		if (context == MEMINIT_HOTPLUG)
-			__SetPageReserved(page);
+		if (context == MEMINIT_HOTPLUG) {
+#ifdef CONFIG_ZONE_DEVICE
+			if (zone == ZONE_DEVICE)
+				__SetPageReserved(page);
+			else
+#endif
+				__SetPageOffline(page);
+		}
 
 		/*
 		 * Usually, we want to mark the pageblock MIGRATE_MOVABLE,
```
