| author | Jinjiang Tu <[email protected]> | 2025-07-24 09:09:56 +0000 |
|---|---|---|
| committer | Andrew Morton <[email protected]> | 2025-08-05 20:38:39 +0000 |
| commit | 45d19b4b6c2d422771c29b83462d84afcbb33f01 (patch) | |
| tree | 63c11d941c4b81fc3e8beb2101f0fdaa1c9579cc /fs | |
| parent | mm: fix the race between collapse and PT_RECLAIM under per-vma lock (diff) | |
mm/smaps: fix race between smaps_hugetlb_range and migration
smaps_hugetlb_range() reads the pte without holding the ptl, and may race
with concurrent migration, leading to a BUG_ON in pfn_swap_entry_to_page().
The race is as follows:

 smaps_hugetlb_range             migrate_pages
   huge_ptep_get
                                   remove_migration_ptes
                                   folio_unlock
   pfn_swap_entry_folio
     BUG_ON
To fix it, hold the ptl while reading the pte in smaps_hugetlb_range().
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 25ee01a2fca0 ("mm: hugetlb: proc: add hugetlb-related fields to /proc/PID/smaps")
Signed-off-by: Jinjiang Tu <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Cc: Andrei Vagin <[email protected]>
Cc: Andrii Nakryiko <[email protected]>
Cc: Baolin Wang <[email protected]>
Cc: Brahmajit Das <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dev Jain <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Joern Engel <[email protected]>
Cc: Kefeng Wang <[email protected]>
Cc: Lorenzo Stoakes <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: Thiago Jung Bauermann <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Diffstat (limited to 'fs')
| -rw-r--r-- | fs/proc/task_mmu.c | 6 |
1 files changed, 5 insertions, 1 deletions
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3d6d8a9f13fc..55bab10bc779 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1148,10 +1148,13 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
 {
 	struct mem_size_stats *mss = walk->private;
 	struct vm_area_struct *vma = walk->vma;
-	pte_t ptent = huge_ptep_get(walk->mm, addr, pte);
 	struct folio *folio = NULL;
 	bool present = false;
+	spinlock_t *ptl;
+	pte_t ptent;
 
+	ptl = huge_pte_lock(hstate_vma(vma), walk->mm, pte);
+	ptent = huge_ptep_get(walk->mm, addr, pte);
 	if (pte_present(ptent)) {
 		folio = page_folio(pte_page(ptent));
 		present = true;
@@ -1170,6 +1173,7 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
 		else
 			mss->private_hugetlb += huge_page_size(hstate_vma(vma));
 	}
+	spin_unlock(ptl);
 	return 0;
 }
 #else
