| author | Vlastimil Babka <[email protected]> | 2025-11-03 12:24:15 +0000 |
|---|---|---|
| committer | Vlastimil Babka <[email protected]> | 2025-11-06 07:13:12 +0000 |
| commit | c379b745e12a99f0a54bafaaf75fc710614511ce | |
| tree | d7c58fd5e3635bb3ccdf3dd9cd2dfd326685407e /lib/mpi/mpi-mod.c | |
| parent | slab: Fix obj_ext mistakenly considered NULL due to race condition | |
slab: prevent infinite loop in kmalloc_nolock() with debugging
While reviewing a followup work, Harry noticed a potential infinite loop.
Upon closer inspection, it already exists for kmalloc_nolock() on a
cache with debugging enabled, since commit af92793e52c3 ("slab:
Introduce kmalloc_nolock() and kfree_nolock().")
When alloc_single_from_new_slab() fails to trylock the node's list_lock,
we keep retrying to get a partial slab or allocate a new slab. If we
indeed interrupted somebody holding the list_lock, the trylock will fail
deterministically and we end up allocating and defer-freeing slabs
indefinitely with no progress.
To fix it, fail the allocation if spinning is not allowed. This is
acceptable in the restricted context of kmalloc_nolock(), especially
with debugging enabled.
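A minimal sketch of the idea (illustrative only, not the applied diff;
the placement in the slow path and names such as allow_spin and
new_objects follow the kmalloc_nolock() series but may differ from the
actual patch):

```c
	/*
	 * Illustrative sketch, not the applied patch. For a debug-enabled
	 * cache, alloc_single_from_new_slab() only trylocks the node's
	 * list_lock. If that fails and we must not spin (kmalloc_nolock()
	 * context), retrying cannot make progress, so fail the allocation
	 * instead of looping forever.
	 */
	if (kmem_cache_debug(s)) {
		freelist = alloc_single_from_new_slab(s, slab, orig_size);
		if (unlikely(!freelist)) {
			/* give up rather than retry when spinning is not allowed */
			if (!allow_spin)
				return NULL;
			goto new_objects;
		}
		return freelist;
	}
```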
Reported-by: Harry Yoo <[email protected]>
Closes: https://lore.kernel.org/all/aQLqZjjq1SPD3Fml@hyeyoo/
Fixes: af92793e52c3 ("slab: Introduce kmalloc_nolock() and kfree_nolock().")
Acked-by: Alexei Starovoitov <[email protected]>
Reviewed-by: Harry Yoo <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
