| author | Sean Christopherson <[email protected]> | 2025-02-04 00:40:36 +0000 |
|---|---|---|
| committer | Sean Christopherson <[email protected]> | 2025-02-14 15:17:40 +0000 |
| commit | 4834eaded91e5c90141540ccfb1af2bd40a4ac80 | |
| tree | 849145e9f2bf96f1d3b98e754181e39d9d5c3188 | |
| parent | KVM: x86/mmu: Refactor low level rmap helpers to prep for walking w/o mmu_lock | |
KVM: x86/mmu: Add infrastructure to allow walking rmaps outside of mmu_lock
Steal another bit from rmap entries (which are word-aligned pointers, i.e.
have 2 free bits on 32-bit KVM and 3 free bits on 64-bit KVM), and use
the bit to implement a *very* rudimentary per-rmap spinlock. The only
anticipated usage of the lock outside of mmu_lock is for aging gfns, and
collisions between aging and other MMU rmap operations are quite rare:
unless userspace is being silly and aging a tiny range over and over in a
tight loop, the time between contention events when aging an actively
running VM is on the order of seconds. In short, a more sophisticated
locking scheme shouldn't be necessary.
Note, the lock only protects the rmap structure itself; SPTEs that are
pointed at by a locked rmap can still be modified and zapped by another
task (KVM drops/zaps SPTEs before deleting the rmap entries).
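
The bit-stolen lock can be illustrated with a short userspace sketch. This
is a minimal model of the approach described above, assuming a single
global rmap word; the helper names, the use of bit 0, and the standalone
main() are hypothetical, not taken from the patch:

```c
/*
 * Minimal sketch of the idea, not the kernel code itself: a spinlock
 * implemented in a stolen low bit of a word-aligned pointer value.
 * The names (rmap_val, rmap_lock, rmap_unlock) and the choice of bit 0
 * are illustrative only.
 */
#include <stdatomic.h>
#include <stdint.h>

#define RMAP_LOCKED ((uintptr_t)1)

static _Atomic uintptr_t rmap_val;	/* pointer value + stolen lock bit */

/* Spin until the lock bit is clear, then set it; return the unlocked value. */
static uintptr_t rmap_lock(void)
{
	uintptr_t old;

	for (;;) {
		old = atomic_load_explicit(&rmap_val, memory_order_relaxed);
		if (old & RMAP_LOCKED)
			continue;	/* another walker holds the lock */
		if (atomic_compare_exchange_weak_explicit(&rmap_val, &old,
							  old | RMAP_LOCKED,
							  memory_order_acquire,
							  memory_order_relaxed))
			return old;
	}
}

/* Publish the (possibly updated) value and drop the lock in one store. */
static void rmap_unlock(uintptr_t new_val)
{
	atomic_store_explicit(&rmap_val, new_val & ~RMAP_LOCKED,
			      memory_order_release);
}

int main(void)
{
	uintptr_t val = rmap_lock();
	/* ... walk/update the rmap snapshot 'val' here ... */
	rmap_unlock(val);
	return 0;
}
```

The cmpxchg-acquire/store-release pairing is the essential design point:
the uncontended fast path costs a single atomic read-modify-write, and
contention simply spins, which is acceptable given that contention is
expected to be seconds apart.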
Co-developed-by: James Houghton <[email protected]>
Signed-off-by: James Houghton <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Sean Christopherson <[email protected]>
