| author | Sean Christopherson <[email protected]> | 2025-05-23 00:11:38 +0000 |
|---|---|---|
| committer | Sean Christopherson <[email protected]> | 2025-06-24 19:51:07 +0000 |
| commit | 9c4fe6d1509b386ab78f27dfaa2d128be77dc2d2 | |
| tree | 9672a63cbae562bd9235e1f05f6e435d6dd5242d /arch/x86/kvm/svm/nested.c | |
| parent | KVM: x86: Use kvzalloc() to allocate VM struct (diff) | |
KVM: x86/mmu: Defer allocation of shadow MMU's hashed page list
When the TDP MMU is enabled, i.e. when the shadow MMU isn't used until a
nested TDP VM is run, defer allocation of the array of hashed lists used
to track shadow MMU pages until the first shadow root is allocated.
Setting the list outside of mmu_lock is safe, as concurrent readers must
hold mmu_lock in some capacity, shadow pages can only be added to (or
removed from) the list when mmu_lock is held for write, and tasks that are
creating a shadow root are serialized by slots_arch_lock. I.e. it's impossible for
the list to become non-empty until all readers go away, and so readers are
guaranteed to see an empty list even if they make multiple calls to
kvm_get_mmu_page_hash() in a single mmu_lock critical section.
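The allocation scheme above can be sketched in userspace C. This is a hedged illustration, not the kernel code: the names (struct vm, page_hash, alloc_page_hash) are invented, a pthread mutex stands in for slots_arch_lock, and C11 atomics stand in for the kernel's smp_store_release():

```c
/* Illustrative sketch of deferring a hashed-list allocation until first
 * use; all identifiers are hypothetical stand-ins for the KVM code. */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

#define HASH_BUCKETS 4096

struct bucket { struct bucket *next; };

struct vm {
	_Atomic(struct bucket *) page_hash;	/* NULL until first shadow root */
	pthread_mutex_t arch_lock;		/* stand-in for slots_arch_lock */
};

/* Allocate and publish the hash table exactly once.  Racing callers are
 * serialized by arch_lock, so only one task ever performs the allocation;
 * the others observe the already-published pointer and return early. */
static int alloc_page_hash(struct vm *vm)
{
	struct bucket *hash;

	pthread_mutex_lock(&vm->arch_lock);
	if (atomic_load_explicit(&vm->page_hash, memory_order_relaxed)) {
		pthread_mutex_unlock(&vm->arch_lock);
		return 0;	/* already allocated by an earlier root creation */
	}

	hash = calloc(HASH_BUCKETS, sizeof(*hash));	/* zeroed, empty lists */
	if (!hash) {
		pthread_mutex_unlock(&vm->arch_lock);
		return -1;
	}

	/* Release store: the zeroing by calloc() is ordered before the
	 * pointer becomes visible to lockless readers. */
	atomic_store_explicit(&vm->page_hash, hash, memory_order_release);
	pthread_mutex_unlock(&vm->arch_lock);
	return 0;
}
```

A second call to alloc_page_hash() is a no-op, which mirrors how multiple shadow-root creations must not re-allocate (or leak) the table.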
Use smp_store_release() and smp_load_acquire() to access the hash table
pointer to ensure the stores to zero the lists are retired before readers
start to walk the list. E.g. if the compiler hoisted the store before the
zeroing of memory, for_each_gfn_valid_sp_with_gptes() could consume stale
kernel data.
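The reader side of that pairing can be sketched as follows. Again a hedged illustration with invented names (page_hash, get_page_hash_bucket), using a C11 acquire load where the kernel would use smp_load_acquire():

```c
/* Illustrative reader-side sketch; identifiers are hypothetical. */
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct bucket { struct bucket *next; };

static _Atomic(struct bucket *) page_hash;	/* published with release */

/* Return the bucket for @hash_idx, or NULL if the table has not been
 * published yet (every list is then trivially empty). */
static struct bucket *get_page_hash_bucket(size_t hash_idx)
{
	/* Acquire load pairs with the release store that published the
	 * table.  Without it, the walk could observe the pointer before
	 * the stores that zeroed the buckets, i.e. consume stale data. */
	struct bucket *hash = atomic_load_explicit(&page_hash,
						   memory_order_acquire);

	return hash ? &hash[hash_idx] : NULL;
}
```

The NULL return models the "list is guaranteed empty until the table is published" invariant described above: a reader that sees no table simply finds nothing.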
Cc: James Houghton <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Sean Christopherson <[email protected]>
