| author | Marc Zyngier <[email protected]> | 2024-12-17 14:23:13 +0000 |
|---|---|---|
| committer | Marc Zyngier <[email protected]> | 2025-01-02 19:19:09 +0000 |
| commit | 338f8ea51944d02ea29eadb3d5fa9196e74a100d (patch) | |
| tree | ea99c5a6a28f4191a96420e7929a84fb4ba596da /include/kvm/arm_arch_timer.h | |
| parent | KVM: arm64: nv: Use FEAT_ECV to trap access to EL0 timers (diff) | |
| download | kernel-338f8ea51944d02ea29eadb3d5fa9196e74a100d.tar.gz kernel-338f8ea51944d02ea29eadb3d5fa9196e74a100d.zip | |
KVM: arm64: nv: Accelerate EL0 timer read accesses when FEAT_ECV in use
Although FEAT_ECV allows us to correctly emulate the timers, it also
reduces performance pretty badly.
Mitigate this by emulating the CTL/CVAL register reads in the
inner run loop, without returning to the general kernel.
Acked-by: Oliver Upton <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Marc Zyngier <[email protected]>
Diffstat (limited to 'include/kvm/arm_arch_timer.h')
| -rw-r--r-- | include/kvm/arm_arch_timer.h | 15 |
1 file changed, 15 insertions, 0 deletions
```diff
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index 6e3f6b7ff2b2..c1ba31fab6f5 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -156,4 +156,19 @@ static inline bool has_cntpoff(void)
 	return (has_vhe() && cpus_have_final_cap(ARM64_HAS_ECV_CNTPOFF));
 }
 
+static inline u64 timer_get_offset(struct arch_timer_context *ctxt)
+{
+	u64 offset = 0;
+
+	if (!ctxt)
+		return 0;
+
+	if (ctxt->offset.vm_offset)
+		offset += *ctxt->offset.vm_offset;
+	if (ctxt->offset.vcpu_offset)
+		offset += *ctxt->offset.vcpu_offset;
+
+	return offset;
+}
+
 #endif
```
