author    Andrew Waterman <[email protected]>    2017-10-25 21:30:32 +0000
committer Palmer Dabbelt <[email protected]>    2017-11-30 20:58:25 +0000
commit    08f051eda33b51e8ee0f45f05bcfe49d0f0caf6b (patch)
tree      46a1e3577de686377e859c7f346299e9ea726260 /arch/riscv/include/asm/mmu.h
parent    RISC-V: Add VDSO entries for clock_get/gettimeofday/getcpu (diff)
RISC-V: Flush I$ when making a dirty page executable
The RISC-V ISA allows for instruction caches that are not coherent WRT stores, even on a single hart. As a result, we need to explicitly flush the instruction cache whenever marking a dirty page as executable in order to preserve correct system behavior.

Local instruction caches aren't that scary (our implementations actually flush the cache, but RISC-V is defined to allow higher-performance implementations to exist), but RISC-V defines no way to perform an instruction cache shootdown. When explicitly asked to do so we can shoot down remote instruction caches via an IPI, but this is a bit on the slow side.

Instead of requiring an IPI to all harts whenever marking a page as executable, we only flush the instruction cache on the currently running hart. In order to maintain correct behavior, we additionally mark every other hart as needing a deferred instruction cache flush, which will be performed before anything runs on it.

Signed-off-by: Andrew Waterman <[email protected]>
Signed-off-by: Palmer Dabbelt <[email protected]>
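For illustration only, here is a minimal C sketch of the "flush locally, defer remotely" scheme described above. It is not the code added by this patch; the helper sketch_flush_icache_mm() is hypothetical, and it assumes a local_flush_icache_all() primitive plus the standard kernel cpumask API.

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/smp.h>

/*
 * Sketch: flush the I$ on the current hart and defer the flush on every
 * other hart by setting its bit in the mm's icache_stale_mask (the field
 * added by this patch). Surrounding details are illustrative.
 */
static void sketch_flush_icache_mm(struct mm_struct *mm)
{
	unsigned int cpu = get_cpu();
	cpumask_t others;

	/* Flush the instruction cache of the hart we are running on. */
	local_flush_icache_all();

	/* Every other hart that has used this mm gets a deferred flush. */
	cpumask_andnot(&others, mm_cpumask(mm), cpumask_of(cpu));
	cpumask_or(&mm->context.icache_stale_mask,
		   &mm->context.icache_stale_mask, &others);

	put_cpu();
}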
Diffstat (limited to 'arch/riscv/include/asm/mmu.h')
-rw-r--r--  arch/riscv/include/asm/mmu.h | 4
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
index 66805cba9a27..5df2dccdba12 100644
--- a/arch/riscv/include/asm/mmu.h
+++ b/arch/riscv/include/asm/mmu.h
@@ -19,6 +19,10 @@
 typedef struct {
 	void *vdso;
+#ifdef CONFIG_SMP
+	/* A local icache flush is needed before user execution can resume. */
+	cpumask_t icache_stale_mask;
+#endif
 } mm_context_t;
 #endif /* __ASSEMBLY__ */
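The diff above only adds the bookkeeping field. As a rough sketch of how the deferred flush could be consumed on the context-switch path (assumed, not part of this diff, with the hypothetical helper name sketch_flush_icache_deferred() and an assumed local_flush_icache_all() primitive):

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/smp.h>

/*
 * Sketch: before the incoming mm runs on this hart, test and clear this
 * hart's bit in icache_stale_mask and, if it was set, perform the local
 * instruction cache flush that was deferred earlier.
 */
static inline void sketch_flush_icache_deferred(struct mm_struct *mm)
{
#ifdef CONFIG_SMP
	unsigned int cpu = smp_processor_id();
	cpumask_t *mask = &mm->context.icache_stale_mask;

	if (cpumask_test_cpu(cpu, mask)) {
		cpumask_clear_cpu(cpu, mask);
		/*
		 * Order the bit clear before the flush so a flush request
		 * that races with this one is not lost.
		 */
		smp_mb();
		local_flush_icache_all();
	}
#endif
}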