author     David Howells <[email protected]>          2025-03-14 16:41:58 +0000
committer  Christian Brauner <[email protected]>  2025-03-19 09:04:22 +0000
commit     15e9aaf9fc494d1a7280bf1184b4b5830c095209 (patch)
tree       e06e06e1a73cd82a39fc3db3ac47b3770c60915c
parent     netfs: Call `invalidate_cache` only if implemented (diff)
netfs: Fix rolling_buffer_load_from_ra() to not clear mark bits
rolling_buffer_load_from_ra() looms large in the perf report because it
loops around doing an atomic clear for each of the three mark bits per
folio. However, this is both inefficient (it would be better to build a
mask and atomically AND them out) and unnecessary as they shouldn't be
set. Fix this by removing the loop.

Fixes: ee4cdf7ba857 ("netfs: Speed up buffered reading")
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Acked-by: "Paulo Alcantara (Red Hat)" <[email protected]>
cc: Jeff Layton <[email protected]>
cc: Steve French <[email protected]>
cc: Paulo Alcantara <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>
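As an aside, the "build a mask and atomically AND them out" alternative the
message alludes to can be illustrated with plain C11 atomics. This is only a
userspace sketch of the general point (one masked atomic RMW instead of one
per bit), not the netfs/folio_queue API; the MARK_* names are hypothetical,
and the commit's actual fix is simply to delete the loop because the mark
bits are never set at this point.

	#include <stdatomic.h>
	#include <stdio.h>

	/* Three per-slot "mark" bits, illustrative names only. */
	#define MARK_0	(1u << 0)
	#define MARK_1	(1u << 1)
	#define MARK_2	(1u << 2)

	int main(void)
	{
		atomic_uint marks = MARK_0 | MARK_1 | MARK_2;

		/* Costly pattern: one atomic read-modify-write per bit,
		 * analogous to the removed per-folio loop. */
		atomic_fetch_and(&marks, ~MARK_0);
		atomic_fetch_and(&marks, ~MARK_1);
		atomic_fetch_and(&marks, ~MARK_2);

		atomic_store(&marks, MARK_0 | MARK_1 | MARK_2);

		/* Cheaper pattern: build one mask and clear all three
		 * bits with a single atomic AND. */
		atomic_fetch_and(&marks, ~(MARK_0 | MARK_1 | MARK_2));

		printf("marks = %#x\n", atomic_load(&marks));
		return 0;
	}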
-rw-r--r--  fs/netfs/rolling_buffer.c  4
1 files changed, 0 insertions, 4 deletions
diff --git a/fs/netfs/rolling_buffer.c b/fs/netfs/rolling_buffer.c
index 75d97af14b4a..207b6a326651 100644
--- a/fs/netfs/rolling_buffer.c
+++ b/fs/netfs/rolling_buffer.c
@@ -146,10 +146,6 @@ ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll,
 	/* Store the counter after setting the slot. */
 	smp_store_release(&roll->next_head_slot, to);
-
-	for (; ix < folioq_nr_slots(fq); ix++)
-		folioq_clear(fq, ix);
-
 	return size;
 }