| author | David Howells <[email protected]> | 2025-05-19 09:07:04 +0000 |
|---|---|---|
| committer | Christian Brauner <[email protected]> | 2025-05-21 12:35:21 +0000 |
| commit | 2b1424cd131cfaba4cf7040473133d26cddac088 (patch) | |
| tree | cf279e47bbadebd7cad6cd589641c4d99f0da644 /fs/netfs/direct_read.c | |
| parent | netfs: Fix the request's work item to not require a ref (diff) | |
netfs: Fix wait/wake to be consistent about the waitqueue used
Fix further inconsistencies in the use of waitqueues
(clear_and_wake_up_bit() vs private waitqueue).
Move some of this logic from the read and write sides into common code so
that it can be done in fewer places.
To make this work, async I/O needs to set NETFS_RREQ_OFFLOAD_COLLECTION to
indicate that a workqueue will do the collecting, and places that call the
wait function need to handle it returning the amount transferred.
Fixes: e2d46f2ec332 ("netfs: Change the read result collector to only use one work item")
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/[email protected]
cc: Marc Dionne <[email protected]>
cc: Steve French <[email protected]>
cc: Ihor Solodrai <[email protected]>
cc: Eric Van Hensbergen <[email protected]>
cc: Latchesar Ionkov <[email protected]>
cc: Dominique Martinet <[email protected]>
cc: Christian Schoenebeck <[email protected]>
cc: Paulo Alcantara <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>
Diffstat (limited to 'fs/netfs/direct_read.c')
| -rw-r--r-- | fs/netfs/direct_read.c | 4 |
1 file changed, 2 insertions(+), 2 deletions(-)
```diff
diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index cb3c6dc0b165..a24e63d2c818 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -103,7 +103,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
 		rreq->netfs_ops->issue_read(subreq);
 
 		if (test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
-			netfs_wait_for_pause(rreq);
+			netfs_wait_for_paused_read(rreq);
 		if (test_bit(NETFS_RREQ_FAILED, &rreq->flags))
 			break;
 		if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) &&
@@ -115,7 +115,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
 	if (unlikely(size > 0)) {
 		smp_wmb(); /* Write lists before ALL_QUEUED. */
 		set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
-		netfs_wake_read_collector(rreq);
+		netfs_wake_collector(rreq);
 	}
 
 	return ret;
```
