| author | Pavel Begunkov <[email protected]> | 2025-03-08 17:19:32 +0000 |
|---|---|---|
| committer | Jens Axboe <[email protected]> | 2025-03-10 13:14:18 +0000 |
| commit | 7a9dcb05f5501b07a2ef7d0ef743f4f17e9f3055 (patch) | |
| tree | da40e494297268af4556eb45d62744bd89988a43 /io_uring/io_uring.h | |
| parent | io_uring: cap cached iovec/bvec size (diff) | |
| download | kernel-7a9dcb05f5501b07a2ef7d0ef743f4f17e9f3055.tar.gz kernel-7a9dcb05f5501b07a2ef7d0ef743f4f17e9f3055.zip | |
io_uring: return -EAGAIN to continue multishot
Multishot errors can be mapped 1:1 to normal errors, but they are not
identical. This leads to a peculiar situation where every multishot request
has to check in which context it's run and return different codes.
Unify them, starting with the EAGAIN / IOU_ISSUE_SKIP_COMPLETE (EIOCBQUEUED)
pair, which means that core io_uring still owns the request and it should
be retried. In the multishot case it naturally just continues to poll;
otherwise it might poll, use iowq or do any other kind of allowed
blocking. Introduce IOU_RETRY, aliased to -EAGAIN, for that.
Apart from the obvious upsides, multishot handlers can now also check for
misuse of IOU_ISSUE_SKIP_COMPLETE.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/da117b79ce72ecc3ab488c744e29fae9ba54e23b.1741453534.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
Diffstat (limited to 'io_uring/io_uring.h')
| -rw-r--r-- | io_uring/io_uring.h | 8 |
1 file changed, 8 insertions, 0 deletions
```diff
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index daf0e3b740ee..3409740f6417 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -23,6 +23,14 @@ enum {
 	IOU_ISSUE_SKIP_COMPLETE	= -EIOCBQUEUED,
 
 	/*
+	 * The request has more work to do and should be retried. io_uring will
+	 * attempt to wait on the file for eligible opcodes, but otherwise
+	 * it'll be handed to iowq for blocking execution. It works for normal
+	 * requests as well as for the multi shot mode.
+	 */
+	IOU_RETRY		= -EAGAIN,
+
+	/*
 	 * Requeue the task_work to restart operations on this request. The
 	 * actual value isn't important, should just be not an otherwise
 	 * valid error code, yet less than -MAX_ERRNO and valid internally.
```
