| author | Pavel Begunkov <[email protected]> | 2022-06-16 09:22:12 +0000 |
|---|---|---|
| committer | Jens Axboe <[email protected]> | 2022-07-25 00:39:14 +0000 |
| commit | 9ca9fb24d5febccea354089c41f96a8ad0d853f8 (patch) | |
| tree | 1a08b01d113fce77e375769430ea06f40c2280d4 /io_uring/msg_ring.c | |
| parent | io_uring: propagate locking state to poll cancel (diff) | |
| download | kernel-9ca9fb24d5febccea354089c41f96a8ad0d853f8.tar.gz kernel-9ca9fb24d5febccea354089c41f96a8ad0d853f8.zip | |
io_uring: mutex locked poll hashing
Currently we do two extra spin lock/unlock pairs to add a poll/apoll
request to the cancellation hash table and remove it from there.
On the submission side we often already hold ->uring_lock and tw
completion is likely to hold it as well. Add a second cancellation hash
table protected by ->uring_lock. Out of concern for latency, since the
completion side would need to take the mutex, use the new table only in
the following cases:
1) IORING_SETUP_SINGLE_ISSUER: only one task grabs uring_lock, so there
is little to no contention and so the main tw handler will almost
always end up grabbing it before calling callbacks.
2) IORING_SETUP_SQPOLL: same as with single issuer, only one task is
a major user of ->uring_lock.
3) apoll: we normally grab the lock on the completion side anyway to
execute the request, so it's free.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/1bbad9c78c454b7b92f100bbf46730a37df7194f.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
