author    Eric Dumazet <[email protected]>    2024-04-23 12:56:20 +0000
committer Jakub Kicinski <[email protected]>    2024-04-25 19:15:02 +0000
commit    ec00ed472bdb7d0af840da68c8c11bff9f4d9caa (patch)
tree      7b9bb64d8735b8f6f94c6b211959765da79f2f46
parent    Merge tag 'wireless-next-2024-04-24' of git://git.kernel.org/pub/scm/linux/ke... (diff)
tcp: avoid premature drops in tcp_add_backlog()
While testing TCP performance with latest trees, I saw suspect
SOCKET_BACKLOG drops.

tcp_add_backlog() computes its limit with:

    limit = (u32)READ_ONCE(sk->sk_rcvbuf) +
            (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
    limit += 64 * 1024;

This does not take into account that sk->sk_backlog.len is reset only
at the very end of __release_sock(). Both sk->sk_backlog.len and
sk->sk_rmem_alloc could reach sk_rcvbuf in normal conditions.

We should double sk->sk_rcvbuf contribution in the formula to absorb
bubbles in the backlog, which happen more often for very fast flows.

This change maintains decent protection against abuses.

Fixes: c377411f2494 ("net: sk_add_backlog() take rmem_alloc into account")
Signed-off-by: Eric Dumazet <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>