| author | Leon Romanovsky <[email protected]> | 2022-11-11 09:35:12 +0000 |
|---|---|---|
| committer | Leon Romanovsky <[email protected]> | 2022-11-11 09:35:12 +0000 |
| commit | 1ec5617432abc3efeec36c4e584a700f6c7e46f9 | |
| tree | fc5830462eb3afb740f9ed48052c1a46ff85f2c1 /net/ipv4/tcp_ipv4.c | |
| parent | RDMA/rxe: Replace pr_xxx by rxe_dbg_xxx in rxe_mmap.c | |
| parent | net: mana: Define data structures for protection domain and memory registration | |
Merge branch 'mana-shared-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Long Li says:
====================
Introduce Microsoft Azure Network Adapter (MANA) RDMA driver [netdev prep]
The first 11 patches modify the MANA Ethernet driver to support the RDMA
driver; a generic sketch of the auxiliary-device hand-off they set up follows the commit message.
* 'mana-shared-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
net: mana: Define data structures for protection domain and memory registration
net: mana: Define data structures for allocating doorbell page from GDMA
net: mana: Define and process GDMA response code GDMA_STATUS_MORE_ENTRIES
net: mana: Define max values for SGL entries
net: mana: Move header files to a common location
net: mana: Record port number in netdev
net: mana: Export Work Queue functions for use by RDMA driver
net: mana: Set the DMA device max segment size
net: mana: Handle vport sharing between devices
net: mana: Record the physical address for doorbell page region
net: mana: Add support for auxiliary device
====================
Link: https://lore.kernel.org/all/[email protected]/
Signed-off-by: Leon Romanovsky <[email protected]>
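
For context, the Ethernet-to-RDMA hand-off named in the list above ("Export Work Queue functions for use by RDMA driver", "Add support for auxiliary device") follows the kernel's auxiliary bus model: the Ethernet driver publishes an auxiliary_device, and the RDMA driver binds to it with an auxiliary_driver. Below is a minimal, generic sketch of that pattern, assuming hypothetical names (mana_adev, example_eth_register_rdma, the "rdma" device name); it is not code taken from the MANA patches.

```c
#include <linux/auxiliary_bus.h>
#include <linux/module.h>
#include <linux/slab.h>

struct mana_adev {
	struct auxiliary_device adev;	/* embedded: container_of() recovers this */
	void *eth_ctx;			/* state the Ethernet side shares with RDMA */
};

static void example_adev_release(struct device *dev)
{
	struct mana_adev *madev = container_of(dev, struct mana_adev, adev.dev);

	kfree(madev);
}

/* Ethernet driver, at probe time: publish one auxiliary device for RDMA. */
static int example_eth_register_rdma(void *eth_ctx, struct device *parent)
{
	struct mana_adev *madev;
	int err;

	madev = kzalloc(sizeof(*madev), GFP_KERNEL);
	if (!madev)
		return -ENOMEM;

	madev->eth_ctx = eth_ctx;
	madev->adev.name = "rdma";	/* device appears on the bus as "<modname>.rdma.0" */
	madev->adev.id = 0;
	madev->adev.dev.parent = parent;
	madev->adev.dev.release = example_adev_release;

	err = auxiliary_device_init(&madev->adev);
	if (err) {
		kfree(madev);		/* init failed before device_initialize() */
		return err;
	}

	err = auxiliary_device_add(&madev->adev);
	if (err) {
		auxiliary_device_uninit(&madev->adev);	/* drops the ref, calls release() */
		return err;
	}

	return 0;
}
```

On the consumer side, the RDMA module would register a struct auxiliary_driver whose id_table matches "&lt;ethernet module name&gt;.rdma"; its probe() callback receives the auxiliary_device and can container_of() back to the shared context.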
Diffstat (limited to 'net/ipv4/tcp_ipv4.c')
| -rw-r--r-- | net/ipv4/tcp_ipv4.c | 4 |
1 file changed, 3 insertions(+), 1 deletion(-)
```diff
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 7a250ef9d1b7..87d440f47a70 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1874,11 +1874,13 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
 	__skb_push(skb, hdrlen);
 
 no_coalesce:
+	limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
+
 	/* Only socket owner can try to collapse/prune rx queues
 	 * to reduce memory overhead, so add a little headroom here.
 	 * Few sockets backlog are possibly concurrently non empty.
 	 */
-	limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf) + 64*1024;
+	limit += 64 * 1024;
 
 	if (unlikely(sk_add_backlog(sk, skb, limit))) {
 		bh_unlock_sock(sk);
```
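The hunk above rewrites the backlog limit so the additions happen on u32 values and only half of sk_sndbuf is added; since sk_rcvbuf and sk_sndbuf are plain ints, the old single-expression form could overflow signed arithmetic when both buffers are configured very large. A small standalone C sketch of the arithmetic (hypothetical values, not kernel code):

```c
#include <inttypes.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical, deliberately extreme socket buffer sizes. */
	int sk_rcvbuf = INT_MAX - 1024;
	int sk_sndbuf = INT_MAX - 1024;

	/*
	 * Old form (signed int arithmetic):
	 *
	 *	int limit = sk_rcvbuf + sk_sndbuf + 64 * 1024;
	 *
	 * With values like the above this overflows int, which is
	 * undefined behaviour in C, so it is left commented out.
	 */

	/* New form: unsigned 32-bit math, and only half of sndbuf. */
	uint32_t limit = (uint32_t)sk_rcvbuf + (uint32_t)(sk_sndbuf >> 1);

	limit += 64 * 1024;	/* headroom added separately, as in the patch */

	printf("limit = %" PRIu32 "\n", limit);	/* ~3.2 billion, fits in u32 */
	return 0;
}
```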
