| author | Jakub Kicinski <[email protected]> | 2024-12-18 03:37:02 +0000 |
|---|---|---|
| committer | Jakub Kicinski <[email protected]> | 2024-12-18 03:37:57 +0000 |
| commit | 3a4130550998f23762184b0de4cc9163a3f2c49d (patch) | |
| tree | b6d9cd4e2333cf6afc87beb0bc7675bc73777006 /drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | |
| parent | Merge branch 'net-constify-struct-bin_attribute' (diff) | |
| parent | inetpeer: do not get a refcount in inet_getpeer() (diff) | |
Merge branch 'inetpeer-reduce-false-sharing-and-atomic-operations'
Eric Dumazet says:
====================
inetpeer: reduce false sharing and atomic operations
After commit 8c2bd38b95f7 ("icmp: change the order of rate limits"),
there is a risk that a host receiving packets from a single
source targeting closed ports ends up using a common inet_peer
structure from many CPUs.
All of these CPUs have to acquire/release a refcount and update
the inet_peer timestamp (p->dtime).
Switch to pure RCU to avoid changing the refcount, and update
p->dtime only once per jiffy.
Tested:
DUT : 128 cores, 32 hw rx queues.
receiving 8,400,000 UDP packets per second, targeting closed ports.
Before this series:
- napi poll cannot keep up; the NIC drops 1,200,000 packets
per second.
- We use 20% of CPU cycles.
After this series:
- All packets are received (no more hw drops).
- We use 12% of CPU cycles.
v1: https://lore.kernel.org/[email protected]
====================
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
