| author | Eric Dumazet <[email protected]> | 2019-10-18 22:20:05 +0000 |
|---|---|---|
| committer | David S. Miller <[email protected]> | 2019-10-19 19:21:53 +0000 |
| commit | 2a06b8982f8f2f40d03a3daf634676386bd84dbc (patch) | |
| tree | 76d330882a9159b334f734201e405121383b1be4 /drivers/net/xen-netback/interface.c | |
| parent | net: dsa: fix switch tree list (diff) | |
| download | kernel-2a06b8982f8f2f40d03a3daf634676386bd84dbc.tar.gz kernel-2a06b8982f8f2f40d03a3daf634676386bd84dbc.zip | |
net: reorder 'struct net' fields to avoid false sharing
The Intel test robot reported a ~7% regression on TCP_CRR tests,
which was bisected to the cited commit.
Indeed, every time a TCP socket is created or deleted, the atomic
counter net->count is touched (via the get_net(net) and put_net(net)
calls).
So CPUs might have to reload a contended cache line on every
net_hash_mix(net) call.
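As an illustration, here is a minimal sketch of the layout problem described above (simplified, assumed field names; not the actual 'struct net' definition): a write-hot atomic refcount sharing a cache line with the read-mostly hash_mix seed.

```c
/* Sketch only: a simplified stand-in for the problematic layout,
 * not the real kernel 'struct net'. */
#include <stdatomic.h>
#include <stdint.h>

struct netns_sketch {
	atomic_int count;    /* dirtied by every get_net()/put_net() */
	uint32_t   hash_mix; /* read-mostly seed, read via net_hash_mix() */
	/* ... many more fields ... */
};

/* With both fields on the same 64-byte cache line, every refcount
 * update forces other CPUs that only read 'hash_mix' to refetch
 * the line: this is the false sharing the patch addresses. */
```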
We need to reorder the 'struct net' fields to move @hash_mix into a
read-mostly cache line.
We move into the first cache line the fields that can be dirtied
often.
We will probably have to address, in a follow-up patch, the
__randomize_layout that was added in linux-4.13, since it might break
our placement choices.
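The general remedy is shown below as a hedged userspace sketch (the field names and the 64-byte cache-line size are assumptions, and this is not the actual 'struct net' layout): keep the frequently-dirtied counters together in the first cache line and start the read-mostly data on its own cache line, then check the separation with offsetof().

```c
/* Sketch of the reordering idea, not the real 'struct net'.
 * Assumes 64-byte cache lines; in kernel code the analogous
 * alignment helper would be ____cacheline_aligned_in_smp. */
#include <stdalign.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define CACHELINE 64

struct netns_reordered {
	/* First cache line: fields that can be dirtied often. */
	atomic_int count;           /* bumped on every socket create/delete */
	atomic_int other_hot_field; /* hypothetical second write-hot counter */

	/* Read-mostly fields start on their own cache line, so refcount
	 * traffic no longer invalidates the line net_hash_mix() reads. */
	alignas(CACHELINE) uint32_t hash_mix;
};

int main(void)
{
	printf("count    is on cache line %zu\n",
	       offsetof(struct netns_reordered, count) / CACHELINE);
	printf("hash_mix is on cache line %zu\n",
	       offsetof(struct netns_reordered, hash_mix) / CACHELINE);
	return 0;
}
```

Per the commit title, the kernel fix relies on reordering the existing fields rather than adding alignment attributes; the alignas() above only makes the standalone sketch explicit about the intended separation.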
Fixes: 355b98553789 ("netns: provide pure entropy for net_hash_mix()")
Signed-off-by: Eric Dumazet <[email protected]>
Reported-by: kernel test robot <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
