| author | David S. Miller <[email protected]> | 2014-09-28 20:35:49 +0000 |
|---|---|---|
| committer | David S. Miller <[email protected]> | 2014-09-28 20:35:49 +0000 |
| commit | dc83d4d8f6c897022c974a00769b7a6efee6aed8 (patch) | |
| tree | 0d769e6155899c89ae95c3e31c79ce011eb96a39 /net/sched/cls_basic.c | |
| parent | net : optimize skb_release_data() (diff) | |
| parent | tcp: better TCP_SKB_CB layout to reduce cache line misses (diff) | |
| download | kernel-dc83d4d8f6c897022c974a00769b7a6efee6aed8.tar.gz kernel-dc83d4d8f6c897022c974a00769b7a6efee6aed8.zip | |
Merge branch 'tcp_skb_cb'
Eric Dumazet says:
====================
tcp: better TCP_SKB_CB layout
TCP had the assumption that IPCB and IP6CB are the first members of skb->cb[].
This is fine, except that IPCB/IP6CB are used by TCP only for a very short time
in the input path.
What really matters for the TCP stack is to get skb->next,
TCP_SKB_CB(skb)->seq, and TCP_SKB_CB(skb)->end_seq in the same cache line.
skbs that are immediately consumed do not care, because the whole skb->cb[] is
hot in the CPU cache, while skbs that sit in a socket write queue or receive
queue do not need TCP_SKB_CB(skb)->header at all.
This patch set implements the prerequisites for IPv4, IPv6, and TCP to make this
possible. This makes TCP more efficient (a layout sketch follows the commit
message below).
====================
Signed-off-by: David S. Miller <[email protected]>
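To illustrate the point of the series, here is a minimal standalone sketch of the layout question the cover letter describes. It uses hypothetical stand-in types (`fake_ip_parm`, `fake_ip6_parm`, `cb_header_first`, `cb_seq_first`), not the kernel's real `inet_skb_parm`/`tcp_skb_cb` definitions, and simply shows how far `seq`/`end_seq` get pushed into the 48-byte `skb->cb[]` when an IPCB/IP6CB-style header union sits first versus last.

```c
/*
 * Hypothetical, simplified illustration -- not the kernel's real structs.
 * It compares where seq/end_seq land inside a cb[]-like control block when
 * the IPCB/IP6CB-style header union is placed first versus last.  With the
 * union first, seq/end_seq sit 20+ bytes into the control block; with it
 * last, they occupy the very first bytes, which for a queued skb sit close
 * to skb->next and so can share its cache line.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct fake_ip_parm  { uint32_t opts[5]; };   /* stand-in for inet_skb_parm  */
struct fake_ip6_parm { uint32_t opts[6]; };   /* stand-in for inet6_skb_parm */

/* Old-style layout: header union first, as the IPCB/IP6CB assumption implies. */
struct cb_header_first {
	union {
		struct fake_ip_parm  h4;
		struct fake_ip6_parm h6;
	} header;
	uint32_t seq;
	uint32_t end_seq;
};

/* New-style layout: seq/end_seq first, header union moved to the tail. */
struct cb_seq_first {
	uint32_t seq;
	uint32_t end_seq;
	union {
		struct fake_ip_parm  h4;
		struct fake_ip6_parm h6;
	} header;
};

int main(void)
{
	printf("header-first: seq at offset %zu, end_seq at offset %zu\n",
	       offsetof(struct cb_header_first, seq),
	       offsetof(struct cb_header_first, end_seq));
	printf("seq-first:    seq at offset %zu, end_seq at offset %zu\n",
	       offsetof(struct cb_seq_first, seq),
	       offsetof(struct cb_seq_first, end_seq));
	return 0;
}
```

With these stand-in sizes, the header-first layout places seq at offset 24, while the seq-first layout places it at offset 0, which is the property the cover letter says matters for skbs sitting in socket write or receive queues.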
