| author | Eric Dumazet <[email protected]> | 2008-11-24 07:24:32 +0000 |
|---|---|---|
| committer | David S. Miller <[email protected]> | 2008-11-24 07:24:32 +0000 |
| commit | 1f87e235e6fb92c2968b52b9191de04f1aff8e77 (patch) | |
| tree | ab774d239c61b6c206ef07398828533cdd01915e /net/unix/af_unix.c | |
| parent | axnet_cs: Fix build after net device ops ne2k conversion. (diff) | |
eth: Declare an optimized compare_ether_addr_64bits() function
Linus mentioned we could try to perform long word operations, even
on potentially unaligned addresses, on x86 at least. David mentioned
the HAVE_EFFICIENT_UNALIGNED_ACCESS test to handle this on all
arches that have efficient unaligned accesses.
I tried this idea and got nice assembly on 32 bits:
158: 33 82 38 01 00 00 xor 0x138(%edx),%eax
15e: 33 8a 34 01 00 00 xor 0x134(%edx),%ecx
164: c1 e0 10 shl $0x10,%eax
167: 09 c1 or %eax,%ecx
169: 74 0b je 176 <eth_type_trans+0x87>
And very nice assembly on 64 bits of course (one xor, one shl).
Nice oprofile improvement in eth_type_trans(): 0.17 % instead of 0.41 %,
as expected since we remove 8 instructions on a fast path.
This patch implements a compare_ether_addr_64bits() function that
uses the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS ifdef to efficiently
perform the 6-byte comparison on all capable arches.
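For reference, here is a minimal userspace sketch of the technique, not the
patch's exact code: it assumes a little-endian build, uses memcpy() in place
of the kernel's unaligned loads, and introduces an illustrative
zap_last_2bytes() helper to ignore the 2 padding bytes that follow each
6-byte address (callers must guarantee that padding is readable).

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative helper: discard the 2 padding bytes that follow the 6-byte
 * address.  On a little-endian build they land in the high bits of the
 * loaded word, so shift them out; a big-endian build would shift right. */
static unsigned long zap_last_2bytes(unsigned long value)
{
	return value << 16;
}

/* Compare two 6-byte addresses stored in 8-byte-padded buffers using one
 * long-word xor on 64 bits (or two on 32 bits) instead of a byte-by-byte
 * loop.  Returns 0 when the addresses match, non-zero otherwise. */
static unsigned compare_ether_addr_64bits(const uint8_t addr1[6 + 2],
					  const uint8_t addr2[6 + 2])
{
	unsigned long a, b, fold;

	memcpy(&a, addr1, sizeof(a));	/* stands in for the unaligned load */
	memcpy(&b, addr2, sizeof(b));
	fold = a ^ b;

	if (sizeof(fold) == 8)		/* 64-bit: bytes 0..7 already folded */
		return zap_last_2bytes(fold) != 0;

	/* 32-bit: bytes 0..3 folded above, fold bytes 4..5 separately */
	memcpy(&a, addr1 + 4, sizeof(a));
	memcpy(&b, addr2 + 4, sizeof(b));
	fold |= zap_last_2bytes(a ^ b);
	return fold != 0;
}

int main(void)
{
	/* Same MAC in the first 6 bytes; the 2 padding bytes differ and
	 * must be ignored by the comparison. */
	uint8_t mac1[8] = { 0x00, 0x1b, 0x21, 0xaa, 0xbb, 0xcc, 0xff, 0xff };
	uint8_t mac2[8] = { 0x00, 0x1b, 0x21, 0xaa, 0xbb, 0xcc, 0x00, 0x00 };

	printf("%s\n", compare_ether_addr_64bits(mac1, mac2) ? "differ" : "equal");
	return 0;
}
```

Built as a normal userspace program this prints "equal", since only the
padding bytes differ. On arches without CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
the real function would take a plain byte-by-byte path instead, presumably via
the existing compare_ether_addr().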
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
