path: root/lib
author    Avi Kivity <[email protected]>    2010-02-13 08:33:12 +0000
committer H. Peter Anvin <[email protected]>    2010-02-13 21:37:56 +0000
commit    0d1622d7f526311d87d7da2ee7dd14b73e45d3fc (patch)
tree      eb97e7b70d96faabbbd32cfea8fa34ac5e12eef5 /lib
parent    x86-64, rwsem: 64-bit xadd rwsem implementation (diff)
download  kernel-0d1622d7f526311d87d7da2ee7dd14b73e45d3fc.tar.gz
          kernel-0d1622d7f526311d87d7da2ee7dd14b73e45d3fc.zip
x86-64, rwsem: Avoid store forwarding hazard in __downgrade_write
The Intel Architecture Optimization Reference Manual states that a short
load that follows a long store to the same object will suffer a store
forwarding penalty, particularly if the two accesses use different
addresses. Trivially, a long load that follows a short store will also
suffer a penalty.

__downgrade_write() in rwsem incurs both penalties: the increment
operation will not be able to reuse a recently-loaded rwsem value, and
its result will not be reused by any rwsem operation that follows
shortly afterwards.

A comment in the code states that this is because 64-bit immediates are
special and expensive; but while they are slightly special (only a
single instruction allows them), they aren't expensive: a test shows
that two loops, one loading a 32-bit immediate and one loading a 64-bit
immediate, both take 1.5 cycles per iteration.

Fix this by changing __downgrade_write() to use the same add instruction
on i386 and on x86_64, so that it uses the same operand size as all the
other rwsem functions.

Signed-off-by: Avi Kivity <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
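For reference, the immediate-cost measurement quoted above is easy to
reproduce from userspace. The following is a hypothetical microbenchmark,
not part of the patch; the rdtsc-based timing and all names in it are
illustrative assumptions. It times a loop that loads a 32-bit immediate
against one that loads a 64-bit immediate:

    /*
     * Hypothetical sketch (x86-64 only): compare the per-iteration cost
     * of loading a 32-bit vs a 64-bit immediate. Compile with gcc -O2.
     * Timing via raw rdtsc is approximate but sufficient here.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define ITERS 100000000ULL

    static inline uint64_t rdtsc(void)
    {
            uint32_t lo, hi;
            asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
            return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
            uint64_t t0, t1, t2, i;

            t0 = rdtsc();
            for (i = 0; i < ITERS; i++)
                    /* mov with a 32-bit immediate operand */
                    asm volatile("movl $0x12345678, %%eax" ::: "eax");
            t1 = rdtsc();
            for (i = 0; i < ITERS; i++)
                    /* movabs: the one instruction taking a 64-bit immediate */
                    asm volatile("movabsq $0x123456789abcdef0, %%rax" ::: "rax");
            t2 = rdtsc();

            printf("32-bit immediate: %.2f cycles/iter\n",
                   (double)(t1 - t0) / ITERS);
            printf("64-bit immediate: %.2f cycles/iter\n",
                   (double)(t2 - t1) / ITERS);
            return 0;
    }

Per the commit message, both loops should come out around 1.5 cycles per
iteration, i.e. the 64-bit immediate costs nothing extra.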
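The shape of the fix described above is sketched below. This is an
assumption about the resulting code, not the actual hunk: it supposes a
size-selecting macro in the style of the kernel's _ASM_ADD (expanding to
"addl" on i386 and "addq" on x86-64) inside the kernel's rwsem inline-asm
context (struct rw_semaphore, LOCK_PREFIX, RWSEM_WAITING_BIAS and
call_rwsem_downgrade_wake are taken as given):

    /*
     * Sketch only: by emitting _ASM_ADD instead of a hard-coded "addl",
     * the atomic add in __downgrade_write() uses the full operand size
     * of sem->count on both i386 and x86-64, matching the other rwsem
     * operations and avoiding the mixed-size store-forwarding hazard.
     */
    static inline void __downgrade_write(struct rw_semaphore *sem)
    {
            asm volatile("# beginning __downgrade_write\n\t"
                         LOCK_PREFIX _ASM_ADD "%2,(%1)\n\t"
                         /* count went negative only if waiters remain */
                         "  jns       1f\n\t"
                         "  call call_rwsem_downgrade_wake\n"
                         "1:\n\t"
                         "# ending __downgrade_write\n"
                         : "+m" (sem->count)
                         : "a" (sem), "er" (-RWSEM_WAITING_BIAS)
                         : "memory", "cc");
    }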
Diffstat (limited to 'lib')
0 files changed, 0 insertions, 0 deletions