author     Kenji Kaneshige <[email protected]>  2008-11-10 04:54:43 +0000
committer  Jesse Barnes <[email protected]>  2008-11-11 21:33:05 +0000
commit     2485b8674bf5762822e14e1554938e36511c0ae4
tree       9594d7366d234f9b23c33da9b087c120562b0070
parent     Merge branch 'timers-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/k...
PCI: ignore bit0 of _OSC return code
Currently acpi_run_osc() checks all the bits in the _OSC result code (the first DWORD in the capabilities buffer) to detect error conditions. But bit 0, which doesn't indicate any error, must be ignored. Bit 0 is used as the query flag at _OSC invocation time. Some platforms clear it during _OSC evaluation, but others don't. On the latter platforms, the current acpi_run_osc() mis-detects an error when _OSC is evaluated with the query flag set, because it doesn't ignore bit 0. Because of this, __acpi_query_osc() always fails on such platforms. This is the cause of the problem that pci_osc_control_set() doesn't work since commit 4e39432f4df544d3dfe4fc90a22d87de64d15815, which changed pci_osc_control_set() to use __acpi_query_osc().

Tested-by: Tomasz Czernecki <[email protected]>
Signed-off-by: Kenji Kaneshige <[email protected]>
Signed-off-by: Jesse Barnes <[email protected]>
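[Editor's note: a minimal standalone sketch of the masking idea described above, not the actual patch. The helper name and the OSC_QUERY_FLAG constant here are illustrative; the kernel defines its own constant for the _OSC query bit.]

    #include <stdint.h>
    #include <stdio.h>

    /* Bit 0 of the first DWORD returned by _OSC is the query flag,
     * not an error indicator, so it must be masked off before the
     * error check. Illustrative constant, not the kernel's define. */
    #define OSC_QUERY_FLAG 0x1u

    /* Hypothetical helper: returns nonzero iff the result DWORD
     * reports a real error once the query flag is ignored. */
    static int osc_result_has_error(uint32_t result_dword)
    {
            return (result_dword & ~OSC_QUERY_FLAG) != 0;
    }

    int main(void)
    {
            /* A platform that leaves the query flag set: 0x1 alone
             * is not an error, so this prints 0. */
            printf("%d\n", osc_result_has_error(0x1));
            /* Any other bit set indicates a genuine _OSC error,
             * so this prints 1. */
            printf("%d\n", osc_result_has_error(0x2));
            return 0;
    }

Checking the unmasked DWORD, as the pre-patch code did, would treat 0x1 as a failure on platforms that leave the query flag set, which is exactly the mis-detection the commit message describes.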