| author | Weijie Yang <[email protected]> | 2014-10-13 22:51:03 +0000 |
|---|---|---|
| committer | Linus Torvalds <[email protected]> | 2014-10-14 00:18:12 +0000 |
| commit | 68faed630fc151a7a1c4853df00fb3dcacf782b4 (patch) | |
| tree | d7381acab66692202b689195ee13230d83044dd6 /lib/dynamic_debug.c | |
| parent | mm/slab: fix unaligned access on sparc64 (diff) | |
mm/cma: fix cma bitmap aligned mask computing
The current cma bitmap aligned mask computation is incorrect. It can
produce an unexpected alignment from cma_alloc() when the requested align
order is larger than cma->order_per_bit.
Take kvm as an example (PAGE_SHIFT = 12): kvm_cma->order_per_bit is set to
6. When kvm_alloc_rma() tries to allocate kvm_rma_pages, it uses 15 as
the expected align value. With the current implementation, however, we
get 0 as the cma bitmap aligned mask instead of 511.
This patch fixes the cma bitmap aligned mask calculation.
[[email protected]: coding-style fixes]
Signed-off-by: Weijie Yang <[email protected]>
Acked-by: Michal Nazarewicz <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: <[email protected]> [3.17]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
