author    David Stevens <[email protected]>	2021-09-29 02:33:00 +0000
committer Joerg Roedel <[email protected]>	2021-09-29 10:50:42 +0000
commit    2cbc61a1b1665c84282dbf2b1747ffa0b6248639
tree      ed387add6bbcd2a61edef807e0e7e2daeb63c476
parent    swiotlb: Support aligned swiotlb buffers
iommu/dma: Account for min_align_mask w/swiotlb
Pass the non-aligned size to __iommu_dma_map when using swiotlb bounce buffers in iommu_dma_map_page, to account for min_align_mask.

To deal with granule alignment, __iommu_dma_map maps iova_align(size + iova_off) bytes starting at phys - iova_off. If iommu_dma_map_page passes an already aligned size when using swiotlb, then this becomes iova_align(iova_align(orig_size) + iova_off). Normally iova_off will be zero when using swiotlb. However, this is not the case for devices that set min_align_mask. When iova_off is non-zero, __iommu_dma_map ends up mapping an extra page at the end of the buffer. Beyond just being a security issue, the extra page is not cleaned up by __iommu_dma_unmap. This causes problems when the IOVA is reused, due to collisions in the iommu driver. Just passing the original size is sufficient, since __iommu_dma_map will take care of granule alignment.

Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
Signed-off-by: David Stevens <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Joerg Roedel <[email protected]>
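The size arithmetic described above can be illustrated with a small standalone sketch. This is not the kernel code itself: the iova_align() helper and the GRANULE constant below are simplified stand-ins for the kernel's iova_align()/iovad granule, and the example values (orig_size, iova_off) are made up to show the case where double alignment maps one extra page.

```c
#include <stdio.h>

#define GRANULE 4096UL	/* assumed IOVA granule (one page) */

/* Round up to the IOVA granule; a stand-in for the kernel's iova_align(). */
static unsigned long iova_align(unsigned long size)
{
	return (size + GRANULE - 1) & ~(GRANULE - 1);
}

int main(void)
{
	/*
	 * Hypothetical example: a device with min_align_mask keeps the
	 * buffer's low bits, so the bounced physical address sits at a
	 * non-zero offset into the IOVA granule.
	 */
	unsigned long orig_size = 3000;	/* size the caller asked to map */
	unsigned long iova_off  = 512;	/* phys offset within the granule */

	/* __iommu_dma_map maps iova_align(size + iova_off) bytes itself. */
	unsigned long correct = iova_align(orig_size + iova_off);

	/* Buggy path: the size was already granule-aligned by the caller. */
	unsigned long buggy = iova_align(iova_align(orig_size) + iova_off);

	printf("correct: %lu bytes (%lu pages)\n", correct, correct / GRANULE);
	printf("buggy:   %lu bytes (%lu pages)\n", buggy, buggy / GRANULE);
	/* The buggy path maps one extra page that __iommu_dma_unmap never
	 * tears down, so the stale mapping collides when the IOVA is reused. */
	return 0;
}
```

With these numbers the correct path maps one page (iova_align(3512) = 4096) while the pre-aligned path maps two (iova_align(4096 + 512) = 8192), matching the extra page the commit message describes.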