path: root/fs/btrfs/btrfs_inode.h
author    Qu Wenruo <[email protected]>    2025-01-24 06:59:58 +0000
committer David Sterba <[email protected]>    2025-03-18 19:35:41 +0000
commit    aa60fe12b4f49f49fc73e5023f8675e2df1f7805 (patch)
tree      25f2baf75ea08e27d600e4879a674243e8266c5a /fs/btrfs/btrfs_inode.h
parent    btrfs: avoid assigning twice to block_start at btrfs_do_readpage() (diff)
btrfs: zlib: refactor S390x HW acceleration buffer preparation
Currently for s390x HW zlib compression, to get the best performance we
need a buffer size which is larger than a page. This means we need to
copy multiple pages into workspace->buf, then use that buffer as the
zlib compression input.

Currently this is hardcoded to use page sized folios, and all the
handling is deep inside a loop.

Refactor the code by:

- Introduce a dedicated helper to do the buffer copy
  The new helper will be called copy_data_into_buffer().

- Add extra ASSERT()s
  * Make sure we only go into the function for hardware acceleration
  * Make sure we still get page sized folios

- Prepare for future large folios
  This means we will rely on the folio size, rather than PAGE_SIZE, to
  do the copy.

- Handle the folio mapping and unmapping inside the helper function
  For the S390x hardware acceleration case, it never utilizes the
  @data_in pointer, thus we can do the folio mapping/unmapping all
  inside the function.

Acked-by: Mikhail Zaslonko <[email protected]>
Tested-by: Mikhail Zaslonko <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
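The patch body is not shown in this filtered view, so here is a minimal
sketch of what a copy_data_into_buffer() helper along the lines of the
commit message could look like. The exact signature, the use of
btrfs_compress_filemap_get_folio() for the page-cache lookup, and the
workspace->buf field layout are assumptions based on the message and on
existing btrfs compression code, not the committed diff:

#include <linux/highmem.h>   /* kmap_local_folio(), kunmap_local() */
#include <linux/pagemap.h>   /* folio helpers */
#include <linux/zlib.h>      /* zlib_deflate_dfltcc_enabled() */
#include "compression.h"     /* btrfs_compress_filemap_get_folio() */

/*
 * Sketch: copy [filepos, filepos + length) from the page cache into
 * workspace->buf, so the s390x DFLTCC path gets one large input buffer.
 */
static int copy_data_into_buffer(struct address_space *mapping,
				 struct workspace *workspace, u64 filepos,
				 unsigned long length)
{
	u64 cur = filepos;

	/* Only the hardware accelerated path needs this bounce buffer. */
	ASSERT(zlib_deflate_dfltcc_enabled());

	while (cur < filepos + length) {
		struct folio *folio;
		void *data_in;
		unsigned int offset;
		unsigned long copy_length;
		int ret;

		ret = btrfs_compress_filemap_get_folio(mapping, cur, &folio);
		if (ret < 0)
			return ret;
		/* No large folio support yet, per the commit message. */
		ASSERT(!folio_test_large(folio));

		/* Rely on the folio size, not PAGE_SIZE, for the copy. */
		offset = offset_in_folio(folio, cur);
		copy_length = min_t(unsigned long, folio_size(folio) - offset,
				    filepos + length - cur);

		/* Map/unmap entirely inside the helper; the mapping never
		 * escapes, since DFLTCC does not use @data_in afterwards. */
		data_in = kmap_local_folio(folio, offset);
		memcpy(workspace->buf + cur - filepos, data_in, copy_length);
		kunmap_local(data_in);

		/* Drop the reference taken by the lookup helper (assumed). */
		folio_put(folio);
		cur += copy_length;
	}
	return 0;
}

Keeping the map/copy/unmap sequence inside the helper is what lets the
caller's loop drop its hardcoded page handling: the caller only passes a
file position and length, and large folios later only change the
per-iteration copy_length, not the caller.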
Diffstat (limited to 'fs/btrfs/btrfs_inode.h')
0 files changed, 0 insertions, 0 deletions