| author | Qu Wenruo <[email protected]> | 2023-02-17 05:37:03 +0000 |
|---|---|---|
| committer | David Sterba <[email protected]> | 2023-04-17 16:01:14 +0000 |
| commit | 18d758a2d81a97b9a54a37d535870ce3170cc208 (patch) | |
| tree | 19b31f1bd0b4917f8648a5cbfd96b4780056a89a /fs/btrfs/compression.c | |
| parent | btrfs: use an efficient way to represent source of duplicated stripes (diff) | |
| download | kernel-18d758a2d81a97b9a54a37d535870ce3170cc208.tar.gz kernel-18d758a2d81a97b9a54a37d535870ce3170cc208.zip | |
btrfs: replace btrfs_io_context::raid_map with a fixed u64 value
In the btrfs_io_context structure, we have a pointer, raid_map, which
records the logical bytenr for each stripe.
But since we always call sort_parity_stripes(), the resulting
raid_map[] is always sorted, thus raid_map[0] is always the logical
bytenr of the full stripe.
So why waste the space, and the time spent sorting, on raid_map?
This patch replaces btrfs_io_context::raid_map with a single u64
number, full_stripe_start, by:
- Replacing btrfs_io_context::raid_map with full_stripe_start
- Converting call sites that used raid_map[0] to use full_stripe_start
- Converting call sites that used raid_map[i] to instead compare the
  stripe index against nr_data_stripes
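The arithmetic behind the conversion can be sketched as follows. This
is an illustrative, hypothetical helper, not the kernel code: since the
sorted raid_map[] was an arithmetic progression starting at the full
stripe's logical bytenr, entry i is recoverable from full_stripe_start
alone, and parity stripes are identified purely by index.

```c
#include <stdint.h>

/*
 * Hypothetical sketch: with a sorted raid_map[], the logical bytenr of
 * data stripe i is just full_stripe_start + i * stripe_len, so only the
 * first value (full_stripe_start) needs to be stored.
 */
static uint64_t stripe_logical(uint64_t full_stripe_start,
			       uint32_t stripe_len, int i)
{
	return full_stripe_start + (uint64_t)i * stripe_len;
}

/*
 * P/Q parity stripes always sort after the data stripes, so the old
 * raid_map[i] marker checks become a plain index comparison.
 */
static int is_parity_stripe(int i, int nr_data_stripes)
{
	return i >= nr_data_stripes;
}
```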
The benefits are:
- Less memory wasted on raid_map
It's sizeof(u64) * num_stripes vs sizeof(u64).
It'll always save at least one u64, and the benefit grows larger with
num_stripes.
- No more weird alloc_btrfs_io_context() behavior
  There is now only one fixed-size part plus one variable-length array.
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Diffstat (limited to 'fs/btrfs/compression.c')
0 files changed, 0 insertions, 0 deletions
