Re: [PATCH 00/16] erofs: prepare for folios, duplication and kill PG_error

From: Gao Xiang
Date: Thu Jul 14 2022 - 09:39:16 EST


On Thu, Jul 14, 2022 at 09:20:35PM +0800, Gao Xiang wrote:
> Hi folks,
>
> I've been working on this for almost two months; the main goal is to
> support large folios and rolling hash deduplication for compressed
> data.
>
> This patchset is a start on that work, targeting the next 5.20
> release. It introduces a flexible range representation for
> (de)compressed buffers instead of relying too heavily on the pages
> themselves, so that large folio support can later be built on top of
> it. This patchset also gets rid of all PG_error flags in the
> decompression code, which is a worthwhile cleanup in itself.
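>
> To give an idea of the representation (a rough sketch only -- the
> names below are illustrative, not the actual identifiers used in the
> patches), each buffer is described as a byte range within a page
> instead of as a whole page:
>
> 	/* sketch only; struct page is from <linux/mm_types.h> */
> 	struct z_range_sketch {
> 		struct page *page;	/* backing page (a folio later) */
> 		unsigned int offset;	/* byte offset within the page */
> 		unsigned int len;	/* number of valid bytes */
> 	};
>
> The (de)compression paths can then work on {page, offset, len}
> triples, so switching the backing store to large folios only changes
> how these ranges are generated.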
>
> In addition, this patchset kicks off rolling hash deduplication for
> compressed data by first introducing fully-referenced multi-reference
> pclusters, instead of reporting fs corruption when one pcluster is
> referenced by several different extents. The full implementation is
> expected to be finished in the merge window after the next. One of my
> colleagues is actively working on the userspace part of this feature.
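>
> For background, the userspace side boils down to content-defined
> chunking with a rolling hash. The sketch below is only a toy
> illustration (a buzhash-style chunker); it is not the actual
> erofs-utils code, and all names and constants here are made up:
>
> 	#include <stddef.h>
> 	#include <stdint.h>
>
> 	#define WIN  48u		/* rolling window size in bytes */
> 	#define MASK ((1u << 12) - 1)	/* -> ~4KiB expected chunks */
>
> 	static uint32_t T[256];		/* per-byte random hash values */
>
> 	static void chunker_init(void)
> 	{
> 		uint32_t x = 0x9e3779b9u;	/* any fixed seed works */
>
> 		for (int i = 0; i < 256; i++) {
> 			/* xorshift32 to fill the table deterministically */
> 			x ^= x << 13; x ^= x >> 17; x ^= x << 5;
> 			T[i] = x;
> 		}
> 	}
>
> 	static uint32_t rol32(uint32_t v, unsigned int n)
> 	{
> 		n &= 31;
> 		return (v << n) | (v >> ((32 - n) & 31));
> 	}
>
> 	/*
> 	 * Return the first cut point in buf[0..len), or len if none is
> 	 * found. Identical data gets cut at identical boundaries, so a
> 	 * duplicate that is shifted elsewhere in the file still yields
> 	 * matching chunks.
> 	 */
> 	static size_t next_cut(const uint8_t *buf, size_t len)
> 	{
> 		uint32_t h = 0;
>
> 		for (size_t i = 0; i < len; i++) {
> 			h = rol32(h, 1) ^ T[buf[i]];	/* byte enters */
> 			if (i >= WIN) {
> 				/* drop the byte leaving the window */
> 				h ^= rol32(T[buf[i - WIN]], WIN);
> 				if ((h & MASK) == MASK)
> 					return i + 1;	/* boundary */
> 			}
> 		}
> 		return len;
> 	}
>
> Chunks whose contents then match are candidates for sharing a single
> on-disk pcluster, which is what the fully-referenced multi-reference
> pcluster support above is preparing for.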
>
> However, it's already easy to verify fully-referenced multi-reference
> pclusters by constructing an image by hand (see the attachment):
>
> Dataset: 300M
> seq-read (data-deduplicated, read_ahead_kb 8192): 1095MiB/s
> seq-read (data-deduplicated, read_ahead_kb 4096): 771MiB/s
> seq-read (data-deduplicated, read_ahead_kb 512): 577MiB/s
> seq-read (vanilla, read_ahead_kb 8192): 364MiB/s
>

The test data above is attached for reference.
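
In case anyone wants to reproduce the numbers: the measurement is just
a buffered sequential read of a file on the mounted image under
different read_ahead_kb settings. Below is a minimal timing sketch
(the path is a placeholder, and this is not the exact harness used for
the figures above):

	#include <fcntl.h>
	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>

	/*
	 * Placeholder path; set /sys/class/bdi/<dev>/read_ahead_kb and
	 * drop the page cache before each run.
	 */
	#define TESTFILE "/mnt/erofs/testfile"

	int main(void)
	{
		static char buf[1 << 20];	/* 1 MiB read buffer */
		struct timespec t0, t1;
		size_t total = 0;
		ssize_t n;
		int fd = open(TESTFILE, O_RDONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		clock_gettime(CLOCK_MONOTONIC, &t0);
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			total += n;
		clock_gettime(CLOCK_MONOTONIC, &t1);
		close(fd);

		double secs = (t1.tv_sec - t0.tv_sec) +
			      (t1.tv_nsec - t0.tv_nsec) / 1e9;
		printf("%zu bytes in %.2fs (%.1f MiB/s)\n",
		       total, secs, total / secs / (1 << 20));
		return 0;
	}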

Attachment: pat.erofs.xz
Description: Binary data