[RFC 0/5] squashfs enhance

From: Minchan Kim
Date: Mon Sep 16 2013 - 03:09:24 EST


Our product has used squashfs for its rootfs, and it saves a few bucks
per device. Super thanks, Squashfs! You have been perfect for us.
But unfortunately, our devices are becoming more complex, so sometimes
we need better throughput for sequential I/O, and current squashfs
couldn't meet our use case.

When I dove into the code, I found some problems.

1) Too many memory copies
2) Only a single decompression stream buffer, so concurrent reads are
   serialized on it
3) No readpages support

This patchset tries to solve the above problems.

The first two patches are just cleanup, so they shouldn't change any
behavior, and the functions they factor out will be used by later patches.
If they do change some behavior, it's not what I intended. :(

The 3rd patch removes cache usage for (non-fragment, non-tail-end)
data pages so that we can reduce memory copies.

The 4th patch supports multiple decompression stream buffers so
concurrent reads can be handled at the same time. In my experiments,
it roughly halved the elapsed time.

The 5th patch implements asynchronous readahead (readpages); I found it
improves throughput by about 35%, thanks to lots of I/O merging.

Any comments are welcome.
Thanks.

Minchan Kim (5):
squashfs: clean up squashfs_read_data
squashfs: clean up squashfs_readpage
squashfs: remove cache for normal data page
squashfs: support multiple decompress stream buffer
squashfs: support readpages

fs/squashfs/block.c | 245 +++++++++-----
fs/squashfs/cache.c | 16 +-
fs/squashfs/decompressor.c | 107 +++++-
fs/squashfs/decompressor.h | 27 +-
fs/squashfs/file.c | 738 ++++++++++++++++++++++++++++++++++++++----
fs/squashfs/lzo_wrapper.c | 12 +-
fs/squashfs/squashfs.h | 12 +-
fs/squashfs/squashfs_fs_sb.h | 11 +-
fs/squashfs/super.c | 44 ++-
fs/squashfs/xz_wrapper.c | 20 +-
fs/squashfs/zlib_wrapper.c | 12 +-
11 files changed, 1024 insertions(+), 220 deletions(-)

--
1.7.9.5
