Re: Re: [PATCH] Squashfs: add asynchronous read support
From: Chanho Min
Date: Tue Dec 17 2013 - 23:29:45 EST
> I did test it on x86 with a USB stick and on ARM with eMMC on my Nexus 4.
> In my experiments, I couldn't see much gain like you did on either system,
> and it even regressed in the bs=32k test, maybe due to the workqueue
> allocation/scheduling of work per I/O.
> Is your test rather special, or what am I missing?
Can you share your test results on ARM with eMMC?
> Before that, I'd like to know fundamental reason why your implementation
> for asynchronous read enhance. At a first glance, I thought it's caused by
> readahead from MM layer but when I read code, I found I was wrong.
> MM's readahead logic works based on PageReadahead marker but squashfs
> invalidates by grab_cache_page_nowait so it wouldn't work as we expected.
>
> Another possibility is block I/O merging in the block layer via the plugging
> logic, which is what I tried a few months ago, although the implementation
> was really bad. But it wouldn't work with your patch because
> do_generic_file_read will unplug the block layer via lock_page without
> merging enough I/O.
>
> So, what do you think is the real driver of the improvement in your
> experiment? Then I could investigate why I can't get a benefit.
Currently, squashfs adds requests to the block device queue synchronously and
waits for completion. mmc takes these requests one by one and pushes them to
the host driver, which lets mmc go idle frequently. This patch adds block
requests asynchronously, without waiting for completion, so mmcqd can fetch
many requests from the block layer at a time. As a result, mmcqd stays busy
and uses more of the mmc bandwidth.
For the test, I added two counters in mmc_queue_thread as below
and ran the same dd transfer.
static int mmc_queue_thread(void *d)
{
	...
	do {
		...
		if (req || mq->mqrq_prev->req) {
			/* a request was fetched from the block layer */
			fetch++;
		} else {
			/* queue was empty: the thread goes idle */
			idle++;
		}
		...
	} while (1);
	...
}
Without the patch:
fetch: 920, idle: 460
With the patch:
fetch: 918, idle: 40
Thanks
Chanho.