Re: [dm-devel] [PATCH] staging: writeboost: Add dm-writeboost

From: Joe Thornber
Date: Tue Dec 09 2014 - 10:14:36 EST


On Mon, Dec 08, 2014 at 06:04:41AM +0900, Akira Hayakawa wrote:
> Mike and Alasdair,
> I need your ack

Hi Akira,

I just spent some time playing with your latest code. On the positive
side, I am seeing some good performance with the fio tests, which is
great; we know your design should outperform dm-cache with small
random IO.
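
(To be concrete about "small random IO": I mean the sort of job you
would express in fio roughly as below. The parameters are
illustrative rather than the exact jobs from my runs, and
/dev/mapper/wbdev is a placeholder device name.)

    fio --name=randwrite --filename=/dev/mapper/wbdev \
        --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
        --iodepth=32 --runtime=60 --time_based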

However, I'm still getting very poor results with the git-extract
test, which clones a Linux kernel repo and then checks out 5
revisions, with drop_caches in between each checkout.
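
In outline, the test does something like this (the repo path and the
revisions below are placeholders, not the ones the test actually
uses):

    git clone /src/linux linux-clone      # timed: the first number
    cd linux-clone
    for rev in v3.10 v3.11 v3.12 v3.13 v3.14
    do
        sync
        echo 3 > /proc/sys/vm/drop_caches # needs root
        git checkout $rev                 # the 5 checkouts together
    done                                  # make the second number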

I'll summarise the results I get here:


                    clone  extract
raw SSD:               69      107
raw spindle:           73      184
dm-cache:              74      118
writeboost type 0:    115      247
writeboost type 1:    193      275

Each result consists of two numbers: the time to do the clone and
the time to do the extract.

Writeboost is significantly slower than the spindle alone for this
very simple test, and I do not understand what is causing the issue.
At first I thought it was because the working set is larger than the
SSD space, but I get the same results even when the SSD is larger
than the spindle.

Running the same test with SSD on SSD also yields very poor results:
115, 177 for type 0 and 198, 218 for type 1. Obviously this is a
pointless configuration, but it does allow us to see the overhead of
the caching layer.
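
(For anyone wanting to reproduce the overhead measurement, the
stacking just points both devices at SSDs. The sketch below is from
memory of the staging documentation, so treat the table syntax as an
assumption and check the docs before using it; the device paths and
the "wbdev" name are placeholders.)

    BACKING=/dev/fast1    # SSD standing in for the spindle
    CACHE=/dev/fast2      # SSD acting as the cache device
    SZ=$(blockdev --getsz "$BACKING")
    # type 0 shown; type 1 differs in its table arguments
    dmsetup create wbdev --table "0 $SZ writeboost 0 $BACKING $CACHE"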

It's fine for the benefits of the caching software to vary with the
load. But I think the worst case should always be close to the
performance of the raw spindle device.

If you get the following work items done, I will ack this for going
upstream:

i) Get this test to the point where its performance is similar to
the raw spindle.

ii) Write good documentation in Documentation/device-mapper/, e.g.:
how do I remove a cache? When should I use dm-writeboost rather than
bcache or dm-cache?

iii) Provide an equivalent to the fsck tool to repair a damaged cache.

- Joe