Re: MD raid and different elevators (disk i/o schedulers) (fwd)

From: Mikael Abrahamsson
Date: Fri Jul 30 2010 - 08:51:21 EST



Hi, this might be more appropriate for lkml (or is there a better place?), since people who know how these layers interact are probably here rather than on the linux-raid list.

If block caching is done at every level and readahead is done at every level, then quite a lot of redundant block data will sit in memory across all these layers. I can understand keeping a block cache for the filesystem and perhaps for the drive layer, but for the md->dm(crypto)->lvm layers in between it might make less sense?

What about the default readahead for these devices? Doing readahead on a dm device might be bad in some situations and good in others?
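For reference, the per-device readahead and scheduler settings being asked about are visible in sysfs. A minimal sketch that lists them for every block device the kernel exposes (sd*, md*, dm-* all appear under /sys/block; the exact device names will of course vary per system):

```shell
#!/bin/sh
# Print the current read-ahead size and I/O scheduler for every
# block device the kernel knows about. Stacked devices such as
# md and dm show up here alongside the physical drives, so this
# makes the layered readahead settings easy to compare.
for q in /sys/block/*/queue; do
    [ -e "$q" ] || continue                  # no block devices at all
    name=${q#/sys/block/}
    name=${name%/queue}
    ra_kb=$(cat "$q/read_ahead_kb" 2>/dev/null || echo "?")
    sched=$(cat "$q/scheduler" 2>/dev/null || echo "none")
    printf '%-10s read_ahead_kb=%-6s scheduler=%s\n' "$name" "$ra_kb" "$sched"
done
```

Writing a value to read_ahead_kb (as root) changes the setting for that one layer only, which is one way to experiment with the questions above.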

---------- Forwarded message ----------
Date: Thu, 29 Jul 2010 12:53:35 +0200 (CEST)
From: Mikael Abrahamsson <swmike@xxxxxxxxx>
To: Fabio Muzzi <liste@xxxxxxxxxx>
Cc: linux-raid@xxxxxxxxxxxxxxx
Subject: Re: MD raid and different elevators (disk i/o schedulers)

On Thu, 29 Jul 2010, Fabio Muzzi wrote:

> Is this true? Are there compatibility issues using different i/o
> schedulers with software raid?

I'd actually like to raise this one level further:

In the case of (drives)->md->dm(crypto)->lvm->fs, how do the schedulers, readahead settings, block sizes, barriers etc. interact through all these layers? Is block caching done at every layer? Is readahead done at every layer?
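One way to start answering this empirically is to query each layer of such a stack for the readahead it advertises. A sketch using blockdev(8); the device paths below are hypothetical examples for the stack described, so substitute your own (reading block devices usually requires root):

```shell
#!/bin/sh
# Walk an example (drives)->md->dm(crypto)->lvm stack and show the
# readahead each layer reports. Paths are illustrative only; layers
# that do not exist on this machine are skipped rather than failing.
for dev in /dev/sda /dev/sdb /dev/md0 /dev/mapper/crypt0 /dev/mapper/vg0-lv0; do
    [ -b "$dev" ] || { echo "$dev: not present, skipping"; continue; }
    # --getra reports readahead in 512-byte sectors
    ra=$(blockdev --getra "$dev" 2>/dev/null || echo "?")
    echo "$dev: readahead=$ra sectors"
done
```

Comparing the values layer by layer shows immediately whether a setting made at one level (e.g. on md0) is inherited by the layers stacked above it or has to be set on each device separately.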

--
Mikael Abrahamsson email: swmike@xxxxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html