Re: [PATCH] Speed up the cdrw packet writing driver

From: Peter Osterlund
Date: Sat Aug 28 2004 - 16:02:23 EST


Andrew Morton <akpm@xxxxxxxx> writes:

> Peter Osterlund <petero2@xxxxxxxxx> wrote:
> >
> > Is this a general VM limitation? Has anyone been able to saturate the
> > write bandwidth of two different block devices at the same time, when
> > they operate at vastly different speeds (45MB/s vs 5MB/s), and when
> > the writes are large enough to cause memory pressure?
>
> I haven't explicitly tested the pdflush code in a while, and I never tested
> on devices with such disparate bandwidth. But it _should_ work.
>
> The basic design of the pdflush writeback path is:
>
> for ( ; ; ) {
>         for (each superblock) {
>                 if (no pdflush thread is working this sb's queue &&
>                     the superblock's backingdev is not congested) {
>                         do some writeout, up to congestion, trying
>                         to not block on request queue exhaustion
>                 }
>         }
>         blk_congestion_wait()
> }
>
> So it basically spins around all the queues keeping them full in a
> non-blocking manner.
>
> There _are_ times when pdflush will accidentally block. Say, doing a
> metadata read. In that case other pdflush instances will keep other queues
> busy.
>
> I tested it up to 12 disks. Works OK.
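
As a sanity check on my understanding of that loop, here is a toy user
space model of the flush side (the queue depth, per-tick rates and tick
count are made-up numbers, not anything measured). A single flusher
tops up every uncongested queue and never sleeps on a full one, and
both devices stay saturated at their own speeds despite the 45:5
bandwidth ratio:

    #include <stdio.h>

    #define NQUEUES 2
    #define QDEPTH  128                     /* congestion threshold */
    #define TICKS   1000

    int main(void)
    {
        int rate[NQUEUES] = { 45, 5 };      /* requests completed per tick */
        int depth[NQUEUES] = { 0, 0 };      /* requests currently queued */
        long done[NQUEUES] = { 0, 0 };
        int t, q;

        for (t = 0; t < TICKS; t++) {
            /* flusher pass: fill every uncongested queue, never block */
            for (q = 0; q < NQUEUES; q++)
                while (depth[q] < QDEPTH)
                    depth[q]++;             /* submit one more request */

            /* each device completes requests at its own speed */
            for (q = 0; q < NQUEUES; q++) {
                int n = depth[q] < rate[q] ? depth[q] : rate[q];
                depth[q] -= n;
                done[q] += n;
            }
        }

        for (q = 0; q < NQUEUES; q++)
            printf("queue %d: %.1f requests/tick\n",
                   q, (double)done[q] / TICKS);
        return 0;
    }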

OK, this should make sure that dirty data is flushed as fast as the
disks can handle, but is there anything that makes sure there will be
enough dirty data to flush for each disk?

Assume there are two processes writing one file each to two different
block devices. To be able to dirty more data, a process will have to
allocate a page, and a page becomes available whenever one of the
disks finishes an I/O operation. If both processes have a 50/50 chance
of getting the freed page, they will dirty data equally fast once
steady state has been reached, even if the block devices have very
different write bandwidths.
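
To make that concern concrete, here is the same kind of toy model for
the dirtying side (again with made-up numbers: a fixed pool of pages,
two devices writing back 45 and 5 pages per tick, and each freed page
handed to either writer with probability 1/2):

    #include <stdio.h>
    #include <stdlib.h>

    #define POOL  1000                      /* pages available for dirtying */
    #define TICKS 10000

    int main(void)
    {
        int rate[2] = { 45, 5 };            /* pages written back per tick */
        long dirty[2] = { 0, 0 };           /* dirty backlog per device */
        long done[2] = { 0, 0 };            /* pages written back in total */
        long free_pages = POOL;
        int t, d;

        srand(1);

        for (t = 0; t < TICKS; t++) {
            /* each device writes back what it can from its own backlog */
            for (d = 0; d < 2; d++) {
                long n = dirty[d] < rate[d] ? dirty[d] : rate[d];
                dirty[d] -= n;
                done[d] += n;
                free_pages += n;
            }
            /* each freed page goes to one of the two writers at random */
            while (free_pages > 0) {
                dirty[rand() & 1]++;
                free_pages--;
            }
        }

        printf("fast device: %.1f pages/tick\n", (double)done[0] / TICKS);
        printf("slow device: %.1f pages/tick\n", (double)done[1] / TICKS);
        return 0;
    }

In this model the slow device's backlog ends up eating almost the whole
pool, and the fast device only gets half of the freed pages, so both
devices settle at roughly the slow device's rate. That is the effect
I'm worried about.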

--
Peter Osterlund - petero2@xxxxxxxxx
http://w1.894.telia.com/~u89404340