Re: [RFC][PATCH] Per file dirty limit throttling

From: Nikanth Karthikesan
Date: Mon Aug 23 2010 - 08:17:03 EST


On Wednesday 18 August 2010 15:28:56 Peter Zijlstra wrote:
> On Wed, 2010-08-18 at 14:52 +0530, Nikanth Karthikesan wrote:
> > On Tuesday 17 August 2010 13:54:35 Peter Zijlstra wrote:
> > > On Tue, 2010-08-17 at 10:39 +0530, Nikanth Karthikesan wrote:
> > > > Oh, nice. Per-task limit is an elegant solution, which should help
> > > > during most of the common cases.
> > > >
> > > > But I just wonder what happens when
> > > > 1. The dirtier is multiple co-operating processes.
> > > > 2. Some app, like a shell script, repeatedly calls dd with seek and
> > > >    skip. People do this for data deduplication, sparse skipping, etc.
> > > > 3. The app dies and comes back again, like a VM that is rebooted and
> > > >    continues writing to a disk backed by a file on the host.
> > > >
> > > > Do you think, in those cases this might still be useful?
> > >
> > > Those cases do indeed defeat the current per-task limit; however, I
> > > think the solution to that is to limit the amount of writeback done by
> > > each blocked process.
> >
> > Blocked on what? Sorry, I do not understand.
>
> Blocked in balance_dirty_pages(). By limiting the work done there (or
> rather, the number of page writeback completions you wait for -- starting
> IO isn't that expensive), you can also affect the time it takes, and
> therefore influence the impact.
>

But that has nothing specific to do with cases like a multi-threaded dirtier,
which is why I was confused. :)
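
For what it's worth, here is a rough, userspace-only sketch of how I read
that idea: the task entering the throttle path kicks off writeback for
everything over its limit (starting IO is cheap), but only waits on a
bounded number of completions, so the time it spends blocked per call is
controlled. All names here (throttle_dirty, MAX_WAIT_COMPLETIONS, ...) are
made up for illustration; this is not the real balance_dirty_pages().

/*
 * Toy sketch: a task over its per-task dirty limit starts writeback for
 * the excess, but only blocks on a bounded number of completions.
 */
#include <stdio.h>

#define MAX_WAIT_COMPLETIONS 8	/* cap on completions waited for per call */

struct task_state {
	unsigned long nr_dirty;		/* pages this task has dirtied */
	unsigned long dirty_limit;	/* per-task dirty threshold */
};

/* Stand-in for kicking off asynchronous writeback of one page. */
static void start_page_writeback(void) { }

/* Stand-in for waiting until one previously started writeback completes. */
static void wait_one_completion(struct task_state *t)
{
	if (t->nr_dirty)
		t->nr_dirty--;
}

/*
 * Called when the task dirties pages. Queue writeback for everything over
 * the limit, but block on at most MAX_WAIT_COMPLETIONS completions, which
 * bounds how long the caller stalls here.
 */
static void throttle_dirty(struct task_state *t)
{
	unsigned long over = t->nr_dirty > t->dirty_limit ?
			     t->nr_dirty - t->dirty_limit : 0;
	unsigned long i, waited = 0;

	for (i = 0; i < over; i++)
		start_page_writeback();

	while (t->nr_dirty > t->dirty_limit && waited < MAX_WAIT_COMPLETIONS) {
		wait_one_completion(t);
		waited++;
	}
}

int main(void)
{
	struct task_state t = { .nr_dirty = 100, .dirty_limit = 64 };

	throttle_dirty(&t);
	printf("dirty pages after one throttle pass: %lu\n", t.nr_dirty);
	return 0;
}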

Thanks
Nikanth