Re: page fault scalability (ext3, ext4, xfs)

From: Dave Chinner
Date: Wed Aug 14 2013 - 22:10:41 EST


On Wed, Aug 14, 2013 at 09:11:01PM -0400, Theodore Ts'o wrote:
> On Wed, Aug 14, 2013 at 04:38:12PM -0700, Andy Lutomirski wrote:
> > > It would be better to write zeros to it, so we aren't measuring the
> > > cost of the unwritten->written conversion.
> >
> > At the risk of beating a dead horse, how hard would it be to defer
> > this part until writeback?
>
> Part of the work has to be done at write time because we need to
> update allocation statistics (i.e., so that we don't have ENOSPC
> problems). The unwritten->written conversion does happen at writeback
> (as does the actual block allocation if we are doing delayed
> allocation).
>
> The point is that if the goal is to measure page fault scalability, we
> shouldn't have this other stuff happening at the same time as the page
> fault workload.

Sure, but the real problem is not the block mapping or allocation
path - even if the test is changed to take that out of the picture,
we still have a timestamp update being done on every single page
fault. ext4, XFS and btrfs all do transactional timestamp updates
at nanosecond granularity, so every page fault results in a
transaction to update the timestamp of the file being modified.
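
Just to make that concrete, a trivial check like the sketch below -
dirty a single page of a shared mapping and stat the file before and
after - should show the timestamps moving at fault time. The file
name is made up, and exactly where the update is driven from depends
on the filesystem's ->page_mkwrite implementation:

/*
 * Minimal sketch, not the benchmark itself: dirty one page of a
 * shared mapping and look at the file timestamps before and after.
 * File name is arbitrary; behaviour depends on how the filesystem
 * implements ->page_mkwrite and timestamp updates.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	char zeros[4096] = { 0 };
	struct stat before, after;
	char *p;
	int fd;

	fd = open("mkwrite-test", O_CREAT | O_RDWR | O_TRUNC, 0644);
	if (fd < 0)
		return 1;

	/* Real zeroed blocks, so no unwritten->written conversion. */
	if (write(fd, zeros, sizeof(zeros)) != sizeof(zeros))
		return 1;
	fsync(fd);

	p = mmap(NULL, sizeof(zeros), PROT_READ | PROT_WRITE, MAP_SHARED,
		 fd, 0);
	if (p == MAP_FAILED)
		return 1;

	fstat(fd, &before);
	p[0] = 1;	/* first store -> write fault -> ->page_mkwrite */
	fstat(fd, &after);

	printf("mtime %ld.%09ld -> %ld.%09ld\n",
	       (long)before.st_mtim.tv_sec, (long)before.st_mtim.tv_nsec,
	       (long)after.st_mtim.tv_sec, (long)after.st_mtim.tv_nsec);
	return 0;
}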

That's why on XFS the log is showing up in the profiles.

So, even if we narrow the test down to just overwriting existing
blocks, we've still got a filesystem transaction per page fault.
IOWs, it's still just a filesystem overhead test....
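
For reference, a narrowed-down version of the workload might look
something like the sketch below: pre-fill the file with real zeroed
blocks, then take one dirtying fault per page of a shared mapping.
The file name, file size and buffer size are arbitrary - this is an
illustration of the fault-per-page pattern, not the actual benchmark:

/*
 * Sketch of the narrowed-down workload: overwrite already-written
 * blocks through a shared mapping, one dirtying store per page, so
 * no block allocation or unwritten extent conversion happens at
 * fault time.  Name and sizes are arbitrary choices.
 */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define FILE_SIZE	(128UL << 20)	/* 128MB, arbitrary */

int main(void)
{
	static char buf[1 << 20];	/* static, so zero-filled */
	long pagesize = sysconf(_SC_PAGESIZE);
	size_t off;
	char *map;
	int fd;

	fd = open("overwrite-test", O_CREAT | O_RDWR | O_TRUNC, 0644);
	if (fd < 0)
		return 1;

	/* Pre-write real zeros so the blocks are allocated and written. */
	for (off = 0; off < FILE_SIZE; off += sizeof(buf))
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
			return 1;
	fsync(fd);

	map = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
		   fd, 0);
	if (map == MAP_FAILED)
		return 1;

	/*
	 * Each first store to a page takes a write fault through the
	 * filesystem's fault path.
	 */
	for (off = 0; off < FILE_SIZE; off += pagesize)
		map[off]++;

	munmap(map, FILE_SIZE);
	close(fd);
	return 0;
}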

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx