Re: [PATCH 08/19] ceph: address space operations

From: Sage Weil
Date: Thu Jul 23 2009 - 14:22:14 EST


On Thu, 23 Jul 2009, Andi Kleen wrote:

> Sage Weil <sage@xxxxxxxxxxxx> writes:
>
> > The ceph address space methods are concerned primarily with managing
> > the dirty page accounting in the inode, which (among other things)
> > must keep track of which snapshot context each page was dirtied in,
> > and ensure that dirty data is written out to the OSDs in snapshot
> > order.
> >
> > A writepage() on a page that is not currently writeable due to
> > snapshot writeback ordering constraints is ignored (it was presumably
> > called from kswapd).
>
> Not a detailed review. You would need to get one from someone who
> knows the VFS interfaces very well (unfortunately those people are hard
> to find). I just read through it.
>
> One thing I noticed is that you seem to do a lot of memory allocation
> in the write out paths (some of it even GFP_KERNEL, not GFP_NOFS)

I fixed the bad GFP_KERNEL/unchecked kmalloc. It was just a struct
pagevec, which we started heap-allocating a while ago to reduce stack
usage. Maybe the stack is a better place for it after all? (cifs puts it
on the stack...)

> There were some changes to make this problem less severe (e.g. better
> dirty pages accounting), but I don't think anyone has really declared
> it solved yet. The standard workaround for this is to use mempools
> for anything allocated in the writeout path, then you are at least
> guaranteed to make forward progress.

There are two other memory allocations during writeout: a vector of pages
to be written, and the message we're sending to the OSD. If I use a
mempool for those to guarantee at least some writeout will occur, how do I
safely defer when an allocation does fail? Will pdflush (or its
replacement) eventually come back and try ->writepages() again?
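
(Sketching what I think you mean, with made-up names -- wb_req_pool,
WB_POOL_MIN and friends aren't anything in our tree:)

#include <linux/mempool.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/writeback.h>

#define WB_POOL_MIN	8			/* assumed reserve size */

static mempool_t *wb_req_pool;			/* set up once at mount */

static int wb_pool_init(size_t req_size)
{
	/* kmalloc-backed pool that always keeps WB_POOL_MIN elements. */
	wb_req_pool = mempool_create_kmalloc_pool(WB_POOL_MIN, req_size);
	return wb_req_pool ? 0 : -ENOMEM;
}

static void *wb_req_alloc(void)
{
	/*
	 * With a gfp mask that may sleep (GFP_NOFS), mempool_alloc()
	 * does not return NULL -- it waits for an element to come back
	 * to the pool, which is the forward-progress guarantee.
	 */
	return mempool_alloc(wb_req_pool, GFP_NOFS);
}

/*
 * If an allocation is attempted non-blocking and does fail, the usual
 * way to defer is to leave the page dirty so the next writeback pass
 * picks it up again:
 */
static int wb_defer_page(struct page *page, struct writeback_control *wbc)
{
	redirty_page_for_writepage(wbc, page);
	unlock_page(page);
	return 0;
}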

Thanks-
sage