Re: [PATCH 2/11] FUSE - core

From: Anton Altaparmakov
Date: Fri Jan 14 2005 - 09:16:37 EST


On Fri, 14 Jan 2005, Miklos Szeredi wrote:
> > > +static struct inode *fuse_alloc_inode(struct super_block *sb)
> > > +{
> > > +	struct inode *inode;
> > > +	struct fuse_inode *fi;
> > > +
> > > +	inode = kmem_cache_alloc(fuse_inode_cachep, SLAB_KERNEL);
> >
> > This should probably be SLAB_NOFS: as FUSE is a file system, you don't
> > want allocations to go off submitting i/o to your file system. Much
> > better to be safe and always use the _NOFS versions in your kernel fs
> > code.
>
> Well, I don't think it matters in this case, since inode allocation is
> not part of processing I/O, so no deadlock is possible. See also
> alloc_inode() in fs/inode.c which also uses SLAB_KERNEL.

That may well be true. I prefer to take the more cautious approach and
only use the _NOFS variants within NTFS, whether it is strictly necessary
or not. That way I never need to worry about whether a deadlock is or
isn't possible. The problem simply goes away...

[snip]
> In actual fact the whole GFP_NOFS argument doesn't apply to FUSE
> _at_all_, since dirty pages are never allowed. That is because
> userspace can't easily be taught about GFP_NOFS allocations, and
> otherwise could deadlock on page writeback.

Yes, I remember the thread about the deadlock on page writeback.

I prefer the _NOFS regardless, because it also means that if a machine is
seriously running out of memory, the fs will give up with -ENOMEM much
more readily rather than increasing the memory pressure even further.
Others will probably disagree with me on that...
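The same policy is easy to follow for ordinary buffer allocations too.
Purely as an illustration (the helper and its name are made up, not taken
from the FUSE or NTFS code), fs code that allocates with GFP_NOFS simply
propagates -ENOMEM when memory is tight:

#include <linux/slab.h>
#include <linux/errno.h>

/*
 * Hypothetical example: every allocation made from within fs code uses
 * GFP_NOFS, so under heavy memory pressure it gives up with -ENOMEM
 * sooner instead of triggering filesystem writeback and adding to the
 * pressure.
 */
static int my_fs_alloc_buffer(size_t size, void **buf)
{
	void *p = kmalloc(size, GFP_NOFS);	/* never GFP_KERNEL here */

	if (!p)
		return -ENOMEM;
	*buf = p;
	return 0;
}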

Best regards,

Anton
--
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer / IRC: #ntfs on irc.freenode.net
WWW: http://linux-ntfs.sf.net/ & http://www-stu.christs.cam.ac.uk/~aia21/