Re: [PATCH 06/16] vfs: Rename generic_file_aio_write_nolock

From: Christoph Hellwig
Date: Thu Sep 03 2009 - 11:37:34 EST

On Thu, Sep 03, 2009 at 12:24:36PM +0200, Jan Kara wrote:
> > Move it to fs/block_dev.c, rename it to blkdev_aio_write, export it _GPL
> > only and make it very clear it's only for block devices and raw.
> Yes, fine with me. I'll replace my patch with yours so that we don't
> rename the function twice unnecessarily.

It's not a replacement, it's on top of yours. But folding it into yours
would make a lot of sense.

> > And btw, I'm not actually sure it is the right thing for raw. Raw is
> > supposed to do direct I/O only, and in fact forced O_DIRECT on. Because
> > there are no holes it also can't fall back to direct I/O. So strictly
> > spreaking we could just use __generic_file_aio_write directly. That
> > is until we care about the hw disk caches..
> I'm slightly confused with the above - probably you mean it cannot fall
> back to buffered I/O and it could use generic_file_direct_write (because
> __generic_file_aio_write is just blkdev_aio_write without syncing in case
> of O_SYNC).

It cannot fall back to buffered I/O, yes. And given that it does not
do buffered I/O, and the block/raw device also doesn't have any
inode metadata, we could just use __generic_file_aio_write directly.
That is, until my patch to flush the disk cache in ->fsync goes in,
in which case we'll at least need that one again. But we might just be
better off open-coding that instead of really using fsync - that avoids
the superfluous call to filemap_write_and_wait and performs the cache
flush without i_mutex, which we don't need.
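To illustrate the open-coding suggestion above, here is a rough kernel-style sketch - not the actual patch, and the exact helper names and signatures here (e.g. blkdev_issue_flush's error_sector argument) are assumptions based on kernels of that era - of an O_SYNC path that writes back just the dirty range and flushes the disk cache directly, without taking i_mutex:

```c
/* Hypothetical sketch only -- not the patch under discussion. */
static ssize_t blkdev_aio_write(struct kiocb *iocb, const struct iovec *iov,
				unsigned long nr_segs, loff_t pos)
{
	struct file *file = iocb->ki_filp;
	struct block_device *bdev = I_BDEV(file->f_mapping->host);
	ssize_t ret;

	ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos);

	if (ret > 0 && (file->f_flags & O_SYNC)) {
		ssize_t err;

		/*
		 * Open-coded sync: write back only the range we just
		 * wrote, then flush the device cache.  No i_mutex, and
		 * no full-file filemap_write_and_wait as vfs_fsync
		 * would do.
		 */
		err = filemap_write_and_wait_range(file->f_mapping,
						   pos, pos + ret - 1);
		if (!err)
			err = blkdev_issue_flush(bdev, NULL);
		if (err)
			ret = err;
	}
	return ret;
}
```

The point of the sketch is just that the range writeback plus cache flush can be done inline after the write returns, so the generic fsync path (and its locking) never enters the picture.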

That is the story for the block device. The raw device is more
difficult, as I would be surprised if its users called fsync on it.
Then again, that would require us to find those users first - although
they apparently exist, since removal of this horrible raw device
feature was vetoed by the big distros.