Re: [PATCH v3] ocfs2: Let ocfs2_setattr use new truncate sequence.

From: Tao Ma
Date: Thu Jun 10 2010 - 04:45:23 EST




On 06/10/2010 04:27 PM, Christoph Hellwig wrote:
> On Thu, Jun 10, 2010 at 01:08:05PM +0800, Tao Ma wrote:
> > Let ocfs2 use the new truncate sequence. The changes include:
> > 1. Use truncate_setsize directly since we don't implement our
> >    own ->truncate and what we need is "update i_size and
> >    truncate_pagecache", which truncate_setsize now does.
> > 2. For direct writes, ocfs2 doesn't allow a write to pass
> >    i_size (see ocfs2_prepare_inode_for_write), so we never get
> >    a chance to increase i_size. So remove the bogus check.
>
> You just leave the duplicate inode_newsize_ok in, even though there
> is already one as part of inode_change_ok. See the previous thread -
> we'll need to move inode_change_ok under the cluster locks, both
> for the truncate and the non-truncate case.
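For reference, the truncate_setsize() helper that the patch description relies on is the new generic VFS helper from the truncate rework; at this point it is roughly the following (a from-memory sketch of mm/truncate.c for this era - the exact truncate_pagecache() signature may differ between kernel versions):

	void truncate_setsize(struct inode *inode, loff_t newsize)
	{
		loff_t oldsize = inode->i_size;

		/* update i_size ... */
		i_size_write(inode, newsize);
		/* ... and drop any pagecache beyond the new size */
		truncate_pagecache(inode, oldsize, newsize);
	}

So a filesystem that already does all of its real truncate work itself, as ocfs2 does above, only needs this call instead of its own ->truncate method.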
uh, I just didn't change the original inode_change_ok call. Maybe you are right that we should do all these checks under the cluster lock, but it looks as if it was written like this intentionally.

Mark and Joel, do you have any opinion on why it is written like this, or is it a bug?
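For context, the ordering Christoph is asking for would look roughly like the sketch below: take the cluster lock first, and only then run the generic checks. This is a simplified illustration of the shape of ocfs2_setattr(), not the actual patch; error handling, quota and the truncate/extend paths are omitted.

	static int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
	{
		struct inode *inode = dentry->d_inode;
		struct buffer_head *bh = NULL;
		int status;

		/* Take the cluster lock first ... */
		status = ocfs2_inode_lock(inode, &bh, 1);
		if (status < 0)
			return status;

		/*
		 * ... so that inode_change_ok() (which also performs the
		 * inode_newsize_ok() check for ATTR_SIZE) runs against an
		 * i_size/uid/gid that cannot change under us from another
		 * node.
		 */
		status = inode_change_ok(inode, attr);
		if (status)
			goto bail_unlock;

		/* truncate/extend and the attribute update transaction
		 * would follow here, as in the current code */

	bail_unlock:
		ocfs2_inode_unlock(inode, 1);
		brelse(bh);
		return status;
	}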

> > 	/*
> > +	 * Since all the work for a size change has been done above.
> > +	 * Call truncate_setsize directly to change size and truncate
> > +	 * pagecache.
> > 	 */
> > 	if ((attr->ia_valid & ATTR_SIZE) &&
> > +	    attr->ia_size != inode->i_size)
>
> this could be on one line now.
ok, I will regenerate the patch after I get feedback from Mark and Joel.
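Presumably the regenerated hunk would then collapse the check into a single line, something like:

	if ((attr->ia_valid & ATTR_SIZE) && attr->ia_size != inode->i_size)
		truncate_setsize(inode, attr->ia_size);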

> > +	truncate_setsize(inode, attr->ia_size);
>
> But is there any reason this isn't done inside the
>
> 	if (size_change && attr->ia_size != inode->i_size) {
>
> conditional above? You'll never get size and uid/gid changes in the
> same request, so there won't be any change in behaviour.
Because we want the inode change to happen in a transaction. Inside that conditional we do the truncate/extend in one transaction, and only after it is done do we start a new transaction that updates the inode info.
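So the structure being defended is roughly the following fragment of ocfs2_setattr() (a simplified sketch with locking, error handling and journal credits trimmed; treat the exact helper signatures as approximate, and the variables as declared in the real function):

	/*
	 * The truncate/extend runs in its own journal transaction(s),
	 * started and committed inside these helpers.
	 */
	if (size_change && attr->ia_size != i_size_read(inode)) {
		if (attr->ia_size > i_size_read(inode))
			status = ocfs2_extend_file(inode, bh, attr->ia_size);
		else
			status = ocfs2_truncate_file(inode, bh, attr->ia_size);
	}

	/*
	 * Only afterwards does a new, separate transaction update the
	 * remaining inode attributes (uid/gid/mode/times) on disk.
	 */
	handle = ocfs2_start_trans(OCFS2_SB(inode->i_sb),
				   OCFS2_INODE_UPDATE_CREDITS);
	/* copy the new attributes into the in-core inode here */
	status = ocfs2_mark_inode_dirty(handle, inode, bh);
	ocfs2_commit_trans(OCFS2_SB(inode->i_sb), handle);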

Regards,
Tao