Re: Deleting large files

From: Christoph Hellwig
Date: Sun May 11 2008 - 07:17:09 EST


On Thu, May 08, 2008 at 11:19:06AM +0300, Matti Aarnio wrote:
> This very question has troubled Squid developers. Whatever the system, an unlink()
> that really frees disk space does so in unbounded time, and in a service where one
> millisecond is already a long wait, the solution has been to run a separate
> subprocess that actually performs the unlinks.
>
> Squid is not threaded software, and it was created long ago, when threads were
> rare and implementations differed in subtle details -- so it uses no threads at all.
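
For reference, a minimal sketch of the subprocess approach the quote describes,
in the spirit of Squid's unlinkd but not its actual code: the service process
hands pathnames to a forked helper over a pipe, and the helper performs the
unlink() calls so the service never blocks on them. The pipe protocol and the
helper function names here are invented for illustration.

/*
 * Sketch: offload unlink() to a helper subprocess so the main process
 * never waits for the filesystem to free disk space.  Error handling
 * is kept to a minimum; this is an illustration, not production code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

static int start_unlink_helper(void)   /* returns write end of the pipe */
{
    int fds[2];

    if (pipe(fds) < 0) {
        perror("pipe");
        exit(1);
    }

    switch (fork()) {
    case -1:
        perror("fork");
        exit(1);
    case 0: {                          /* child: read paths, unlink them */
        FILE *in = fdopen(fds[0], "r");
        char path[4096];

        close(fds[1]);
        while (in && fgets(path, sizeof(path), in)) {
            path[strcspn(path, "\n")] = '\0';
            if (unlink(path) < 0)
                perror(path);
        }
        _exit(0);
    }
    default:                           /* parent: keep only the write end */
        close(fds[0]);
        return fds[1];
    }
}

int main(int argc, char **argv)
{
    int fd = start_unlink_helper();
    int i;

    /* Queue every pathname given on the command line for deletion;
     * the write() returns as soon as the name is in the pipe buffer. */
    for (i = 1; i < argc; i++) {
        write(fd, argv[i], strlen(argv[i]));
        write(fd, "\n", 1);
    }

    close(fd);                         /* helper sees EOF and exits */
    wait(NULL);
    return 0;
}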

I'd call long times for the final unlink a bug in the filesystem.
There's not all that much to do when deleting a file: you basically
return the allocated space to the free-space allocator, then mark the
inode as unused and return it to the inode allocator. The first step
may take quite a while with an indirect block scheme, but with an
extent-based filesystem it shouldn't be a problem. The latter
shouldn't take too long either, and with a journaling filesystem it's
even easier because you can intent-log the inode deletion first and
then perform it later, e.g. as part of a batched write-back of the
inode cluster.
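
As a toy illustration of that intent-log idea (not the journal format or code
of any real filesystem; every type and helper name below is made up), the fast
path only appends a small delete-intent record, and the expensive block-map
walk and allocator updates happen later in a batched pass:

/*
 * Toy sketch of intent-logged deletion: unlink's final step records a
 * tiny "this inode is dead" entry and returns; the real reclaim work is
 * deferred and batched.  A crash in between can be recovered by replaying
 * the logged intents, so the space is never leaked.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_DEFERRED 64

struct delete_intent {
    uint64_t ino;                     /* inode number to reclaim         */
};

static struct delete_intent journal[MAX_DEFERRED];  /* stand-in journal  */
static int nr_intents;

/* Fast path: what the last unlink() of a file would do before returning. */
static void log_delete_intent(uint64_t ino)
{
    journal[nr_intents++].ino = ino;  /* small, bounded-time journal write */
    printf("logged delete intent for inode %llu\n",
           (unsigned long long)ino);
}

/* Placeholder for the real work: walk the extent map or indirect blocks,
 * return the space to the free-space allocator, then free the inode.     */
static void reclaim_inode(uint64_t ino)
{
    printf("reclaiming blocks and inode %llu\n", (unsigned long long)ino);
}

/* Slow path: run later, e.g. as part of batched write-back of the inode
 * cluster, processing all logged intents in one pass.                    */
static void process_deferred_deletes(void)
{
    int i;

    for (i = 0; i < nr_intents; i++)
        reclaim_inode(journal[i].ino);
    nr_intents = 0;
}

int main(void)
{
    log_delete_intent(1001);          /* unlink() would return after this */
    log_delete_intent(1002);
    process_deferred_deletes();       /* batched, off the latency path    */
    return 0;
}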