Re: rm a_large_file takes too long under linux-2.2.1 (also unSTOPable)

Matti Aarnio (matti.aarnio@sonera.fi)
Thu, 11 Feb 1999 15:47:12 +0200 (EET)


Sang Kang <kernel@mocha.sarang.net> wrote:
....
> For the removing speed issue, I remember there was a patch that can be
> applied to 2.0.36 that speeds up the deletion process quite a bit (on
> Linuxmama I believe). I don't know if there was any discussion about
> how safe it is, but I loved the patch.
>
> For the stopping issue, it has been my habit to suspend (= STOP) and do
> 'bg' whenever I want to do something else immediately after issuing 'rm'.
> I don't see why the foreground process 'rm' must be blocked: whenever
> an 'rm' is issued, the kernel can simply launch a thread that removes
> the inodes - there is no point in blocking the process.

The criterion is not 'rm'; as has been said here
earlier, the criterion is losing the last reference
to the given file.
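
To make that concrete, here is a small userspace illustration (not
kernel code; the filename is just a placeholder, and how long close()
actually takes depends on the filesystem and file size) showing that
the expensive truncation happens when the last reference goes away,
not when unlink() is called:

/* unlink() returns quickly while an open fd still references the
 * inode; the blocks are freed only when the last reference (here,
 * the fd) is dropped. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
	struct timeval t0, t1;
	int fd = open("bigfile", O_RDONLY);	/* some pre-existing large file */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	unlink("bigfile");	/* fast: the fd still holds a reference */

	gettimeofday(&t0, NULL);
	close(fd);		/* last reference lost: truncation happens here */
	gettimeofday(&t1, NULL);

	printf("close() took %ld us\n",
	       (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec));
	return 0;
}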

Let's say we have a large directory with lots of files
to clean up: if each reference loss caused a new
thread to be created without bounds, we would soon have
thousands of threads doing the deletes.

Even if such a thing gave fast response times,
there must be some limit on how many of them can be
running at the same time.

As the cleanup of small files would happen faster than
the cleanup of large files, and presuming the count of large
files to be deleted is fairly small, one can guess that
a limit of around 10-20 deleter threads would be just what the
doctor ordered. If the limit is exceeded, the deleter blocks
until the count comes down again (and then starts a new
thread and increments the count).
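
Here is a minimal userspace sketch of that scheme, with POSIX threads
and a counting semaphore standing in for whatever the VFS would
actually use; all names, and the limit of 16, are just placeholders:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_DELETERS 16		/* cap in the suggested 10-20 range */

static sem_t deleter_slots;	/* counts free deleter slots */

struct delete_job {
	char path[256];		/* stand-in for "inode to truncate" */
};

static void *deleter_thread(void *arg)
{
	struct delete_job *job = arg;

	/* Stand-in for the slow part: freeing the file's blocks. */
	printf("deleting %s\n", job->path);
	free(job);

	sem_post(&deleter_slots);	/* give the slot back */
	return NULL;
}

/* Called when the last reference to a file is dropped. */
static void schedule_delete(const char *path)
{
	pthread_t tid;
	struct delete_job *job = malloc(sizeof(*job));

	if (!job)
		return;
	snprintf(job->path, sizeof(job->path), "%s", path);

	sem_wait(&deleter_slots);	/* block if MAX_DELETERS already run */
	pthread_create(&tid, NULL, deleter_thread, job);
	pthread_detach(tid);		/* caller does not wait for completion */
}

int main(void)
{
	char name[32];
	int i;

	sem_init(&deleter_slots, 0, MAX_DELETERS);

	for (i = 0; i < 100; i++) {
		snprintf(name, sizeof(name), "file%03d", i);
		schedule_delete(name);	/* returns at once while slots remain */
	}
	pthread_exit(NULL);		/* let outstanding deleters finish */
	return 0;
}

The point is only the shape: schedule_delete() returns immediately
while slots are free and blocks once MAX_DELETERS deletions are
already in flight.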

Would you write it? It should be a general VFS-layer thing.

> IMHO,
> Sang Kang

/Matti Aarnio <matti.aarnio@sonera.fi>
