The criterion is not 'rm', as has been said here
earlier; the criterion is losing the last reference
to the given file.
Let's say we have a large directory with lots of files
to clean up. If each reference loss spawned a new
thread without bound, we would soon have thousands
of threads doing the deletes.
Even if such a thing gave fast response times,
there must be some limit on how many of them can be
running at the same time.
As cleanups of small files happen faster than
cleanups of large files, and presuming the number of large
files to be deleted is fairly small, one can guess that
a limit of around 10-20 deleter threads would be just what the
doctor ordered. If the limit is reached, the deleter blocks
until the count comes down again (and then starts a new
thread, and increments the count..)
Would you write it? It should be a general VFS-layer thing.
> IMHO,
> Sang Kang
/Matti Aarnio <matti.aarnio@sonera.fi>
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/