On Fri, 03 Apr 2009 14:59:12 -0400 Jeff Garzik <jeff@xxxxxxxxxx> wrote:

> Lennart Sorensen wrote:
> > On Fri, Apr 03, 2009 at 10:46:34AM -0400, Mark Lord wrote:
> > > It's pretty painful for super-large files with lots of metadata.
> > > My Myth box here was running 2.6.18 when originally set up,
> > > and even back then it still took *minutes* to delete large files.
> > > So that part hasn't really changed much in the interim.
> > > Because of the multi-minute deletes, the distro shutdown scripts
> > > would fail, and power off the box while it was still writing
> > > to the drives.  Ouch.
> > > That system has had XFS on it for the past year and a half now,
> > > and for Myth, there's no reason not to use XFS.  It's great!
> >
> > Mythtv has a 'slow delete' option that I believe works by slowly
> > truncating the file.  Seems they believe that ext3 is bad at handling
> > large file deletes, so they try to spread out the pain.  I don't
> > remember if that option is on by default or not.  I turned it off.

yeah.
There's a dirty hack you can do where you append one byte to the file
every 4MB, across 1GB (say). That will then lay the file out on-disk as
one bitmap block
one data block
one bitmap block
one data block
one bitmap block
one data block
one bitmap block
one data block
<etc>
lots-of-data-blocks
So when the time comes to delete that gigabyte, the bitmap blocks are
only one block apart, and reading them back is much faster.
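A minimal sketch of that layout hack might look like the following. The function name, parameters, and the 4MB/1GB figures are illustrative (the 4MB stride and 1GB span come from the description above); it simply touches one byte per stride before any real data is streamed into the file, which is the mechanism being described, not a guarantee of any particular on-disk layout:

```c
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <unistd.h>

/* Touch one byte every `stride` bytes across the first `span` bytes of
 * the file, so the filesystem allocates the data blocks (and the bitmap
 * blocks that describe them) in one tight run rather than scattered.
 * Returns 0 on success, -1 on a write error. */
int preallocate_stride(int fd, off_t span, off_t stride)
{
    for (off_t off = 0; off < span; off += stride) {
        /* pwrite() of "" stores a single '\0' byte at offset `off` */
        if (pwrite(fd, "", 1, off) != 1)
            return -1;
    }
    return 0;
}
```

One would call something like `preallocate_stride(fd, 1024 * 1024 * 1024, 4 * 1024 * 1024)` on a freshly opened recording file before the streaming writes begin.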
That was one of the gruesome hacks I did way back when I was in the
streaming video recording game.
Another was the slow-delete thing.
- open the file
- unlink the file
- now sit in a loop, slowly nibbling away at the tail with
ftruncate() until the file is gone.
The open/unlink was there so that if the system were to crash midway,
ext3 orphan recovery at reboot time would fully delete the remainder of
the file.