Re: very poor ext3 write performance on big filesystems?

From: Tomasz Chmielewski
Date: Mon Feb 18 2008 - 11:17:26 EST


Theodore Tso wrote:

>> Are there better choices than ext3 for a filesystem with lots of hardlinks? ext4, once it's ready? xfs?

> All filesystems are going to have problems keeping inodes close to
> directories when you have huge numbers of hard links.

> I'd really need to know exactly what kind of operations you were
> trying to do that were causing problems before I could say for sure.
> Yes, you said you were removing unneeded files, but how were you doing
> it? With rm -r of old hard-linked directories?

Yes, with rm -r.


> How big are the average files involved? Etc.

It's hard to estimate the average file size; I'd say not many files are bigger than 50 MB.

Basically, it's a filesystem where backups are kept. Backups are made with BackupPC [1].

Imagine a full rootfs backup of 100 Linux systems.

Instead of compressing and writing "/bin/bash" 100 times, once for each separate system, we do it once and hardlink. Then, keeping 40 backups of each system, a single pooled file like "/bin/bash" ends up with 100 x 40 = 4000 hardlinks.

For individual or user files, the number of hardlinks will of course be smaller.

The directories I want to remove usually have the structure of a "normal" Linux rootfs, nothing special there (other than that most of the files have multiple hardlinks).
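To illustrate the scheme (this is just a sketch of the idea using GNU cp's --link option, not BackupPC's actual pooling code, which hashes file contents into a pool directory): each new snapshot hardlinks unchanged files against the previous one, so identical files share a single inode and are stored only once.

```shell
#!/bin/sh
# Sketch: hardlink-based snapshots. Paths and file names are made up
# for the demo; requires GNU coreutils (cp -al, stat -c).
set -e
d=$(mktemp -d)

mkdir "$d/src"
echo hello > "$d/src/bash"          # stand-in for /bin/bash

cp -a  "$d/src"      "$d/backup.1"  # first backup: a real copy
cp -al "$d/backup.1" "$d/backup.2"  # second backup: hardlinks only,
                                    # no new data blocks written

stat -c %h "$d/backup.2/bash"       # link count: both snapshots
                                    # point at the same inode
rm -rf "$d"
```

With 100 systems and 40 retained snapshots, the same mechanism is what produces the 4000-link files mentioned above; removing one old snapshot with rm -r then mostly just decrements link counts scattered across the disk, which is where the seek-heavy deletion cost comes from.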


I noticed that write-back caching helps a tiny bit, but as dm and md don't support write barriers, I'm not very eager to use it.


[1] http://backuppc.sf.net
http://backuppc.sourceforge.net/faq/BackupPC.html#some_design_issues



--
Tomasz Chmielewski
http://wpkg.org

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/