Re: Question regarding to store file system metadata in database
From: Al Viro
Date: Mon Mar 20 2006 - 14:56:53 EST
On Mon, Mar 20, 2006 at 02:36:51PM -0500, Xin Zhao wrote:
> OK. Now I have more experimental results.
>
> After excluding the cost of reading the file list and doing stat(), the
> insertion rate becomes 587/sec instead of 300/sec. The query rate is
> 2137/sec. I am running MySQL 4.1.11 on FC4, with a 2.8 GHz CPU and 1 GB of memory.
>
> 2137/sec seems to be good enough to handle pathname-to-inode
> resolution. Does anyone have statistics on how many file opens occur
> in a busy file system?
This is still ridiculously slow. From cold cache (i.e. with a lot of IO),
cp -rl linux-2.6 foo1 takes 1.2s here. That's at least about 50000
operations. On a slower CPU, BTW, with half the RAM you have.
Moreover,
al@duke:~/linux$ time mv foo1 foo2
real 0m0.002s
user 0m0.000s
sys 0m0.001s
Now, try _that_ on your setup. If you are using the entire pathname as
the key, you are FUBAR - updating the key in 20-odd thousand records is
going to _hurt_. If you are splitting the pathname into components,
you've just reproduced the fs directory structure and shown that your
fs layout is too fscking slow. Not to mention the fun with symlink
implementation, or handling mountpoints.
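[A sketch of the full-pathname-key problem, for readers following along.
This is a hypothetical toy schema, not the poster's actual one, and
SQLite stands in for MySQL; the table name and paths are illustrative.]

```python
# Why full-pathname keys make directory rename expensive:
# a rename(2) on a real fs is one operation, but with paths as keys,
# every descendant row's key must be rewritten.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (path TEXT PRIMARY KEY, ino INTEGER)")

# Populate a toy tree of 20000 files under /foo1, mirroring the
# "20-odd thousand records" in a linux-2.6 source copy.
db.executemany("INSERT INTO files VALUES (?, ?)",
               [("/foo1/dir%d/file%d" % (i, i), i) for i in range(20000)])

# The database equivalent of `mv foo1 foo2`: rewrite the key prefix
# of every row under /foo1 (substr is 1-indexed; position 6 is the
# character after "/foo1").
cur = db.execute("UPDATE files SET path = '/foo2' || substr(path, 6) "
                 "WHERE path LIKE '/foo1/%'")
print(cur.rowcount)  # 20000 rows touched for a single directory rename
```

On the filesystem side, the same rename costs O(1) regardless of how
many files live under the directory, which is why the 0.002s `mv`
above is not reproducible with this table layout.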
You are at least an order of magnitude off in performance (more, actually),
and I still don't see any reason for what you are trying to do.