Re: DEVFSv50 and /dev/fb? (or /dev/fb/? ???)

Hans Reiser (reiser@idiom.com)
Fri, 07 Aug 1998 15:25:45 -0700


Shawn Leas wrote:

> On Fri, 7 Aug 1998, Chris Wedgwood wrote:
>
> > On Thu, Aug 06, 1998 at 03:56:16PM -0500, Shawn Leas wrote:
> >
> > > One with millions of inodes even with btree will be slow. I've
> > > benchmarked reiserfs, have you???
> >
> > Can you supply more details?
> >
> > Unless your btree is hosed, searching (say) 10 million records for a
> > key should be pretty fast...
>
> Oh yeah, and just remember when thinking about btree metadata, btrees in
> an FS are a little more complex than your college red/black or two-three
> tree experiments... That was just memory, and you didn't care how it was
> arranged, just that the algorithm was sound. Magnetic media gets hairy,
> right Hans?
>
> -Shawn
> <=========== America Held Hostage ===========>
> Day 2025 for the poor and the middle class.
> Day 2044 for the rich and the dead.
> 897 days remaining in the Raw Deal.
> <============================================>

Yeah, I thought btrees in the FS would be simple until we tried it. I mean,
how complicated could it get....? We are still finding factor-of-two write
speedups with 4-line tuning changes.....
Now we are tuning reads.....

It can be worth remembering that no matter what you do, readdir on a large
directory will be slow, and a lot of utilities assume readdir is fast. I
think that if you use reiserfs for a million-entry directory it won't be
reiserfs that will collapse in performance, it will be things like the shell
and find.

With respect to your test, I assume you understand that if you had used a
1-million-entry directory, it would be more than a 30-to-1 speedup. Also, you
should be aware that "find", not reiserfs, is probably the bottleneck in your
test. I haven't really done good tests of how fast reiserfs is for really
large directories. I would have to carefully select the test so that it
wasn't the utility that was breaking under the impact of a 1 million entry
directory. I seem to remember running benchmarks last year based on
find /testfs -name 'some_pattern' -exec some_fs_altering_action {} \; on
just a little 16k-entry directory and seeing find consume most of the CPU as
user, not kernel, CPU. Oh well. I think I would need to write a C program to
benchmark reiserfs for large directories, and at the time I was trying to
write benchmarks that were simple enough to be precisely described in a paper
in a few lines. Besides, my thought was: why go into gory detail showing that
ext2fs can't handle large directories well? It wasn't
designed to do that, so it isn't fair to complain about it. Better that I
spend my time getting reiserfs ready to ship.....

Hans

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.altern.org/andrebalsa/doc/lkml-faq.html