Mark Hahn wrote:
>
> > You're right. However, for each hash table operation to be O(1) the size
> > of the hash table must be >> n.
>
> there's at least one kind of HT where the table starts small
> and gets bigger, but at trivial cost (memcpy). while those
> memcpy's are O(n) each time, it's a little misleading to treat
> them as costing the same as O(n) rehashing.
>
memcpy() isn't exactly trivial, especially not when we're talking about
disk storage. Note, too, that we're talking about storage in a
filesystem, and random access to a large, growable linear space (i.e. a
file) in a filesystem is O(log n) because of the necessary inode
indirection.
That's yet another reason I like the idea of using B-trees over hash
values: B-trees are O(log n), but do not need the file inode indirection
to do the job, so what you end up with is very nice and fast.
-hpa
-- 
<hpa@transmeta.com> at work, <hpa@zytor.com> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt
This archive was generated by hypermail 2b29 : Fri Feb 23 2001 - 21:00:25 EST