Re: [RFC] Limit the size of the IPV4 route hash.

From: Robin Holt
Date: Fri Dec 10 2004 - 16:02:03 EST


>
> I can definitely say that just forcing it to use 1 page is wrong.
> Even ignoring your tests, your test was on a system that has 16K
> PAGE_SIZE. Other systems use 4K and 8K (and other) PAGE_SIZE
> values. This is why we make our calculations relative to PAGE_SHIFT.

I picked 1 page because it was not some magic small number that I would
need to justify. I also hoped that it would incite someone to respond.

>
> Also, 1 page even in your case is (assuming you are on a 64-bit platform,
> you didn't mention) going to give us 1024 hash chains. A reasonably
> busy web server will easily be talking to more than 1K unique hosts at
> a given point in time. This is especially true as slow long distance
> connections bunch up.

But 1K hosts is not the limit with a 16K page. There are 1K buckets,
but each bucket heads a list, and a reasonably well designed hash will
scale to more than one item per bucket. Additionally, the small
percentage of web servers with enough network traffic to be affected by
the chain depth can set rhash_entries for their specific needs.
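
To make that concrete, here is a rough sketch of a chained hash lookup.
This is not the kernel's route cache code; route_entry, rt_hash_table
and rt_hash_mask are made-up names and the real entries carry far more
state, but it shows why the bucket count does not cap the number of
cached destinations, only the average chain length:

	/*
	 * Rough sketch only -- not the actual route cache structures.
	 * A real table would also run the key through a hash function
	 * before masking.
	 */
	struct route_entry {
		struct route_entry *next;	/* chain within one bucket */
		unsigned int daddr;		/* destination key (simplified) */
	};

	static struct route_entry **rt_hash_table;	/* the bucket array */
	static unsigned int rt_hash_mask;		/* buckets - 1, power of two */

	static struct route_entry *rt_lookup(unsigned int daddr)
	{
		struct route_entry *e;

		/* walk the chain hanging off the bucket this key maps to */
		for (e = rt_hash_table[daddr & rt_hash_mask]; e; e = e->next)
			if (e->daddr == daddr)
				return e;
		return NULL;
	}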

>
> Alexey Kuznetsov needed to use more than one page on his lowly
> i386 router in Russia, and this was circa 7 or 8 years ago.

And now he, for his special case, could set rhash_entries to increase
the size.
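
For instance (the value here is purely illustrative, the right number is
workload specific):

	rhash_entries=262144

on the kernel boot command line sizes the table explicitly instead of
letting it scale with total memory.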

I realize I have a special case which highlighted the problem. My case
shows that having no upper limit, or at least no drastically aggressive
non-linear growth cap, does cause issues. On the really large system we
were seeing a 512MB hash table, and it was only that small because 512MB
was the largest amount of memory available on a single node. I cannot
imagine that being a reasonable limit: even with 512 cpus and 1024
network adapters I could not envision that level of hashing actually
being advantageous, given all the other lock contention that will be
seen.
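
To put that in perspective, using the 16 bytes per bucket implied by the
1K-chains-per-16K-page figure above:

	512MB / 16 bytes per bucket = 32M buckets (2^25 chains)

which is orders of magnitude more chains than the number of destinations
such a machine could plausibly be talking to at once.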

Can we agree that a linear calculation based on num_physpages is probably
not the best algorithm? If so, should we make it linear up to a limit,
or a size that grows logarithmically up to a limit? How do we determine
that limit point?
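
To illustrate the difference between the two options, here is a throwaway
sketch. The cap, the 1K-buckets-per-doubling factor, and the function
names are placeholders rather than a proposal of actual numbers, and
"pages" stands in for num_physpages:

	/*
	 * Throwaway comparison of the two sizing policies.  HASH_MAX_ORDER
	 * and the per-doubling factor are placeholders; a real table would
	 * also round the result to a power of two.
	 */
	#include <stdio.h>

	#define HASH_MAX_ORDER	20		/* placeholder cap: 2^20 buckets */

	/* Roughly the current behaviour: buckets grow linearly with
	 * memory, here simply clipped at the cap. */
	static unsigned long hash_size_linear_capped(unsigned long pages)
	{
		unsigned long buckets = pages;	/* stand-in for pages * factor */

		if (buckets > (1UL << HASH_MAX_ORDER))
			buckets = 1UL << HASH_MAX_ORDER;
		return buckets;
	}

	/* Alternative: a fixed number of extra buckets per doubling of
	 * memory, so growth tails off logarithmically. */
	static unsigned long hash_size_log(unsigned long pages)
	{
		unsigned int log2_pages = 0;
		unsigned long buckets;

		while (pages >>= 1)
			log2_pages++;

		buckets = (unsigned long)(log2_pages + 1) * 1024;
		if (buckets > (1UL << HASH_MAX_ORDER))
			buckets = 1UL << HASH_MAX_ORDER;
		return buckets;
	}

	int main(void)
	{
		unsigned long pages;

		for (pages = 1UL << 10; pages <= 1UL << 26; pages <<= 4)
			printf("%10lu pages -> linear %8lu buckets, log %6lu buckets\n",
			       pages, hash_size_linear_capped(pages),
			       hash_size_log(pages));
		return 0;
	}

With the placeholder cap the linear version hits the ceiling as soon as
memory gets large, while the logarithmic version adds a fixed increment
per doubling of memory, so even very large machines stay within a few
tens of thousands of buckets.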