>There are around 390,000 directories holding those files. Just how big did
>you want to the directory cache to get!?
I think that the easiest solution is to rewrite Squid to use some sort
of database instead of the file system. The file system works OK for a
small database (say an INN spool for part of the comp hierarchy, or a 500 meg
Squid cache), but when you want a really large database then I think that
an actual database is what you need. I believe that INN is going this way
in version 1.7.
Perhaps it would be worth your while grabbing some INN 1.7 source
code; it's free, so you can rip out the database stuff and put it in Squid
(the accounts I've heard suggest that INN 1.7 will support raw partitions
as databases, among many other nice features). If you modify Squid you just
modify one program, and if it crashes it'll automatically restart. If you
modify the ext2 file system you'll have to change the FS drivers in the
kernel as well as the e2fsck and mke2fs programs, and if that crashes your
whole system stops (if you're lucky).
What Squid currently does is convert internal index numbers into
dirname/dirname/filename combinations and then use those for accessing the
data. If it could use the index numbers to look up a database table
directly it would save a lot of stuffing around and should give a great
performance increase.
--
-----------------------------------------------------------
In return for "mailbag contention" errors from buggy Exchange servers
I'll set my mail server to refuse mail from your domain. The same
response applies when a message to a postmaster account bounces.
"Russell Coker - mailing lists account" <bofh@snoopy.virtual.net.au>
-----------------------------------------------------------