I've assembled quite a few small squid caches using Linux..
I've put the cache in a partition of its own, and I've modified the
startup scripts so that if /cache wasn't cleanly unmounted.. it just
remakes the fs.
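A minimal sketch of that rc-script change (the device name, mount point, and exit-code threshold here are my assumptions, not the author's actual script; e2fsck returns >1 when problems remain):

```sh
#!/bin/sh
# Hypothetical rc fragment: try a quick automatic check of the cache
# partition; if anything nontrivial is wrong, recreate the fs instead
# of waiting for a full repair.  /dev/sdb1 and /cache are placeholders.
CACHE_DEV=/dev/sdb1
CACHE_MNT=/cache

e2fsck -p "$CACHE_DEV"
if [ $? -gt 1 ]; then
    # Don't fsck 9 gigs of expendable cache data -- just mkfs it
    # and let squid repopulate the cache from scratch.
    mkfs -t ext2 -q "$CACHE_DEV"
fi
mount "$CACHE_DEV" "$CACHE_MNT"
```

The point of the trade-off is that the cache contents are expendable, so a fresh mkfs (seconds) beats a full e2fsck pass (minutes) every time.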
Furthermore, the system programs are almost all in /usr, which is mounted
read-only..
Losing the cache is bad.. but it's better than downtime..
The solution is probably logging filesystems.. but they tend to be slow..
I'd rather blow the cache on the incredibly rare improper shutdown than
be slow all the time or require a long check at boot..
Actually, if you ever have an improper shutdown, I expect something is
seriously wrong anyway..
On Mon, 13 Apr 1998, Jordan Mendelson wrote:
> Hello all...
> I have a few Squid Internet Object cache servers which handle several
> thousand requests for data on the server per hour. Whenever this machine
> reboots, it always has problems on disk which it has to correct.
> It takes close to half an hour to scan all 9+ gigs of diskspace, which in
> itself isn't a good thing. Usually the boot requires us to go into single
> mode and scan it manually. Most of the problems e2fsck fixes are due to
> duplicated blocks.
> Is there anything I can do to improve this? 30 minutes seems like an
> eternity for such an important service.
> [Linux 2.0.34p7 + newest e2fs tools]
> Jordan Mendelson : http://jordy.wserv.com
> Web Services, Inc. : http://www.wserv.com
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to email@example.com