>>> When I imagine a system with a large memory space, for example 1 GB,
>>> running several hundreds of processes (thousands?) performing writes, all
>> If you have 1 GB of memory you can spare a few bytes for a
>> larger queue. Although against this, we can argue that such a large
>> system *should* already have an intelligent disk controller (RAID?) that
>> has sufficiently large buffers to do these optimizations.
> 1 GB -- a large system? Nah, that's next year's desktop!
Don't overdo it. I think 1 GB of RAM won't be 'standard' before 2000. If we
take Moore's Law (twice as much every 18 months for storage space and
processor power) and we're at 64 MB and 300 MHz now, we'd be at about
128-196 MB and 500-600 MHz at the end of 1999 - at most.
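The extrapolation above can be sketched in a few lines of Python. This is just an illustration of the poster's arithmetic, assuming an idealized doubling every 18 months; the starting figures (64 MB, 300 MHz in mid-1998, about 18 months before the end of 1999) are taken from the text.

```python
# Moore's Law as used above: the quantity grows by a factor of
# 2**(months / 18), i.e. it doubles every 18 months.

def moores_law(start, months, doubling_period=18):
    """Extrapolate a quantity that doubles every `doubling_period` months."""
    return start * 2 ** (months / doubling_period)

# Roughly 18 months from mid-1998 to the end of 1999:
ram_mb = moores_law(64, 18)    # -> 128.0 (MB)
cpu_mhz = moores_law(300, 18)  # -> 600.0 (MHz)
print(ram_mb, cpu_mhz)
```

One doubling period lands exactly on the low end of the poster's 128-196 MB / 500-600 MHz estimate.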
> Microsoft Windows 3.1: could be run in 4-8 MB
> Microsoft Windows 95 & 98: prefers to run in 64-128 MB
Win95a (the very first one) was quite happy with 16 MB.
> therefore, Microsoft Windows 2000 will require....
One "advantage" of Win95 was, though, that your local dealer stopped selling
4 MB PCs. RAM was expensive in those days, and one of the things that KEPT
users from installing a 'real' OS (like OS/2 or probably even Linux) and
using it was that most PCs didn't have more than 4 MB of RAM.
So the cycle was this: get OS/2, install it, see it crawling because 4 MB
isn't enough, start ranting, delete it, get Win95, see it crawling, and
upgrade your machine because you suddenly don't seem to have a choice any
more. So:
OS/2 is dog-slow -> OS/2 sucks
Win95 crawls -> your hardware needs to be upgraded
--
_ciao, Jens_______________________________ http://www.pinguin.conetix.de
cat /dev/boiler/water | tea | sieve > /cup
mount -t hdev /dev/human/mouth01 /mouth ; cat /cup >/mouth/gulp