> This way, if files like databases are being randomly accessed and someone
> sequentially accesses a large file or a bunch of files, it won't throw out
> all of the database pages.
That would be a better testcase than the one I have used, but harder to
verify in the end...
>
> The buffer cache should grow to fill available memory, it should even grow
> to cause executable data/code to be paged out, but only when there is a
> high likelihood that the pages already in the buffer cache will be used
> again in the near future.
Agreed, provided that doing so does not impair responsiveness. I checked
a bit further on my initial post, and it turned out that all of the cache
was filled with pages that were never reused (as it should be). But a lot
of the swapped-out pages had to be swapped back in during the run of the
test, because the applications happened to sleep just a tad too long to
stay in memory: they were actually waiting for some I/O that was slow
because of the tar going on, got swapped out (which was slow because of
the I/O), and were then swapped back in (because of the I/O ...).
>
> Jim
>
> (Former kernel hacker on a now-defunct operating system that was ahead of
> its time...)
Well, I would say that this OS had the right idea. Why not implement such
a scheme for Linux? This sounds like a fun project for my spare time, so
count me in on this...
Btw, wasn't PRIMOS more or less a real-time OS that was abused by some
companies to do accounting (which sucked, because transactions were a
pain in the neck on these machines)? I faintly remember working with
such machines about a decade ago...
greetings
Karl Günter Wünsch
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu