Re: Swapping in 2.1.103?

Karl Günter Wünsch (Karl.Guenter.Wuensch@neuss.netsurf.de)
Fri, 22 May 1998 09:27:38 +0200


Jim Wilcoxson wrote:
>
> I'm no Linux kernel guru, but it is very likely that there are parts of X
> and Netscape that may be executed once (so the pages come into memory after
> mapping the executable), but then are not used again. It wouldn't make
> sense to effectively "lock" these pages into memory and not make them
> available to the buffer cache just because they belong to an executable.
>
> However, having said that, it also doesn't make sense that doing a tar of
> a filesystem should invalidate the entire buffer cache PLUS page out
> application data.
>
> An older OS I'm familiar with (Primos, from Prime Computer), distinguished
> between sequential and random access files and never used more than 1
> buffer for sequential files. This is hard(er) in Unix because there is no
> distinction, but perhaps there could be something like "if a file has never
> been repositioned, then after it is closed, mark its file buffers so that
> they will be re-used before paging out non-buffer-cache pages". Also,
> sequentially accessing a large file shouldn't wipe out the buffer cache.
> If a file has never been positioned while reading/writing, there is a good
> chance that the data in the buffer cache, except for the page where the
> file pointer is (or future pages for read-ahead) will not be needed in the
> near future. In this case, the number of buffers allocated to the large
> file should be bounded somehow, maybe to just a few buffers, and even these
> would be marked "highly available" after the file is closed. This of course
> wouldn't apply to directory buffers, file indirect buffers, ...
>
Precisely my thoughts.
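
Something along these lines is roughly what I have in mind -- completely
untested, and none of the names below are real kernel identifiers, they
are just placeholders to make the idea concrete:

/*
 * Rough sketch of such a policy -- untested, invented names.
 */
#define SEQ_FILE_MAX_BUFFERS 4      /* cap for a purely sequential file */

struct file_hint {
    int ever_repositioned;          /* any seek flips this */
    int nr_buffers;                 /* buffers currently held by the file */
};

/* Seek path: once a file has been repositioned, treat it as random access. */
static void hint_note_seek(struct file_hint *h)
{
    h->ever_repositioned = 1;
}

/*
 * Before adding another buffer for this file: purely sequential files
 * get only a handful of buffers; beyond that, recycle their own
 * buffers instead of growing the cache at everybody else's expense.
 */
static int hint_may_add_buffer(struct file_hint *h)
{
    if (!h->ever_repositioned && h->nr_buffers >= SEQ_FILE_MAX_BUFFERS)
        return 0;                   /* caller reuses one of this file's buffers */
    h->nr_buffers++;
    return 1;
}

/*
 * Close path: if the file was never repositioned, its buffers are
 * unlikely to be wanted again, so mark them "highly available" --
 * first in line for reuse, before any program pages get paged out.
 */
static int hint_buffers_reusable_on_close(struct file_hint *h)
{
    return !h->ever_repositioned;
}

The nice thing is that the decision costs only a flag and a counter per
open file; everything else in the buffer handling could stay as it is.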

> This way, if files like databases are being randomly accessed and someone
> sequentially accesses a large file or a bunch of files, it won't throw out
> all of the database pages.

That would be a better test case than the one I used, but harder to check
in the end...
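
A quick and dirty user space test along those lines might look like this
(the file name is only a placeholder): do a thousand random 4k reads from
a big file and print the average latency, once on an idle box and once
with a tar of the disk running in another shell:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "bigfile.db";   /* placeholder */
    char buf[4096];
    struct stat st;
    struct timeval t0, t1;
    long i, blocks, usecs;
    int fd = open(path, O_RDONLY);

    if (fd < 0 || fstat(fd, &st) < 0) {
        perror(path);
        return 1;
    }
    blocks = st.st_size / sizeof(buf);
    if (blocks == 0) {
        fprintf(stderr, "%s: file too small\n", path);
        return 1;
    }
    srand(getpid());

    gettimeofday(&t0, NULL);
    for (i = 0; i < 1000; i++) {
        off_t block = rand() % blocks;              /* random 4k block */
        if (lseek(fd, block * (off_t) sizeof(buf), SEEK_SET) < 0 ||
            read(fd, buf, sizeof(buf)) < 0)
            perror("read");
    }
    gettimeofday(&t1, NULL);

    usecs = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
    printf("%ld us per random 4k read\n", usecs / 1000);
    close(fd);
    return 0;
}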

>
> The buffer cache should grow to fill available memory, it should even grow
> to cause executable data/code to be paged out, but only when there is a
> high likelihood that the pages already in the buffer cache will be used
> again in the near future.

Just doing so would not impair responsiveness. I have checked a bit further
on my initial post, and it turned out that all of the cache was filled with
pages that were never reused (as it should be), but a lot of the swapped-out
pages had to be swapped back in during the run of the test, because the
applications happened to be asleep just a tad too long to stay in memory.
They were actually waiting for some I/O that was slow because of the tar
going on, got swapped out (which was slow because of the I/O), and then had
to be swapped back in because of the I/O...
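
The "high likelihood" condition quoted above is the interesting part. One
conceivable way to approximate it -- invented names again, just a sketch --
would be to keep a decaying hit/miss count for the buffer cache and only
let it steal program pages while the recent hit rate is above some
threshold:

#define GROW_HIT_THRESHOLD  60      /* percent, pulled out of thin air */

struct cache_stats {
    unsigned long recent_hits;
    unsigned long recent_misses;
};

/* May the buffer cache grow at the expense of program pages right now? */
static int cache_may_steal_program_pages(struct cache_stats *s)
{
    unsigned long total = s->recent_hits + s->recent_misses;

    if (total == 0)
        return 0;                   /* nothing known, don't steal */
    return (s->recent_hits * 100) / total >= GROW_HIT_THRESHOLD;
}

/* Called from a periodic timer so that old history fades away. */
static void cache_stats_decay(struct cache_stats *s)
{
    s->recent_hits >>= 1;
    s->recent_misses >>= 1;
}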

>
> Jim
>
> (Former kernel hacker on a now-defunct operating system that was ahead of
> its time...)
Well, I would say that this OS had the right idea. Why not implement such
a scheme for Linux? This sounds like a fun project for my spare time, so
count me in on this...
Btw, wasn't PRIMOS more or less a real-time OS which was abused by some
companies to do accounting (which sucked because transactions were a pain
in the neck on those machines)? I faintly remember working with such
machines about a decade ago...

greetings
Karl Günter Wünsch

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu