This would be a good test case for any algorithm change. As long as there
are free memory pages, the buffer cache would continue to expand. Since
header files would be referenced more often, they would presumably have
priority over regular source files that were referenced only once; LRU
should handle that.
When free memory is no longer available, is it better to get rid of
previously compiled C program files in the buffer cache, or to start
paging out process memory?
Another scenario: you are editing in Emacs and stop to think a few
minutes. The guy next to you decides to compile the kernel. Should the
buffer cache steal your Emacs text/data pages so that it can fill memory
with every C file in the kernel? My suggestion wasn't to throw out
sequentially accessed files as soon as possible. But going on a strictly
LRU algorithm, your Emacs pages would be gone and lots of C file buffers
that will never be used again would be in memory, assuming of course
that there isn't room for everything.
>The (obvious) point is that the usage frequency matters far more than
>the occasional repositioning. Unless of course, you are talking about
>adding a new flag to the open mode (like NT's FILE_FLAG_NO_BUFFERING or
>FILE_FLAG_SEQUENTIAL_SCAN or FILE_FLAG_RANDOM_ACCESS), rather than
>modifying the default behaviour. Check Albert Cahalan's wish list.
The flag sounds like a good idea, but setting it heuristically, or
incrementing a counter in lseek, would be more effective than a new
open-mode flag.
Can you give me a pointer to Albert's wish list?