Re: [RFC] - Some notions that I would like comments on

Jamie Lokier (lkd@tantalophile.demon.co.uk)
Sun, 18 Jul 1999 21:07:12 +0200


Chuck Lever wrote:
> as i understand it, when a page fault occurs and the requested page isn't
> already in the page cache, the whole cluster is read in. however, the
> read operations are non-blocking -- after all the reads are scheduled,
> filemap_nopage waits for the specific requested page.
>
> so, if you want a page fault to trigger the next cluster too, a way of
> doing that easily with the current code base is to schedule all the reads
> for the current cluster, then schedule all the reads for the next cluster,
> then wait for the requested page. that's almost identical to doubling the
> cluster size.
>
> however, if the cluster size is 128k, and the requested page is in the
> second half of the cluster, then you've "read behind." on the other hand
> by triggering the next cluster, you have 128k of potentially more
> interesting data, since more fresh data is likely to be ahead of the
> current page request.
>
> i may have missed your point, though.

From this & your next message I think you have :)

It's not about the size of clusters or having potentially more data in
cache already. It's about having _all_ the data in cache before it's
needed no matter how large the file.

This is done by tracking soft page faults -- when a page that is
_already_ in cache is mapped into the process on a fault. In other
words, the found_page case is the interesting one here. no_cached_page
is also interesting, but it's never reached once sequential readahead
settles into a steady state!
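
Roughly, the bookkeeping I have in mind looks like the toy below. It's a
userspace sketch, not a patch; ra_state, async_read_cluster and fault()
are names invented for illustration, not the real mm/filemap.c entry
points.

/*
 * Toy model of soft-fault driven readahead.  All names are invented;
 * this is not the real filemap_nopage() path.
 */

#include <stdio.h>

#define CLUSTER_PAGES 8         /* pages submitted per readahead chunk */

struct ra_state {
        long next_readahead;    /* first page index not yet scheduled */
};

/* Stand-in for submitting CLUSTER_PAGES asynchronous reads at 'start'. */
static void async_read_cluster(long start)
{
        printf("  submit async reads for pages %ld..%ld\n",
               start, start + CLUSTER_PAGES - 1);
}

/*
 * Fault handler.  'page' is the faulting page index; 'cached' says
 * whether it was already in the page cache (a soft fault).  The point
 * is that the soft fault is what pushes the window forward, so by the
 * time the process actually touches page N, the reads for the pages
 * beyond it were submitted long ago and have (hopefully) completed.
 */
static void fault(struct ra_state *ra, long page, int cached)
{
        if (cached)
                printf("page %ld: soft fault, no wait\n", page);
        else
                printf("page %ld: hard fault, must wait for I/O\n", page);

        /* Getting close to the edge of the window?  Extend it. */
        if (page + CLUSTER_PAGES >= ra->next_readahead) {
                async_read_cluster(ra->next_readahead);
                ra->next_readahead += CLUSTER_PAGES;
        }
}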

The net result is that once the readahead window opens up enough,
filemap_nopage never waits on I/O, not even once per cluster. With the
code in place, it might even be worth using a readahead chunk smaller
than the random-access readaround cluster size: more I/Os, but less
memory used.
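
For what it's worth, driving the toy above sequentially shows the steady
state: only the very first fault blocks, and every later fault finds its
page submitted a cluster or more in advance.

int main(void)
{
        struct ra_state ra = { 0 };
        long page;

        for (page = 0; page < 32; page++) {
                /*
                 * In this toy a page counts as cached once some earlier
                 * fault submitted it, i.e. the I/O is assumed to have
                 * completed in time.
                 */
                int cached = page < ra.next_readahead;

                fault(&ra, page, cached);
        }
        return 0;
}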

-- Jamie
