> Hi John,
>
> On Fri, Jul 27, 2012 at 8:57 PM, John Stultz <john.stultz@xxxxxxxxxx> wrote:
>> So after not getting too much positive feedback on my last
>> attempt at trying to use a non-shrinker method for managing
>> & purging volatile ranges, I decided I'd go ahead and try
>> to implement something along Minchan's ERECLAIM LRU list
>> idea.
>
> Agree that there hasn't been much feedback from MM folks yet - sorry
> about that :/
>
> I think one issue might be that most people don't have a good
> background on how the feature is intended to be used, and it is very
> difficult to comment meaningfully without that.
>
> As for myself, I have been wondering:
>
> - Why the feature needs to be on a per-range basis, rather than
> per-file. Is this simply to make it easier to transition the android
> use case from whatever they are doing right now, or is it that the
> object boundaries within a file can't be known in advance, and thus
> one wouldn't know how to split objects across different files? Or
> could it be that some of the objects would be small (less than a page)
> so space use would be inefficient if they were placed in different
> files? Or just that there would be too many files for efficient
> management?

For me, keeping the feature per-range instead of something like
per-file is in order to be able to support Android's existing use
case.
> - What are the desired semantics for the volatile objects. Can the
> objects be accessed while they are marked as volatile, or do they
> have to get unmarked first?

So accessing a volatile page before marking it non-volatile can
produce undefined behavior. You could get the data that was there, or
you could get empty pages. The expectation is that pages are unmarked
before being accessed, so one can know if the data was lost or not.
I'm open to other suggestions here, if folks think we should SIGSEGV
on accesses to volatile pages. However, I don't know how setting that
up and tearing it down on each mark_volatile/unmark_volatile might
affect performance.
> Is it really the case that we always want to reclaim from volatile
> objects first, before any other kind of caches we might have? This
> sounds like a very strong hint, and I think I would be more
> comfortable with something more subtle if that's possible.

So the current Android ashmem implementation uses a shrinker, which
isn't necessarily called before any other caches are freed. So I
don't think it's a strong hint, but it just seems somewhat intuitive
to me that we should free effectively "user-donated" pages before
freeing other system caches. But that's not something the interface
necessarily defines or requires.
> Also, if we have several volatile objects to reclaim from, is it
> desirable to reclaim from the one that's been marked volatile the
> longest or does it make no difference?
>
> When an object is marked volatile, would it be sufficient to ensure
> it gets placed on the inactive list (maybe with the referenced bit
> cleared) and let the normal reclaim algorithm get to it, or is that
> an insufficiently strong hint somehow?

While I don't think it's strictly necessary, I do think LRU-order
purging is important from the least-surprise angle. Since ranges
marked volatile should not be touched until they are marked
non-volatile, it follows normal expectations that recently touched
data is likely to be faster than data that has not been accessed for
some time. Reasonable exceptions would be situations like NUMA
systems where pressure on one node forces purging volatile pages in
non-global-LRU order. So probably not critical, but I think it's
useful to try to preserve.
> Basically, having some background information on how Android would
> be using the feature would help us better understand the design
> decisions here, I think.

Hopefully the details above help, and I'll try to get some more
concrete examples from the Android code base.