Re: [PATCH] vmscan: check all_unreclaimable in direct reclaim path

From: Minchan Kim
Date: Sun Sep 12 2010 - 12:21:00 EST


Thanks, Dave.

On Fri, Sep 10, 2010 at 5:24 PM, Dave Young <hidave.darkstar@xxxxxxxxx> wrote:
> On Thu, Sep 9, 2010 at 6:19 AM, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>> On Thu, 9 Sep 2010 00:45:27 +0900
>> Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
>>
>>> +static inline bool zone_reclaimable(struct zone *zone)
>>> +{
>>> +     return zone->pages_scanned < zone_reclaimable_pages(zone) * 6;
>>> +}
>>> +
>>> +static inline bool all_unreclaimable(struct zonelist *zonelist,
>>> +             struct scan_control *sc)
>>> +{
>>> +     struct zoneref *z;
>>> +     struct zone *zone;
>>> +     bool all_unreclaimable = true;
>>> +
>>> +     if (!scanning_global_lru(sc))
>>> +             return false;
>>> +
>>> +     for_each_zone_zonelist_nodemask(zone, z, zonelist,
>>> +                     gfp_zone(sc->gfp_mask), sc->nodemask) {
>>> +             if (!populated_zone(zone))
>>> +                     continue;
>>> +             if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
>>> +                     continue;
>>> +             if (zone_reclaimable(zone)) {
>>> +                     all_unreclaimable = false;
>>> +                     break;
>>> +             }
>>> +     }
>>> +
>>> +     return all_unreclaimable;
>>> +}
>>
>> Could we have some comments over these functions please?  Why they
>> exist, what problem they solve, how they solve them, etc.  Stuff which
>> will be needed for maintaining this code three years from now.
>>
>> We may as well remove the `inline's too.  gcc will take care of that.
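
Fair enough. Below is roughly what I have in mind for the next spin: drop
the inlines and put a block comment over the new helper. The code is the
same as in the hunk above; only the comments are new, and their wording is
an untested sketch that I'm happy to adjust. zone_reclaimable() would get
its own comment once the story behind the "6" is written down (see further
down in this mail).

/*
 * As the name suggests: has every zone in the zonelist that this reclaim
 * is allowed to touch stopped making reclaim progress?  A zone still
 * counts as reclaimable while the number of pages scanned stays below
 * six times its reclaimable pages (see zone_reclaimable()).  The direct
 * reclaim path uses this to decide whether it is worth continuing or
 * whether we should give up and let the OOM killer run.
 */
static bool all_unreclaimable(struct zonelist *zonelist,
                struct scan_control *sc)
{
        struct zoneref *z;
        struct zone *zone;
        bool all_unreclaimable = true;

        /* Memcg (non-global) reclaim never gives up on the whole zonelist. */
        if (!scanning_global_lru(sc))
                return false;

        for_each_zone_zonelist_nodemask(zone, z, zonelist,
                        gfp_zone(sc->gfp_mask), sc->nodemask) {
                /* Skip zones this allocation could not use anyway. */
                if (!populated_zone(zone))
                        continue;
                if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
                        continue;
                if (zone_reclaimable(zone)) {
                        all_unreclaimable = false;
                        break;
                }
        }

        return all_unreclaimable;
}
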
>>
>>> -             if (nr_slab == 0 &&
>>> -                zone->pages_scanned >= (zone_reclaimable_pages(zone) * 6))
>>> +             if (nr_slab == 0 && !zone_reclaimable(zone))
>>
>> Extra marks for working out and documenting how we decided on the value
>> of "6".  Sigh.  It's hopefully in the git record somewhere.
>
> Here it is (is it necessary to add an additional comment?):
>
> commit 4ff1ffb4870b007b86f21e5f27eeb11498c4c077
> Author: Nick Piggin <npiggin@xxxxxxx>
> Date:   Mon Sep 25 23:31:28 2006 -0700
>
>    [PATCH] oom: reclaim_mapped on oom
>
>    Potentially it takes several scans of the lru lists before we can even start
>    reclaiming pages.
>
>    mapped pages, with young ptes can take 2 passes on the active list + one on
>    the inactive list.  But reclaim_mapped may not always kick in instantly,
>    so it could take even more than that.
>
>    Raise the threshold for marking a zone as all_unreclaimable from a factor
>    of 4 time the pages in the zone to 6.  Introduce a mechanism to force
>    reclaim_mapped if we've reached a factor 3 and still haven't made progress.
>
>    Previously, a customer doing stress testing was able to easily OOM the box
>    after using only a small fraction of its swap (~100MB).  After the
>    patches, it would only OOM after having used up all swap (~800MB).
>
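
Thanks for digging that out. I think it is worth capturing in the code as
well, not only in the changelog, so how about a comment like the following
over zone_reclaimable()? The wording is my own summary of Nick's changelog
above, so treat it as a sketch:

/*
 * A zone is still considered reclaimable as long as we have scanned
 * fewer than six times its reclaimable pages.  The factor of 6 comes
 * from commit 4ff1ffb4870b ("[PATCH] oom: reclaim_mapped on oom"):
 * mapped pages with young ptes can need two passes over the active
 * list plus one over the inactive list before they are freed, and
 * reclaim_mapped may not kick in right away, so the old factor of 4
 * marked zones all_unreclaimable too early and led to premature OOMs.
 */
static bool zone_reclaimable(struct zone *zone)
{
        return zone->pages_scanned < zone_reclaimable_pages(zone) * 6;
}
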
>
>
>
> --
> Regards
> dave
>



--
Kind regards,
Minchan Kim