Re: [PATCH] mm,vmscan: fix divide by zero in get_scan_count

From: Rik van Riel
Date: Tue Aug 31 2021 - 11:48:38 EST


On Tue, 2021-08-31 at 11:59 +0200, Michal Hocko wrote:
> On Mon 30-08-21 16:48:03, Johannes Weiner wrote:
>
> > Or go back to not taking the branch in the first place when there
> > is
> > no protection in effect...
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 6247f6f4469a..9c200bb3ae51 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2547,7 +2547,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
> >                 mem_cgroup_protection(sc->target_mem_cgroup, memcg,
> >                                       &min, &low);
> >  
> > -               if (min || low) {
> > +               if (min || (!sc->memcg_low_reclaim && low)) {
> >                         /*
> >                          * Scale a cgroup's reclaim pressure by proportioning
> >                          * its current usage to its memory.low or memory.min
>
> This is slightly more complex to read but it is also better than +1
> trick.

We could also fold it into the helper function, with
mem_cgroup_protection deciding whether to use low or
min as the protection limit, and then key the rest of
our decisions off that.
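
Something along these lines, as a rough sketch only (from
memory, not a tested patch), with the helper handing back a
single effective protection value:

	unsigned long protection;

	/* helper picks memory.min vs. memory.low internally */
	protection = mem_cgroup_protection(sc->target_mem_cgroup, memcg,
					   sc->memcg_low_reclaim);

	if (protection) {
		unsigned long cgroup_size = mem_cgroup_size(memcg);

		/* guard the division, and cap protection at usage */
		cgroup_size = max(cgroup_size, protection);

		scan = lruvec_size - lruvec_size * protection /
			cgroup_size;
	}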

Wait a minute, that's pretty much what mem_cgroup_protection
looked like before f56ce412a59d ("mm: memcontrol: fix occasional
OOMs due to proportional memory.low reclaim").

Now I'm confused about how that changeset works.

Before f56ce412a59d, mem_cgroup_protection would return
memcg->memory.emin if sc->memcg_low_reclaim is true, and
memcg->memory.elow when not.
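
I.e., roughly this shape, from memory (leaving out the
mem_cgroup_disabled() and root-cgroup checks):

	static inline unsigned long
	mem_cgroup_protection(struct mem_cgroup *root,
			      struct mem_cgroup *memcg,
			      bool in_low_reclaim)
	{
		/* in low reclaim, only memory.min still protects */
		if (in_low_reclaim)
			return READ_ONCE(memcg->memory.emin);

		return max(READ_ONCE(memcg->memory.emin),
			   READ_ONCE(memcg->memory.elow));
	}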

After f56ce412a59d, we still do the same thing. We just
also set sc->memcg_low_skipped so we know to come back
if we could not hit our target without skipping groups
with memory.low protection...
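
That is, after f56ce412a59d the low-vs-min decision in
get_scan_count looks roughly like this (from memory,
simplified):

	/* memory.low scaling, make sure we retry before OOM */
	if (!sc->memcg_low_reclaim && low > min) {
		protection = low;
		sc->memcg_low_skipped = 1;
	} else {
		protection = min;
	}

and do_try_to_free_pages can then come back for one more
pass with the low limits ignored:

	if (sc->memcg_low_skipped) {
		sc->priority = initial_priority;
		sc->memcg_low_reclaim = 1;
		sc->memcg_low_skipped = 0;
		goto retry;
	}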

--
All Rights Reversed.
