Re: [PATCH 0/3] OOM detection rework v4

From: Michal Hocko
Date: Fri Mar 04 2016 - 10:16:08 EST


On Fri 04-03-16 14:23:27, Joonsoo Kim wrote:
> On Thu, Mar 03, 2016 at 04:25:15PM +0100, Michal Hocko wrote:
> > On Thu 03-03-16 23:10:09, Joonsoo Kim wrote:
> > > 2016-03-03 18:26 GMT+09:00 Michal Hocko <mhocko@xxxxxxxxxx>:
[...]
> > > >> I guess that the usual case of high-order allocation failure still has enough free pages.
> > > >
> > > > Not sure I understand what you mean here, but I wouldn't be surprised
> > > > if a high-order allocation failed even with enough free pages. And that
> > > > is exactly why I am claiming that reclaiming more pages is no free
> > > > ticket to high-order pages.
> > >
> > > I didn't say that it's a free ticket. An OOM kill would be the most
> > > expensive ticket that we have. Why do you want to kill something?
> >
> > Because all the attempts so far have failed and we should rather not
> > retry endlessly. With the band-aid we know we will retry
> > MAX_RECLAIM_RETRIES times at most. So compaction had that many attempts
> > to resolve the situation, along with the same number of reclaim rounds
> > to help and get over the watermarks.
> >
> > > It also doesn't guarantee that high-order pages are made. It is just
> > > another way of reclaiming memory. What is the difference between plain
> > > reclaim and an OOM kill? Why do we use an OOM kill in this case?
> >
> > What is our alternative other than keep looping endlessly?
>
> Loop as long as free memory or estimated available memory (free +
> reclaimable) increases. This means that we made some progress. And
> they will not grow forever, because we have only limited reclaimable
> memory and limited memory overall. You can reset no_progress_loops = 0
> whenever those metrics increase beyond their previous values.

Hmm, why is this any better than taking the feedback from the reclaim
(did_some_progress)?
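For reference, the heuristic proposed above amounts to something like the
following userspace sketch. This is NOT the actual mm/page_alloc.c code;
should_retry and its parameters are illustrative names only:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_RECLAIM_RETRIES 16

/*
 * Sketch of the proposed heuristic: keep retrying as long as the
 * estimated available memory (free + reclaimable) keeps growing,
 * resetting the no-progress counter on each increase. Only when the
 * estimate stops growing for MAX_RECLAIM_RETRIES rounds do we give up.
 */
static bool should_retry(unsigned long available,
			 unsigned long *last_available,
			 int *no_progress_loops)
{
	if (available > *last_available) {
		/* measurable progress: start counting from scratch */
		*no_progress_loops = 0;
	} else if (++*no_progress_loops >= MAX_RECLAIM_RETRIES) {
		/* no forward progress for too long: OOM path is next */
		return false;
	}
	*last_available = available;
	return true;
}
```

Since available memory is bounded, the counter cannot be reset forever, so
the loop still terminates; the open question is whether this is any better
a signal than did_some_progress.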

> With this bound, we can do our best to try to solve this unpleasant
> situation before OOM.
>
> Looping 16 times unconditionally and then OOM killing really doesn't
> make any sense, because it doesn't mean that we have already done our
> best.

16 is not really that important. We can change it if that doesn't
sound sufficient. But please note that each reclaim round means
that we have scanned all eligible LRUs to find and reclaim something
and asked direct compaction to prepare a high order page.
This sounds like "do our best" to me.

Now it seems that we need more changes, at least in the compaction
area, because the code doesn't seem to fit the nature of !costly
allocation requests. I am also not satisfied with the fixed
MAX_RECLAIM_RETRIES for high-order pages; I would much rather see some
feedback mechanism which could be measured and evaluated in some way.
But is that really necessary for the initial version?
--
Michal Hocko
SUSE Labs