Re: [RFC PATCH 1/2] mm, vmscan: account the number of isolated pages per zone

From: Michal Hocko
Date: Tue Feb 21 2017 - 04:40:53 EST


On Fri 03-02-17 19:57:39, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Mon 30-01-17 09:55:46, Michal Hocko wrote:
> > > On Sun 29-01-17 00:27:27, Tetsuo Handa wrote:
> > [...]
> > > > Regarding [1], it helped avoid the too_many_isolated() issue. I can't
> > > > tell whether it has any negative effect, but on the first trial I saw that
> > > > all allocating threads were blocked on wait_for_completion() from flush_work()
> > > > in drain_all_pages() introduced by "mm, page_alloc: drain per-cpu pages from
> > > > workqueue context". There was no warn_alloc() stall warning message afterwards.
> > >
> > > That patch is buggy and there is a follow up [1] which is not in
> > > mmotm (and thus linux-next) yet. I didn't get to review it properly and
> > > I cannot say I would be too happy about using a WQ from the page
> > > allocator. I believe even the follow up needs a WQ_RECLAIM workqueue.
> > >
> > > [1] http://lkml.kernel.org/r/20170125083038.rzb5f43nptmk7aed@xxxxxxxxxxxxxxxxxxx
> >
> > Did you get a chance to test with this follow-up patch? It would be
> > interesting to see whether an OOM situation can still starve the waiter.
> > The current linux-next should contain this patch.
>
> So far I can't reproduce any problems except the two listed below (the
> cond_resched() trap in printk() and the IDLE priority trap are excluded from the list).

OK, so it seems that all the distractions are handled now and linux-next
should provide a reasonable base for testing. You said you weren't able
to reproduce the original long stalls on too_many_isolated(). I would
still be interested in seeing those oom reports and any anomalies in the
isolated counts before I send the patch for inclusion, so your further
testing would be more than appreciated. Stalls longer than 10s without
any previous occurrences would also be interesting.
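
For context, the stall the patch is aimed at comes from direct reclaimers
looping while the isolated-page counters exceed the inactive counters on a
node. Below is a small stand-alone model of that heuristic as I understand
it (the structure and names like node_counters/too_many_isolated_model are
made up for illustration, and the >>3 relaxation is a simplification of
mm/vmscan.c, not the exact kernel code):

/*
 * Stand-alone model of the too_many_isolated() heuristic discussed in
 * this thread (a simplification for illustration, not the kernel code).
 */
#include <stdbool.h>
#include <stdio.h>

struct node_counters {
	unsigned long nr_inactive;	/* NR_INACTIVE_FILE or NR_INACTIVE_ANON */
	unsigned long nr_isolated;	/* NR_ISOLATED_FILE or NR_ISOLATED_ANON */
};

/*
 * Direct reclaimers are throttled (sleep and retry) while the number of
 * isolated pages exceeds the number of inactive pages.  Callers that can
 * do both IO and FS are throttled earlier so that GFP_NOIO/GFP_NOFS
 * callers do not get stuck behind them.
 */
static bool too_many_isolated_model(const struct node_counters *nc,
				    bool can_do_io_fs)
{
	unsigned long inactive = nc->nr_inactive;

	if (can_do_io_fs)
		inactive >>= 3;

	return nc->nr_isolated > inactive;
}

int main(void)
{
	struct node_counters nc = { .nr_inactive = 1024, .nr_isolated = 2048 };

	/*
	 * More isolated than inactive pages: a GFP_KERNEL-like direct
	 * reclaimer would keep looping on this check at this point.
	 */
	printf("throttled: %d\n", too_many_isolated_model(&nc, true));
	return 0;
}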

Thanks!
--
Michal Hocko
SUSE Labs