Re: Deadlock possibly caused by too_many_isolated.

From: Wu Fengguang
Date: Tue Sep 14 2010 - 23:13:57 EST


On Wed, Sep 15, 2010 at 11:06:40AM +0800, Wu Fengguang wrote:
> On Wed, Sep 15, 2010 at 10:54:54AM +0800, Wu Fengguang wrote:
> > On Wed, Sep 15, 2010 at 10:37:35AM +0800, Wu Fengguang wrote:
> > > On Wed, Sep 15, 2010 at 10:23:34AM +0800, Neil Brown wrote:
> > > > On Tue, 14 Sep 2010 20:30:18 -0400
> > > > Rik van Riel <riel@xxxxxxxxxx> wrote:
> > > >
> > > > > On 09/14/2010 07:11 PM, Neil Brown wrote:
> > > > >
> > > > > > Index: linux-2.6.32-SLE11-SP1/mm/vmscan.c
> > > > > > ===================================================================
> > > > > > --- linux-2.6.32-SLE11-SP1.orig/mm/vmscan.c 2010-09-15 08:37:32.000000000 +1000
> > > > > > +++ linux-2.6.32-SLE11-SP1/mm/vmscan.c 2010-09-15 08:38:57.000000000 +1000
> > > > > > @@ -1106,6 +1106,11 @@ static unsigned long shrink_inactive_lis
> > > > > >                  /* We are about to die and free our memory. Return now. */
> > > > > >                  if (fatal_signal_pending(current))
> > > > > >                          return SWAP_CLUSTER_MAX;
> > > > > > +                if (!(sc->gfp_mask & __GFP_IO))
> > > > > > +                        /* Not allowed to do IO, so mustn't wait
> > > > > > +                         * on processes that might try to
> > > > > > +                         */
> > > > > > +                        return SWAP_CLUSTER_MAX;
> > > > > >          }
> > > > > >
> > > > > > /*
> > > > >
> > > > > Close. We must also be sure that processes without __GFP_FS
> > > > > set in their gfp_mask do not wait on processes that do have
> > > > > __GFP_FS set.
> > > > >
> > > > > Considering how many times we've run into a bug like this,
> > > > > I'm kicking myself for not having thought of it :(
> > > > >
> > > >
> > > > So maybe this? I've added the test for __GFP_FS, and moved the test before
> > > > the congestion_wait on the basis that we really want to get back up the stack
> > > > and try the mempool ASAP.
> > >
> > > The patch may well fail the !__GFP_IO page allocation and then
> > > quickly exhaust the mempool.
> > >
> > > Another approach may be to let too_many_isolated() use much higher
> > > thresholds for !__GFP_IO/FS allocations and lower ones for __GFP_IO/FS,
> > > i.e. to allow at least nr2 NOIO/FS tasks to be blocked independently of
> > > the IO/FS ones. Since NOIO vmscans typically complete fast, it will
> > > then be very hard to accumulate enough NOIO processes to actually be
> > > blocked.
> > >
> > >
> > >  IO/FS tasks        NOIO/FS tasks              full
> > >  block here         block here                 LRU size
> > > |-----------------|--------------------------|-----------------------|
> > > |       nr1       |            nr2           |
> >
> > How about this fix? We may need a very high threshold for NOIO/NOFS to
> > prevent possible regressions.
>
> Plus __GFP_WAIT..

Ah sorry! Allocations without __GFP_WAIT cannot afford to wait by definition..
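
To spell out the intended policy before the patch: an allocation that cannot
sleep at all (no __GFP_WAIT, e.g. GFP_ATOMIC) must never be throttled here,
and a NOIO/NOFS reclaimer should only be throttled at a much higher isolation
level than a normal one. A rough standalone sketch of that decision follows;
it is an illustration only -- should_throttle() is a made-up helper name, the
kswapd/memcg cases are left out, and the 1:8 ratio simply mirrors the patch
below:

#include <linux/gfp.h>  /* gfp_t, __GFP_WAIT, __GFP_IO, __GFP_FS */

/* Illustration only, not the actual kernel code. */
static int should_throttle(gfp_t gfp_mask,
                           unsigned long isolated, unsigned long inactive)
{
        if (!(gfp_mask & __GFP_WAIT))           /* may not sleep at all */
                return 0;
        if (gfp_mask & (__GFP_IO | __GFP_FS))   /* ordinary direct reclaim */
                return isolated > inactive;
        /* NOIO/NOFS: tolerate far more isolated pages before blocking */
        return isolated > inactive * 8;
}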

---
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 225a759..becc63a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1135,10 +1135,14 @@ static int too_many_isolated(struct zone *zone, int file,
                 struct scan_control *sc)
 {
         unsigned long inactive, isolated;
+        int ratio;
 
         if (current_is_kswapd())
                 return 0;
 
+        if (!(sc->gfp_mask & __GFP_WAIT))
+                return 0;
+
         if (!scanning_global_lru(sc))
                 return 0;
 
@@ -1150,7 +1154,9 @@ static int too_many_isolated(struct zone *zone, int file,
                 isolated = zone_page_state(zone, NR_ISOLATED_ANON);
         }
 
-        return isolated > inactive;
+        ratio = sc->gfp_mask & (__GFP_IO | __GFP_FS) ? 1 : 8;
+
+        return isolated > inactive * ratio;
 }
 
 /*
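
For illustration, assuming 1000 inactive pages on the LRU in question (number
made up), the patched check behaves as follows:

  allocation context                       blocks in too_many_isolated() when
  ---------------------------------------  -----------------------------------
  kswapd, memcg reclaim, or no __GFP_WAIT  never (returns 0 immediately)
  __GFP_WAIT plus __GFP_IO or __GFP_FS     isolated > 1000
  __GFP_WAIT, neither __GFP_IO nor _FS     isolated > 8000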