Re: [PATCH] memcg: fix stale swap cache leak v5

From: KAMEZAWA Hiroyuki
Date: Thu Apr 30 2009 - 05:49:23 EST


On Thu, 30 Apr 2009 15:12:52 +0530
Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx> wrote:

> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> [2009-04-30 18:04:26]:
>
> > On Thu, 30 Apr 2009 16:35:39 +0900
> > KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >
> > > On Thu, 30 Apr 2009 16:16:27 +0900
> > > KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> > >
> > > > This is v5, but all the code has been rewritten.
> > > >
> > > > After this patch, when memcg is used:
> > > > 1. The page's swap count is checked after I/O (without locks). If the page
> > > > is a stale swap cache page, a freeing routine is scheduled.
> > > > 2. vmscan.c calls try_to_free_swap() when __remove_mapping() fails.
> > > >
> > > > Works well for me: no extra resources and no races.
> > > >
> > > > Because my office will be closed until May 7, I won't be able to
> > > > respond. I'm posting this to show what I have in mind now.
> > > >
> > > I found a hole immediately after posting this... sorry. Please ignore
> > > this patch; see you again next month.
> > >
> > I'm now wondering whether to disable swapin readahead completely when
> > memcg is used... Then half of the problems would go away immediately.
> > And it's not so bad to try to free the swap cache when swap writeback
> > ends; then the other half would go away...
> >
>
> Could you clarify? Will memcg not account for swapin readahead pages?
>
Swapin-readahead pages are _not_ accounted for now (and I think they never
will be). But there is a race that leaks the swp_entry accounting until the
global LRU runs.

"Don't do swapin readahead at all" would remove the following race completely:
==
CPU0                        CPU1
free_swap_and_cache()
                            read_swap_cache_async()
==
A swp_entry that is about to be freed will not be read in.

I think there will be no performance regression in the _usual_ case even
without readahead, but I have no numbers yet.


Thanks,
-Kame


