Re: [BUGFIX][mm][PATCH] fix migration race in rmap_walk
From: KAMEZAWA Hiroyuki
Date: Sun Apr 25 2010 - 22:58:13 EST
On Mon, 26 Apr 2010 08:49:01 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Sat, 24 Apr 2010 11:43:24 +0100
> Mel Gorman <mel@xxxxxxxxx> wrote:
> > It looks nice but it still broke after 28 hours of running. The
> > seq-counter is still insufficient to catch all changes that are made to
> > the list. I'm beginning to wonder if a) this really can be fully safely
> > locked with the anon_vma changes and b) if it has to be a spinlock to
> > catch the majority of cases but still a lazy cleanup if there happens to
> > be a race. It's unsatisfactory and I'm expecting I'll either have some
> > insight to the new anon_vma changes that allow it to be locked or Rik
> > knows how to restore the original behaviour which as Andrea pointed out
> > was safe.
> Ouch. Hmm, how about the race in fork() I pointed out?
Forget this. Sorry for the noise.
The following is a memo for myself.
*) at fork, when copying a file-backed vma, vma_prio_tree_add() is called
before the page tables are copied.
There are several patterns.
Assume tasks t1,t2,t3,t4,t5, each with its own vma v1,v2,v3,v4,v5 mapping
the same range of the file.
(a) t1 forks t2.
v1 is already in the prio_tree; v2 for t2 is pointed to by v1's ->head pointer.
v1 --(head)---> v2
vma_prio_tree_foreach() order : v1->v2.
(b) after (a), t2 forks t3 (list_add() is used).
v1 --(head)--> v2 ->(list.next)->v3
vma_prio_tree_foreach() order : v1->v2->v3
(c) after (b), t1 forks t4.
v1 --(head)--> v2 ->(list.next)->v3->v4
vma_prio_tree_foreach() order : v1->v2->v3->v4
(d) after (c), t4 forks t5.
v1 --(head)--> v2 ->(list.next)->v3->v4->v5
vma_prio_tree_foreach() order : v1->v2->v3->v4->v5
(e) after (c), t3 forks t5.
v1 --(head)--> v2 ->(list.next)->v3->v5->v4
vma_prio_tree_foreach() order : v1->v2->v3->v5->v4
.....in any case, it seems vma_prio_tree_foreach() always finds
the parent's vma first.