In swap_out() (mm/vmscan.c) we have the following code:
	spin_lock(&mmlist_lock);
	mm = swap_mm;
	while (mm->swap_address == TASK_SIZE || mm == &init_mm) {
		mm->swap_address = 0;
		mm = list_entry(mm->mmlist.next, struct mm_struct, mmlist);
		if (mm == swap_mm)
			goto empty;
		swap_mm = mm;
	}

	/* Make sure the mm doesn't disappear when we drop the lock.. */
	atomic_inc(&mm->mm_users);
	spin_unlock(&mmlist_lock);

	nr_pages = swap_out_mm(mm, nr_pages, &counter, classzone);

	mmput(mm);
And looking in fork.c, mmput() under the right circumstances becomes:
	kmem_cache_free(mm_cachep, (mm))
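For reference, the path as I read it (2.4 fork.c, quoted roughly from
memory, so check your tree): the final mmput() unhooks the mm from the
mmlist and then frees it:

	void mmput(struct mm_struct *mm)
	{
		if (atomic_dec_and_lock(&mm->mm_users, &mmlist_lock)) {
			list_del(&mm->mmlist);
			spin_unlock(&mmlist_lock);
			exit_mmap(mm);
			mmdrop(mm);	/* ends in kmem_cache_free(mm_cachep, mm) */
		}
	}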
So it appears that nothing keeps the mm_struct that swap_mm points to
valid: swap_mm itself is a bare pointer that holds no reference of its
own, so the mm can be freed out from under it.
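To spell the window out, as I read it:

	CPU0 (swap_out)                  CPU1 (last user of mm A exits)
	---------------                  ------------------------------
	                                 mmput(A)
	                                   mm_users falls to 0
	                                   list_del(&A->mmlist)
	                                   spin_unlock(&mmlist_lock)
	                                   kmem_cache_free(mm_cachep, A)
	spin_lock(&mmlist_lock)
	mm = swap_mm          /* still A, now freed */
	mm->swap_address ...  /* use after free */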
I guess the easy fix would be to increment the count on swap_mm, and
then do an mmput when we assign something else to swap_mm. But I don't
know if that is what we want.
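For concreteness, a sketch of that, untested and ignoring the empty:
path (which would need the same fixup). Since mmput() itself takes
mmlist_lock via atomic_dec_and_lock(), the put on the old mm has to
happen after the unlock:

	/* Sketch only.  Invariant: swap_mm always owns one mm_users
	 * reference.  mmput() can only take mm_users to zero with
	 * mmlist_lock held, so every mm on the list is stable while we
	 * hold the lock, and fixing up the pin for the net move is
	 * enough. */
	struct mm_struct *prev;

	spin_lock(&mmlist_lock);
	prev = swap_mm;			/* mm currently pinned by swap_mm */
	mm = swap_mm;
	while (mm->swap_address == TASK_SIZE || mm == &init_mm) {
		mm->swap_address = 0;
		mm = list_entry(mm->mmlist.next, struct mm_struct, mmlist);
		if (mm == swap_mm)
			goto empty;
		swap_mm = mm;
	}
	atomic_inc(&mm->mm_users);		/* reference for swap_out_mm() */
	if (mm != prev)
		atomic_inc(&mm->mm_users);	/* new pin owned by swap_mm */
	spin_unlock(&mmlist_lock);

	if (mm != prev)
		mmput(prev);		/* drop the old pin, outside the lock */

	nr_pages = swap_out_mm(mm, nr_pages, &counter, classzone);

	mmput(mm);

The downside is that a dead mm would linger on the mmlist (its mm_users
never reaches zero) until swap_mm moves past it, which may be why we
don't want it.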
Thoughts?
Eric