Re: [PATCH 2/6] mm: mlocking in try_to_unmap_one

From: Hugh Dickins
Date: Wed Nov 11 2009 - 06:36:24 EST


On Wed, 11 Nov 2009, KOSAKI Motohiro wrote:

Though it doesn't quite answer your question,
I'll just reinsert the last paragraph of my description here...

> > try_to_unmap_file()'s TTU_MUNLOCK nonlinear handling was particularly
> > amusing: once unravelled, it turns out to have been choosing between
> > two different ways of doing the same nothing. Ah, no, one way was
> > actually returning SWAP_FAIL when it meant to return SWAP_SUCCESS.

...
> > @@ -1081,45 +1053,23 @@ static int try_to_unmap_file(struct page
...
> >
> > - if (list_empty(&mapping->i_mmap_nonlinear))
> > + /* We don't bother to try to find the munlocked page in nonlinears */
> > + if (MLOCK_PAGES && TTU_ACTION(flags) == TTU_MUNLOCK)
> > goto out;
>
> I have a dumb question.
> Does this shortcut exit make any behavior change?

Not dumb. My intention was to make no behaviour change with any part
of this patch; but in checking back before completing the description,
I suddenly realized that that shortcut intentionally avoids the

	if (max_nl_size == 0) {	/* all nonlinears locked or reserved ? */
		ret = SWAP_FAIL;
		goto out;
	}

(which doesn't show up in the patch: you'll have to look at rmap.c),
which used to have the effect of try_to_munlock() returning SWAP_FAIL
in the case when there were one or more VM_NONLINEAR vmas of the file,
but none of them (and none of the covering linear vmas) VM_LOCKED.
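
To see why, here's roughly how the old nonlinear pass went (a trimmed
and paraphrased sketch, not the exact rmap.c text): when munlocking,
any nonlinear vma that isn't VM_LOCKED is simply skipped, so
max_nl_size is never updated, and the max_nl_size == 0 check quoted
above then turns "nothing locked here" into SWAP_FAIL:

	list_for_each_entry(vma, &mapping->i_mmap_nonlinear,
						shared.vm_set.list) {
		if (MLOCK_PAGES && TTU_ACTION(flags) == TTU_MUNLOCK) {
			if (!(vma->vm_flags & VM_LOCKED))
				continue;	/* skipped: max_nl_size stays 0 */
			ret = SWAP_MLOCK;	/* page is mlocked in this vma */
			break;
		}
		cursor = vma->vm_end - vma->vm_start;
		if (cursor > max_nl_size)
			max_nl_size = cursor;
	}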

That should have been a SWAP_SUCCESS case, or with my changes
another SWAP_AGAIN, either of which would make munlock_vma_page() do
	count_vm_event(UNEVICTABLE_PGMUNLOCKED);
which would be correct; but the SWAP_FAIL meant that the count was not
incremented in this case.

Actually, I've double-fixed that, because I also changed
munlock_vma_page() to increment the count whenever ret != SWAP_MLOCK,
which seemed more appropriate, but would have been a no-op if
try_to_munlock() only returned SWAP_SUCCESS or SWAP_AGAIN or SWAP_MLOCK
as it claimed.
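
For concreteness, the counting decision in munlock_vma_page() looks
roughly like this (paraphrased sketch, not the exact mlock.c text,
with the condition before and after my change shown together):

	if (!isolate_lru_page(page)) {
		int ret = try_to_munlock(page);
		/*
		 * Old check: only SWAP_SUCCESS and SWAP_AGAIN were counted,
		 * so the bogus SWAP_FAIL above dodged the count:
		 *
		 *	if (ret == SWAP_SUCCESS || ret == SWAP_AGAIN)
		 *
		 * New check: count unless the page is still mlocked somewhere.
		 */
		if (ret != SWAP_MLOCK)
			count_vm_event(UNEVICTABLE_PGMUNLOCKED);
		putback_lru_page(page);
	}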

But I wasn't very inclined to boast of fixing that bug, since my testing
didn't give confidence that those /proc/vmstat unevictable_pgs_*lock*
counts are being properly maintained anyway: when I locked the same
pages in two vmas and then unlocked them in both, I ended up with mlocked
bigger than munlocked (with or without my 2/6 patch), which I suspect
is wrong, but rather off my present course towards KSM swapping...

Hugh