Firstly, this patch (to be squashed into the previous one) tries to document
in page_vma_mapped_walk() why no further lock needs to be taken before
calling hugetlb_walk().
To call hugetlb_walk() we need to hold either of the locks listed below (in
either read or write mode), according to the rules we set up for it in patch
3 (see the sketch after the list):
(1) hugetlb vma lock
(2) i_mmap_rwsem lock
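To illustrate, the rule could look like the below when enforced with a
lockdep assertion inside hugetlb_walk().  This is only a minimal sketch
based on what patch 3 proposes; helper names like __vma_shareable_flags_pmd()
may differ in the final version:

        static inline pte_t *
        hugetlb_walk(struct vm_area_struct *vma, unsigned long addr,
                     unsigned long sz)
        {
                struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
                struct address_space *mapping = vma->vm_file->f_mapping;

                /*
                 * When pmd sharing is possible, walking hugetlb pgtables
                 * is only safe with either the hugetlb vma lock or the
                 * i_mmap_rwsem held (read or write mode both work).
                 */
                if (__vma_shareable_flags_pmd(vma))
                        WARN_ON_ONCE(!lockdep_is_held(&vma_lock->rw_sema) &&
                                     !lockdep_is_held(&mapping->i_mmap_rwsem));

                return huge_pte_offset(vma->vm_mm, addr, sz);
        }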
page_vma_mapped_walk() is called from the following sites across the kernel:
__replace_page[179] if (!page_vma_mapped_walk(&pvmw))
__damon_pa_mkold[24] while (page_vma_mapped_walk(&pvmw)) {
__damon_pa_young[97] while (page_vma_mapped_walk(&pvmw)) {
write_protect_page[1065] if (!page_vma_mapped_walk(&pvmw))
remove_migration_pte[179] while (page_vma_mapped_walk(&pvmw)) {
page_idle_clear_pte_refs_one[56] while (page_vma_mapped_walk(&pvmw)) {
page_mapped_in_vma[318] if (!page_vma_mapped_walk(&pvmw))
folio_referenced_one[813] while (page_vma_mapped_walk(&pvmw)) {
page_vma_mkclean_one[958] while (page_vma_mapped_walk(pvmw)) {
try_to_unmap_one[1506] while (page_vma_mapped_walk(&pvmw)) {
try_to_migrate_one[1881] while (page_vma_mapped_walk(&pvmw)) {
page_make_device_exclusive_one[2205] while (page_vma_mapped_walk(&pvmw)) {
If we group them, we can see that most of them happen during an rmap walk
(i.e., they come from a higher rmap_walk() stack); they are:
__damon_pa_mkold[24] while (page_vma_mapped_walk(&pvmw)) {
__damon_pa_young[97] while (page_vma_mapped_walk(&pvmw)) {
remove_migration_pte[179] while (page_vma_mapped_walk(&pvmw)) {
page_idle_clear_pte_refs_one[56] while (page_vma_mapped_walk(&pvmw)) {
page_mapped_in_vma[318] if (!page_vma_mapped_walk(&pvmw))
folio_referenced_one[813] while (page_vma_mapped_walk(&pvmw)) {
page_vma_mkclean_one[958] while (page_vma_mapped_walk(pvmw)) {
try_to_unmap_one[1506] while (page_vma_mapped_walk(&pvmw)) {
try_to_migrate_one[1881] while (page_vma_mapped_walk(&pvmw)) {
page_make_device_exclusive_one[2205] while (page_vma_mapped_walk(&pvmw)) {
Let's call it case (A).
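For reference, a simplified view of the case (A) call chain when walking a
hugetlb page (function names as in mm/rmap.c; rwc->rmap_one can be e.g.
try_to_unmap_one()):

        rmap_walk(folio, rwc)                     /* or rmap_walk_locked() */
          rmap_walk_file(folio, rwc, locked)      /* takes i_mmap_rwsem if !locked */
            rwc->rmap_one(folio, vma, addr, arg)  /* e.g. try_to_unmap_one() */
              page_vma_mapped_walk(&pvmw)
                hugetlb_walk(vma, ...)            /* lock rule already satisfied */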
We have another two special cases that do not happen during an rmap walk;
they are:
write_protect_page[1065] if (!page_vma_mapped_walk(&pvmw))
__replace_page[179] if (!page_vma_mapped_walk(&pvmw))
Let's call it case (B).
Case (A) is always safe because it always takes the i_mmap_rwsem lock in
read mode.  That is done in rmap_walk_file(), where:
        if (!locked) {
                if (i_mmap_trylock_read(mapping))
                        goto lookup;

                if (rwc->try_lock) {
                        rwc->contended = true;
                        return;
                }

                i_mmap_lock_read(mapping);
        }
If locked==true it means the caller already holds the lock, so there is no
need to take it again.  That justifies that all callers reaching
page_vma_mapped_walk() from rmap_walk() upon a hugetlb vma are already safe
to call hugetlb_walk(), according to its locking rule.
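For completeness, the only difference between the two entry points is
whether the rmap lock is already held (paraphrased from mm/rmap.c; the KSM
special case is omitted, as it never applies to hugetlb):

        void rmap_walk(struct folio *folio, struct rmap_walk_control *rwc)
        {
                if (folio_test_anon(folio))
                        rmap_walk_anon(folio, rwc, false);
                else
                        rmap_walk_file(folio, rwc, false);
        }

        /* Like rmap_walk(), but the caller holds the relevant rmap lock */
        void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc)
        {
                if (folio_test_anon(folio))
                        rmap_walk_anon(folio, rwc, true);
                else
                        rmap_walk_file(folio, rwc, true);
        }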
Case (B) contains two call sites, one in the KSM path and one in the uprobe
path, and neither path (afaict) can have a hugetlb vma involved: KSM only
merges anonymous pages, and uprobes only patch regular file-backed
executable mappings.  IOW, the branch

        if (unlikely(is_vm_hugetlb_page(vma))) {

in page_vma_mapped_walk() should simply never trigger for them.
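For context, a rough sketch of how that branch calls hugetlb_walk() without
taking any further lock (simplified from mm/page_vma_mapped.c; details
elided, not the verbatim code):

        if (unlikely(is_vm_hugetlb_page(vma))) {
                struct hstate *hstate = hstate_vma(vma);
                unsigned long size = huge_page_size(hstate);
                pte_t *pte;

                /*
                 * Safe without taking further locks: all paths that can
                 * reach here (case (A)) hold i_mmap_rwsem already.
                 */
                pte = hugetlb_walk(vma, pvmw->address, size);
                if (!pte)
                        return false;
                pvmw->pte = pte;
                ...
        }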
Summarizing the above into a shorter paragraph is what becomes the comment
added by this patch.
Hope it explains.  Thanks.