Re: [PATCH v4 35/36] mm: Convert do_set_pte() to set_pte_range()

From: Yin Fengwei
Date: Tue Mar 21 2023 - 01:17:44 EST


On 3/20/23 22:08, Matthew Wilcox wrote:
On Mon, Mar 20, 2023 at 09:38:55PM +0800, Yin, Fengwei wrote:
Thanks a lot to Ryan for helping to test the debug patch I made.

Ryan confirmed that the following change fixes the kernel build regression:
diff --git a/mm/filemap.c b/mm/filemap.c
index db86e459dde6..343d6ff36b2c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3557,7 +3557,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,

ret |= filemap_map_folio_range(vmf, folio,
xas.xa_index - folio->index, addr, nr_pages);
- xas.xa_index += nr_pages;
+ xas.xa_index += folio_test_large(folio) ? nr_pages : 0;

folio_unlock(folio);
folio_put(folio);

I will make an upstream-able change as "xas.xa_index += nr_pages - 1;".
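
As far as I can tell, the underlying issue is that next_map_page() already
advances the iterator through xas_next_entry(), so bumping xas.xa_index by the
full nr_pages steps past one extra entry and order-0 folios get skipped. A
rough userspace analogue of that off-by-one (next_entry() below is only an
illustrative stand-in for the order-0 fast path, not the XArray API):

#include <stdio.h>

#define NR_ENTRIES 4

/* Illustrative stand-in for the iterator's next-entry step: advance the
 * index by one and return the entry there, or -1 once past the end. */
static int next_entry(int *index)
{
	++*index;
	return *index < NR_ENTRIES ? *index : -1;
}

int main(void)
{
	int index = 0;		/* models xas.xa_index */
	int entry = index;	/* entry found by the initial lookup */
	int nr_pages = 1;	/* one page per order-0 folio */

	while (entry >= 0) {
		printf("map entry %d\n", entry);
		/*
		 * The buggy adjustment: next_entry() advances the index
		 * again below, so entries 1 and 3 are never mapped.
		 * Dropping this line (or adding nr_pages - 1) visits
		 * every entry.
		 */
		index += nr_pages;
		entry = next_entry(&index);
	}
	return 0;
}

With the adjustment reduced to nr_pages - 1 (or removed), the sketch visits
all four entries instead of every other one.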

Thanks to both of you!

Really, we shouldn't need to interfere with xas.xa_index at all.
Does this work?
Yes. This works perfectly on my side. Thanks.
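
What I also like about a single next_uptodate_folio() helper is that the
caller no longer owns any cursor state; as far as I can tell, xas_next_entry()
falls back to xas_find() when the walk has not started yet, so the same call
covers both the old first_map_page() and next_map_page() cases. A rough
userspace analogue of that shape (struct iter and next_item() are only
illustrative, not kernel API):

#include <stdio.h>
#include <stdbool.h>

#define NR_ITEMS 3

/* Illustrative iterator that owns its own cursor: the first call finds
 * the item at the current index, later calls advance past it. */
struct iter {
	int  index;
	bool started;
};

static int next_item(struct iter *it)
{
	if (!it->started)
		it->started = true;	/* first call: lookup at index */
	else
		it->index++;		/* later calls: advance the cursor */
	return it->index < NR_ITEMS ? it->index : -1;
}

int main(void)
{
	struct iter it = { .index = 0, .started = false };
	int item = next_item(&it);	/* plays the old first_map_page() role */

	while (item >= 0) {
		printf("map item %d\n", item);	/* caller never touches it.index */
		item = next_item(&it);		/* plays the old next_map_page() role */
	}
	return 0;
}

With the loop written this way, the body never needs to touch the index at
all, which is exactly what removes the adjustment that caused the regression.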

Regards
Yin, Fengwei


diff --git a/mm/filemap.c b/mm/filemap.c
index 8e4f95c5b65a..e40c967dde5f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3420,10 +3420,10 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio,
return false;
}
-static struct folio *next_uptodate_page(struct folio *folio,
- struct address_space *mapping,
- struct xa_state *xas, pgoff_t end_pgoff)
+static struct folio *next_uptodate_folio(struct xa_state *xas,
+ struct address_space *mapping, pgoff_t end_pgoff)
{
+ struct folio *folio = xas_next_entry(xas, end_pgoff);
unsigned long max_idx;
do {
@@ -3461,22 +3461,6 @@ static struct folio *next_uptodate_page(struct folio *folio,
return NULL;
}
-static inline struct folio *first_map_page(struct address_space *mapping,
- struct xa_state *xas,
- pgoff_t end_pgoff)
-{
- return next_uptodate_page(xas_find(xas, end_pgoff),
- mapping, xas, end_pgoff);
-}
-
-static inline struct folio *next_map_page(struct address_space *mapping,
- struct xa_state *xas,
- pgoff_t end_pgoff)
-{
- return next_uptodate_page(xas_next_entry(xas, end_pgoff),
- mapping, xas, end_pgoff);
-}
-
/*
* Map page range [start_page, start_page + nr_pages) of folio.
* start_page is gotten from start by folio_page(folio, start)
@@ -3552,7 +3536,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
int nr_pages = 0;
rcu_read_lock();
- folio = first_map_page(mapping, &xas, end_pgoff);
+ folio = next_uptodate_folio(&xas, mapping, end_pgoff);
if (!folio)
goto out;
@@ -3574,11 +3558,11 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
ret |= filemap_map_folio_range(vmf, folio,
xas.xa_index - folio->index, addr, nr_pages);
- xas.xa_index += nr_pages;
folio_unlock(folio);
folio_put(folio);
- } while ((folio = next_map_page(mapping, &xas, end_pgoff)) != NULL);
+ folio = next_uptodate_folio(&xas, mapping, end_pgoff);
+ } while (folio);
pte_unmap_unlock(vmf->pte, vmf->ptl);
out:
rcu_read_unlock();

Ryan and I also identified some other changes that are needed. I am not sure
how to integrate those changes into this series. Maybe as an add-on patch
after this series? Thanks.

Up to you; I'm happy to integrate fixup patches into the current series
or add on new ones.