Re: [PATCH v2 1/1] mm: fix folio_pte_batch() on XEN PV

From: David Hildenbrand
Date: Sun May 04 2025 - 02:48:02 EST


On 04.05.25 03:28, Andrew Morton wrote:
> On Fri, 2 May 2025 23:50:19 +0200 Petr Vaněk <arkamar@xxxxxxxx> wrote:

>> On XEN PV, folio_pte_batch() can incorrectly batch beyond the end of a
>> folio due to a corner case in pte_advance_pfn(). Specifically, when the
>> PFN following the folio maps to an invalidated MFN,
>>
>> 	expected_pte = pte_advance_pfn(expected_pte, nr);
>>
>> produces a pte_none(). If the actual next PTE in memory is also
>> pte_none(), the pte_same() succeeds,
>>
>> 	if (!pte_same(pte, expected_pte))
>> 		break;
>>
>> the loop is not broken, and batching continues into unrelated memory.
>>
>> ...

> Looks OK for now I guess but it looks like we should pay some attention
> to what types we're using.
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -248,11 +248,9 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>  		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
>>  		bool *any_writable, bool *any_young, bool *any_dirty)
>>  {
>> -	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
>> -	const pte_t *end_ptep = start_ptep + max_nr;
>>  	pte_t expected_pte, *ptep;
>>  	bool writable, young, dirty;
>> -	int nr;
>> +	int nr, cur_nr;
>>  
>>  	if (any_writable)
>>  		*any_writable = false;
>> @@ -265,11 +263,15 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>  	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
>>  	VM_WARN_ON_FOLIO(page_folio(pfn_to_page(pte_pfn(pte))) != folio, folio);
>>  
>> +	/* Limit max_nr to the actual remaining PFNs in the folio we could batch. */
>> +	max_nr = min_t(unsigned long, max_nr,
>> +		       folio_pfn(folio) + folio_nr_pages(folio) - pte_pfn(pte));
>> +
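
Just to spell out why the clamp is sufficient: the loop only ever compares
the next PTE in the table against an expected value derived from the previous
one. Condensed sketch (only the shape of the loop, not the literal
mm/internal.h code):

	expected_pte = pte_advance_pfn(expected_pte, nr);
	pte = ptep_get(ptep);
	/*
	 * On XEN PV, expected_pte can become pte_none() once it has been
	 * advanced past the folio; if the unrelated entry read above happens
	 * to be pte_none() as well, the check below does not break out of
	 * the loop and the batch keeps growing.
	 */
	if (!pte_same(pte, expected_pte))
		break;

With max_nr clamped to the PFNs remaining in the folio, expected_pte is never
advanced past the folio in the first place, so that bogus match cannot happen.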

> Methinks max_nr really wants to be unsigned long.

We only batch within a single PTE table, so an integer was sufficient.

The choice of a signed int is the result of a discussion with Ryan regarding
similar/related (rmap) functions:

"
Personally I'd go with signed int (since
that's what all the counters in struct folio that we are manipulating are,
underneath the atomic_t) then check that nr_pages > 0 in
__folio_rmap_sanity_checks().
"

https://lore.kernel.org/linux-mm/20231204142146.91437-14-david@xxxxxxxxxx/T/#ma0bfff0102f0f2391dfa94aa22a8b7219b92c957

As soon as we let "max_nr" be an "unsigned long", the return value should
also be an "unsigned long", and so should everybody calling that function.

In this case, we should likely just use whatever type "max_nr" already is.

I am not sure myself whether we should change that to unsigned long or long here.
Some callers also operate with negative values IIRC (e.g., adjust the RSS by
doing -= nr).
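
As a toy userspace illustration of that last point (nothing to do with the
kernel code itself), negating an unsigned count and widening it afterwards is
where the surprises tend to come from:

	#include <stdio.h>

	int main(void)
	{
		unsigned int nr = 3;	/* imagine the batch size were unsigned */
		long rss_delta = -nr;	/* negation happens in unsigned arithmetic */

		/* On LP64 this prints 4294967293, not -3. */
		printf("%ld\n", rss_delta);

		/* With a signed count we get the intended -3. */
		long nr_signed = 3;
		printf("%ld\n", -nr_signed);
		return 0;
	}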

> That will permit the
> cleanup of quite a bit of truncation, extension, signedness conversion
> and general type chaos in folio_pte_batch()'s various callers.
> And...

> Why does folio_nr_pages() return a signed quantity? It's a count.

A partial answer is in 1ea5212aed068 ("mm: factor out large folio handling
from folio_nr_pages() into folio_large_nr_pages()"), where I stumbled over the
reason for the signed value myself and at least made the other
functions consistent with folio_nr_pages():

"
While at it, let's consistently return a "long" value from all these
similar functions. Note that we cannot use "unsigned int" (even though
_folio_nr_pages is of that type), because it would break some callers that
do stuff like "-folio_nr_pages()". Both "int" or "unsigned long" would
work as well.

"

Note that folio_nr_pages() has returned a "long" since the very beginning,
probably using a signed value for consistency, because mapcounts / refcounts
are all signed as well.
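
To make the quoted point concrete, the pattern that rules out "unsigned int"
is simply something like (illustrative, not a specific call site):

	long delta = -folio_nr_pages(folio);	/* e.g. -512 for a 2 MiB folio */

If the return type were an unsigned int, the negation would wrap before being
widened to "long", just like in the toy example further up.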



> And why the heck is folio_pte_batch() inlined? It's larger than my
> first hard disk and it has five callsites!

:)

In the case of fork/zap we really want it inlined, because:

(1) We want to optimize out all of the unnecessary checks we added for other users.

(2) Zap/fork code is very sensitive to function call overhead.

As that function sees more widespread use, we will probably want a
non-inlined variant that can be used in places where performance doesn't
matter all that much (although I am not sure there will be that many).
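
If we get there, it could be as simple as an out-of-line wrapper around the
existing inline helper; something like the sketch below (hypothetical: the
name and the 0/NULL "don't care" arguments are purely illustrative and not
in the tree):

	/*
	 * Hypothetical out-of-line variant for callers that are not
	 * performance critical and don't need the optional outputs.
	 */
	int folio_pte_batch_slow(struct folio *folio, unsigned long addr,
				 pte_t *start_ptep, pte_t pte, int max_nr)
	{
		return folio_pte_batch(folio, addr, start_ptep, pte, max_nr,
				       0, NULL, NULL, NULL);
	}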

--
Cheers,

David / dhildenb