Re: [PATCH v2 1/2] mm/damon: validate if the pmd entry is present before accessing

From: Baolin Wang
Date: Sun Aug 21 2022 - 01:22:39 EST

On 8/21/2022 5:17 AM, Andrew Morton wrote:
> On Thu, 18 Aug 2022 15:37:43 +0800 Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:

>> pmd_huge() is used to validate whether a pmd entry is mapped by a
>> huge page, and on arm64 and x86 it also covers non-present (migration
>> or hwpoisoned) pmd entries. For such non-present entries, pmd_pfn()
>> cannot return a correct pfn, so damon_get_page() may operate on a
>> wrong page struct (or on NULL returned by pfn_to_online_page()),
>> which makes the access statistics incorrect.

>> Moreover, it makes no sense to waste time looking up the page of a
>> non-present entry; just treat it as not accessed and skip it, which
>> is consistent with the handling of non-present pte-level entries.

>> Thus, add a check that the pmd entry is present to fix the above
>> issues.
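
For context, the fix boils down to bailing out early when the pmd entry
is not present, before pmd_pfn()/damon_get_page() can be reached. Below
is a rough sketch against damon_mkold_pmd_entry() in mm/damon/vaddr.c
(the same pattern applies to damon_young_pmd_entry()); please take it
as an illustration of the idea rather than the exact hunk:

	if (pmd_huge(*pmd)) {
		ptl = pmd_lock(walk->mm, pmd);
		/*
		 * A non-present pmd (e.g. a migration or hwpoison entry)
		 * carries no valid pfn, so pmd_pfn()/damon_get_page() must
		 * not be used on it.  Treat it as not accessed and skip it,
		 * just like the non-present pte case.
		 */
		if (!pmd_present(*pmd)) {
			spin_unlock(ptl);
			return 0;
		}

		if (pmd_huge(*pmd)) {
			/* existing "mark old" handling of the huge pmd */
			spin_unlock(ptl);
			return 0;
		}
		spin_unlock(ptl);
	}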


> Do we have a Fixes: for this?

OK, should be:
Fixes: 3f49584b262c ("mm/damon: implement primitives for the virtual memory address spaces")

> What are the user-visible runtime effects of the bug? "make the access
> statistics incorrect" is rather vague.

"access statistics incorrect" means that the DAMON may make incorrect decision according to the incorrect statistics, for example, DAMON may can not reclaim cold page in time due to this cold page was regarded as accessed mistakenly if DAMOS_PAGEOUT operation is specified.

> Do we feel that a cc:stable is warranted?

Though this is not a common case, I think this patch is suitable for
backporting to cover it. So please add a stable tag when you apply this
patch, or let me know if you want a new version with the Fixes and
stable tags added. Thanks.