Re: [PATCH] mm: hwpoison: deal with page cache THP

From: Yang Shi
Date: Fri Aug 27 2021 - 01:02:39 EST


On Thu, Aug 26, 2021 at 8:57 PM HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@xxxxxxx> wrote:
>
> On Thu, Aug 26, 2021 at 03:03:57PM -0700, Yang Shi wrote:
> > On Thu, Aug 26, 2021 at 1:03 PM Yang Shi <shy828301@xxxxxxxxx> wrote:
> > >
> > > On Wed, Aug 25, 2021 at 11:17 PM HORIGUCHI NAOYA(堀口 直也)
> > > <naoya.horiguchi@xxxxxxx> wrote:
> > > >
> > > > On Tue, Aug 24, 2021 at 03:13:22PM -0700, Yang Shi wrote:
> ...
> > > >
> > > > There was a discussion about another approach: keeping error pages in the
> > > > page cache for filesystems without backend storage.
> > > > https://lore.kernel.org/lkml/alpine.LSU.2.11.2103111312310.7859@eggly.anvils/
> > > > This approach seems less complicated to me, but one concern is that it
> > > > changes the user-visible behavior of memory errors. Keeping error pages
> > > > in the page cache means that the errors persist until the next system
> > > > reboot, so we might need to define a way to clear the errors so that the
> > > > affected file can continue to be used. The current implementation just
> > > > sends SIGBUS to the mapping processes (at least once), then forgets about
> > > > the error, so there is no such issue.
> > > >
> > > > Another possible solution might be to send SIGBUS immediately when a
> > > > memory error happens on a shmem THP. We can find all the mapping processes
> > > > before splitting the shmem THP, so send SIGBUS first, then split it and
> > > > contain the error page. This is not elegant (it gives up any optional
> > > > actions), but at least we avoid the silent data loss.
> > >
> > > Thanks a lot. I apologize that I didn't notice you had already posted a
> > > similar patch before.
> > >
> > > Yes, I think I focused too much on the soft-offline part and missed the
> > > uncorrected error part; I admit I underestimated the problem.
> > >
> > > I think Hugh's suggestion makes sense if we treat tmpfs as a regular
> > > filesystem (just memory backed). AFAIK, some filesystems, e.g. btrfs,
> > > verify a checksum after reading a block from storage and return an
> > > error if the checksum doesn't match, since a mismatch may indicate a
> > > hardware failure on disk. The syscall then returns an error, or the
> > > page fault raises SIGBUS.
> > >
> > > So in the shmem/tmpfs case, if a hwpoisoned page is encountered, just
> > > return an error (-EIO or whatever) for syscalls, or SIGBUS for page
> > > faults. That aligns with the behavior of other filesystems, and it is
> > > definitely the applications' responsibility to check the return value
> > > of read/write syscalls.
> >
> > BTW, IIUC a dirty regular (storage-backed) page cache page would be left
> > in the page cache too, while a clean one would be truncated since it can
> > just be reread from storage, right?
>
> A dirty page cache page is also removed on error (me_pagecache_dirty()
> falls through to me_pagecache_clean(), then truncate_error_page() is
> called). The main purpose of this is to detach the error page from
> existing data structures to minimize the risk of later accesses (maybe
> by race or bug). But we can change this behavior for specific
> filesystems by updating the error_remove_page() callback in
> address_space_operations.

Yeah, if the fs's error_remove_page() is defined. It seems the
filesystems that define error_remove_page() all use
generic_error_remove_page(), except hugetlbfs. And the generic
implementation just clears the dirty flag and removes the page from the
page cache.
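
For reference, the generic callback looks roughly like the below
(paraphrasing mm/truncate.c, modulo kernel version); truncate_inode_page()
is what cancels the dirty flag and deletes the page from the page cache:

int generic_error_remove_page(struct address_space *mapping, struct page *page)
{
        if (!mapping)
                return -EINVAL;
        /*
         * Only punch for normal data pages for now.
         * Handling other types like directories would need more auditing.
         */
        if (!S_ISREG(mapping->host->i_mode))
                return -EIO;
        return truncate_inode_page(mapping, page);
}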

If error_remove_page() is not defined, the page would stay in the page
cache since invalidate_inode_page() can't remove a dirty page.
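
That is, the no-callback fallback in truncate_error_page() relies on
invalidate_inode_page(), which (roughly, from mm/truncate.c) bails out
on dirty or under-writeback pages:

int invalidate_inode_page(struct page *page)
{
        struct address_space *mapping = page_mapping(page);

        if (!mapping)
                return 0;
        /* Dirty or under-writeback pages are left alone */
        if (PageDirty(page) || PageWriteback(page))
                return 0;
        if (page_mapped(page))
                return 0;
        return invalidate_complete_page(mapping, page);
}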

>
> Honestly, it seems to me that how dirty data is lost does not depend on
> the filesystem, and I'm still not sure that this is really the right
> approach for the current issue.

IMHO the biggest problem is that applications may silently see
obsolete/inconsistent data, right? Keeping the corrupted page in the
page cache would actually make it possible to notify applications that
they are accessing inconsistent data.
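
Something like the below in the shmem read/fault path would do it (just
a hypothetical sketch; the exact hook point and names are illustrative):

        /*
         * Hypothetical check: if a poisoned page is still attached to
         * the mapping, fail the access instead of silently returning
         * stale data.
         */
        page = find_lock_page(mapping, index);
        if (page && PageHWPoison(page)) {
                unlock_page(page);
                put_page(page);
                return -EIO;    /* or VM_FAULT_HWPOISON from the fault path */
        }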

>
> Thanks,
> Naoya Horiguchi