Re: [v2 PATCH] mm: shmem: allow split THP when truncating THP partially

From: Hugh Dickins
Date: Tue Feb 25 2020 - 15:31:33 EST


On Tue, 25 Feb 2020, David Hildenbrand wrote:
> >
> > I notice that this thread has veered off into QEMU ballooning
> > territory: which may indeed be important, but there's nothing at all
> > that I can contribute on that. I certainly do not want to slow down
> > anything important, but remain convinced that the correct filesystem
> > implementation for punching a hole is to punch a hole.
>
> I am not completely sure I follow all the shmem details (sorry!). But
> trying to "punch a partial hole punch" into a hugetlbfs page will result
> in the very same behavior as with shmem as of now, no?

I believe so.

>
> FALLOC_FL_PUNCH_HOLE: "Within the specified range, partial filesystem
> blocks are zeroed, and whole filesystem blocks are removed from the
> file. ... After a successful call, subsequent reads from this range
> will return zeros."
>
> So, as long as we are talking about partial blocks the documented
> behavior seems to be to only zero the memory.
>
> Does this patch fix "FALLOC_FL_PUNCH_HOLE does not free blocks if called
> in block granularity on shmem" (which would be a valid fix),

Yes. The block size of tmpfs is (talking x86_64 for simplicity) 4KiB;
but when mounted huge, it transparently takes advantage of 2MiB extents
when it can. It is rather like a disk-based filesystem that always
presents a 4KiB block interface, but stores its data on disk in
multisector extents.
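
To make that concrete, here is a minimal sketch (not part of the
patch; the /mnt/tmpfs path and a huge=always tmpfs mount are
assumptions for illustration) that punches one whole 4KiB block out
of a 2MiB extent and watches st_blocks:

/* Minimal sketch, not from the patch: punch one 4KiB block out of a
 * file backed by a 2MiB extent and watch st_blocks.  The /mnt/tmpfs
 * path and the huge=always tmpfs mount are assumed for illustration.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/mnt/tmpfs/testfile";	/* assumed path */
	char buf[4096];
	struct stat st;
	off_t off;
	int fd;

	fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Write 2MiB so a huge mount can back it with one 2MiB extent. */
	memset(buf, 0xaa, sizeof(buf));
	for (off = 0; off < 2048 * 1024; off += sizeof(buf))
		if (pwrite(fd, buf, sizeof(buf), off) != sizeof(buf)) {
			perror("pwrite");
			return 1;
		}

	fstat(fd, &st);
	printf("before punch: %lld blocks of 512B\n", (long long)st.st_blocks);

	/* Block-granular punch: one whole 4KiB block, far smaller than
	 * the 2MiB extent that may back it. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      0, 4096) < 0) {
		perror("fallocate");
		return 1;
	}

	fstat(fd, &st);
	printf("after punch:  %lld blocks of 512B\n", (long long)st.st_blocks);

	close(fd);
	unlink(path);
	return 0;
}

Without the fix, a huge-mounted tmpfs should only zero that range, so
the second st_blocks figure stays put; with it, the THP can be split
and the 4KiB block actually freed, so st_blocks should drop by 8,
just as a 4KiB-block disk filesystem would behave.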

Whereas hugetlbfs is a different filesystem, which is and always has
been limited to supporting only certain larger block sizes.
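
A quick way to see that distinction (mount points again assumed for
illustration): statfs() reports tmpfs's block size as 4KiB even when
mounted huge, whereas hugetlbfs reports one of its larger page sizes:

/* Small sketch: ask each filesystem what block size it presents.
 * Both mount points are assumptions for illustration. */
#include <stdio.h>
#include <sys/vfs.h>

static void show(const char *path)
{
	struct statfs sfs;

	if (statfs(path, &sfs) == 0)
		printf("%-12s f_bsize = %ld\n", path, (long)sfs.f_bsize);
	else
		perror(path);
}

int main(void)
{
	show("/mnt/tmpfs");	/* tmpfs: 4096, even when mounted huge */
	show("/mnt/huge");	/* hugetlbfs: its huge page size, e.g. 2MiB */
	return 0;
}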

> or does it
> try to implement something that is not documented? (removing partial
> blocks when called in sub-block granularity)

No.

>
> I assume the latter, in which case I would interpret "punching a hole is
> to punch a hole" as "punching sub-blocks will not free blocks".
>
> (if somebody could enlighten me which important piece I am missing or
> messing up, that would be great :) )
>
> --
> Thanks,
>
> David / dhildenb