Hi David,

> On 12.06.23 09:10, Kasireddy, Vivek wrote:
>> Hi Mike,
>
> Hi Vivek,
>
>> Sorry for the late reply; I just got back from vacation.
>>
>> If it is unsafe to directly use the subpages of a hugetlb page, then reverting
>> this patch seems like the only option for addressing this issue immediately.
>> So, this patch is
>> Acked-by: Vivek Kasireddy <vivek.kasireddy@xxxxxxxxx>
>>
>> As far as the use-case is concerned, there are two main users of the udmabuf
>> driver: Qemu and CrosVM VMMs. However, it appears Qemu is the only one
>> that uses hugetlb pages (when hugetlb=on is set) as the backing store for
>> Guest (Linux, Android and Windows) system memory. The main goal is to
>> share the pages associated with the Guest allocated framebuffer (FB) with
>> the Host GPU driver and other components in a zero-copy way. To that end,
>> the guest GPU driver (virtio-gpu) allocates 4k size pages (associated with
>> the FB) and pins them before sharing the (guest) physical (or dma) addresses
>> (and lengths) with Qemu. Qemu then translates the addresses into file
>> offsets and shares these offsets with udmabuf.
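
For completeness, the userspace end of that last step looks roughly like the
sketch below. This is not Qemu's actual code, just a minimal illustration of
the UDMABUF_CREATE uapi; the helper name and the (missing) error handling are
mine, and offset/size are assumed to be the page-aligned file offsets derived
from the guest addresses:

#define _GNU_SOURCE             /* for F_ADD_SEALS / F_SEAL_SHRINK */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/udmabuf.h>

/* Wrap one page-aligned range of a sealed memfd into a dma-buf fd. */
static int export_fb_as_dmabuf(int memfd, uint64_t offset, uint64_t size)
{
        struct udmabuf_create create;
        int devfd, buffd;

        /* udmabuf expects the memfd to be sealed against shrinking. */
        if (fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK) < 0)
                return -1;

        devfd = open("/dev/udmabuf", O_RDWR | O_CLOEXEC);
        if (devfd < 0)
                return -1;

        memset(&create, 0, sizeof(create));
        create.memfd  = memfd;                  /* memfd backing guest RAM */
        create.flags  = UDMABUF_FLAGS_CLOEXEC;
        create.offset = offset;                 /* file offset of the FB   */
        create.size   = size;                   /* size of the FB          */

        buffd = ioctl(devfd, UDMABUF_CREATE, &create);
        close(devfd);
        return buffd;                           /* dma-buf fd, -1 on error */
}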
> Is my understanding correct, that we can effectively long-term pin
> (worse than mlock) 64 MiB per UDMABUF_CREATE, allowing eventually !root
> users
>
> ll /dev/udmabuf
> crw-rw---- 1 root kvm 10, 125 12. Jun 08:12 /dev/udmabuf
>
> to bypass their effective MEMLOCK limit, fragmenting physical memory and
> breaking swap?
The 64 MiB limit is the theoretical upper bound that we have not seen hit in
practice. Typically, for a 1920x1080 resolution (commonly used in Guests),
the size of the FB is ~8 MB (1920x1080x4), and most modern Graphics
compositors flip between two FBs.
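
Spelling the numbers out (4 bytes per pixel, as above):

        1920 * 1080 * 4 bytes = 8,294,400 bytes  (~7.9 MiB per FB)
        2 FBs (double buffering) => ~16 MiB, well below the 64 MiB bound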

Right, it does not look like the mlock limits are honored.
> Regarding the udmabuf_vm_fault(), I assume we're mapping pages we
> obtained from the memfd ourselves into a special VMA (mmap() of the
> udmabuf). I'm not sure how well shmem pages are prepared for getting
> mapped by someone else into an arbitrary VMA (page->index?).
The mmap operation is really needed only if any component on the Host needs
CPU access to the buffer. But in most scenarios, we try to ensure direct GPU
access (h/w acceleration via gl) to these pages.

Most drm/gpu drivers use shmem pages as the backing store for FBs and
other buffers and also provide mmap capability. What concerns do you see
with this approach?
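
Just to make that concrete: the CPU-access path is nothing more than an
mmap() of the dma-buf fd that udmabuf exports (rough sketch below, helper
name made up); the common path instead imports the fd into the GPU driver
and never maps the buffer on the CPU side:

#include <stddef.h>
#include <sys/mman.h>

/* Map the exported buffer for CPU access; only needed on the CPU path. */
static void *map_dmabuf_for_cpu(int dmabuf_fd, size_t size)
{
        void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, dmabuf_fd, 0);
        return ptr == MAP_FAILED ? NULL : ptr;
}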
> ... also, just imagine someone doing FALLOC_FL_PUNCH_HOLE / ftruncate()
> on the memfd. What's mapped into the memfd no longer corresponds to
> what's pinned / mapped into the VMA.
IIUC, making use of the DMA_BUF_IOCTL_SYNC ioctl would help with any
coherency issues:
https://www.kernel.org/doc/html/v6.2/driver-api/dma-buf.html#c.dma_buf_sync
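
Concretely, the bracketing described there would look something like this
(a sketch against the uapi in <linux/dma-buf.h>, not code from Qemu or a
compositor; the helper name is made up and the mapping is assumed to come
from an mmap() of the dma-buf fd as above):

#include <stddef.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Bracket CPU writes to the mapping with SYNC_START / SYNC_END. */
static void cpu_write_pixels(int dmabuf_fd, void *map, const void *src,
                             size_t len)
{
        struct dma_buf_sync sync = {
                .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
        };

        ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);    /* begin CPU access */
        memcpy(map, src, len);                          /* the CPU access   */

        sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
        ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);    /* end CPU access   */
}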
> Was linux-mm (and especially shmem maintainers, ccing Hugh) involved in
> the upstreaming of udmabuf?
It does not appear so from the link below although other key lists were cc'd:
https://patchwork.freedesktop.org/patch/246100/?series=39879&rev=7