Re: [PATCH v5 1/3] drm/shmem: add support for per object caching flags.

From: Qiang Yu
Date: Sun Mar 01 2020 - 20:57:14 EST


On Thu, Feb 27, 2020 at 9:21 PM Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
>
> Hi,
>
> > > So I'd like to push patches 1+2 to -fixes and sort everything else later
> > > in -next. OK?
> >
> > OK with me.
>
> Done.
>
> >> [ context: why shmem helpers use pgprot_writecombine + pgprot_decrypted?
> >> we get conflicting mappings because of that, linear kernel
> >> map vs. gem object vmap/mmap ]
>
> > Do we have any idea what drivers are actually using
> > write-combine and decrypted?
>
> drivers/gpu/drm# find -name Kconfig* -print | xargs grep -l DRM_GEM_SHMEM_HELPER
> ./lima/Kconfig
> ./tiny/Kconfig
> ./cirrus/Kconfig
> ./Kconfig
> ./panfrost/Kconfig
> ./udl/Kconfig
> ./v3d/Kconfig
> ./virtio/Kconfig
>
> virtio needs cached.
> cirrus+udl should be ok with cached too.
>
> No clue about the others (lima, tiny, panfrost, v3d). Maybe they use
> write-combine just because this is what they got by default from
> drm_gem_mmap_obj(). Maybe they actually need that. Trying to Cc:
> maintainers (and drop stable@).
>
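For reference, the write-combine + decrypted default comes from
drm_gem_mmap_obj(); roughly the following lines (paraphrased from
memory, so treat the exact details as an approximation):

	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
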
The lima driver needs write-combine mapped buffers for the GPU
hardware to work properly, because the CPU and GPU are not cache
coherent. Although we haven't hit problems caused by the kernel/user
mapping conflict, I do admit it's a real issue; I ran into it with
amdgpu on arm before.
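
Something like the per-object flag from this series should let lima
keep write-combine while virtio maps cached; a minimal sketch,
assuming a map_cached bool on drm_gem_shmem_object (flag name taken
from the patch subject, not copied from the actual series):

	/* Sketch only: pick the mmap prot from a per-object flag.
	 * "map_cached" is assumed from the patch subject. */
	static void shmem_update_page_prot(struct drm_gem_shmem_object *shmem,
					   struct vm_area_struct *vma)
	{
		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
		if (!shmem->map_cached)
			vma->vm_page_prot =
				pgprot_writecombine(vma->vm_page_prot);
	}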

With TTM we can control page allocation and keep a pool of
write-combine pages, but shmem does not seem friendly to
write-combine pages.
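
For comparison, a TTM-style pool flips the kernel linear-map alias
with the x86 set_pages_array_*() helpers, which shmem-backed pages
can't easily get because the kernel may still touch them through the
cached linear mapping (e.g. on swap-out). A rough sketch of one pool
element, just to illustrate the idea:

	/* Rough sketch of a TTM-style WC page (x86 helpers); not
	 * applicable to shmem pages, which the kernel may still
	 * access via the cached linear mapping. */
	static struct page *alloc_wc_page(void)
	{
		struct page *page = alloc_page(GFP_KERNEL);

		if (page && set_pages_array_wc(&page, 1)) {
			__free_page(page);
			page = NULL;
		}
		return page;
	}

	static void free_wc_page(struct page *page)
	{
		set_pages_array_wb(&page, 1);	/* restore cached attr */
		__free_page(page);
	}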

Thanks,
Qiang