Re: [RFC PATCH] drm/ttm: force cached mappings for system RAM on ARM

From: Koenig, Christian
Date: Mon Jan 14 2019 - 06:41:12 EST


Am 14.01.19 um 11:53 schrieb Ard Biesheuvel:
>> On Thu, 10 Jan 2019 at 10:34, Michel Dänzer <michel@xxxxxxxxxxx> wrote:
>> On 2019-01-10 8:28 a.m., Ard Biesheuvel wrote:
>>> ARM systems do not permit the use of anything other than cached
>>> mappings for system memory, since that memory may be mapped in the
>>> linear region as well, and the architecture does not permit aliases
>>> with mismatched attributes.
>>>
>>> So short-circuit the evaluation in ttm_io_prot() if the flags include
>>> TTM_PL_SYSTEM when running on ARM or arm64, and just return cached
>>> attributes immediately.
>>>
>>> This fixes the radeon and amdgpu [TBC] drivers when running on arm64.
>>> Without this change, amdgpu does not start at all, and radeon only
>>> produces corrupt display output.
>>>
>>> Cc: Christian Koenig <christian.koenig@xxxxxxx>
>>> Cc: Huang Rui <ray.huang@xxxxxxx>
>>> Cc: Junwei Zhang <Jerry.Zhang@xxxxxxx>
>>> Cc: David Airlie <airlied@xxxxxxxx>
>>> Reported-by: Carsten Haitzler <Carsten.Haitzler@xxxxxxx>
>>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
>>> ---
>>> drivers/gpu/drm/ttm/ttm_bo_util.c | 5 +++++
>>> 1 file changed, 5 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>> index 046a6dda690a..0c1eef5f7ae3 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>> @@ -530,6 +530,11 @@ pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
>>>  	if (caching_flags & TTM_PL_FLAG_CACHED)
>>>  		return tmp;
>>>
>>> +#if defined(__arm__) || defined(__aarch64__)
>>> +	/* ARM only permits cached mappings of system memory */
>>> +	if (caching_flags & TTM_PL_SYSTEM)
>>> +		return tmp;
>>> +#endif
>>>  #if defined(__i386__) || defined(__x86_64__)
>>>  	if (caching_flags & TTM_PL_FLAG_WC)
>>>  		tmp = pgprot_writecombine(tmp);
>>>
>> Apart from Christian's concerns, I think this is the wrong place for
>> this, because other TTM / driver code will still consider the memory
>> uncacheable. E.g. the amdgpu driver will program the GPU to treat the
>> memory as uncacheable, so it won't participate in cache coherency
>> protocol for it, which is unlikely to work as expected in general if the
>> CPU treats the memory as cacheable.
>>
> Will and I have spent some time digging into this, so allow me to
> share some preliminary findings while we carry on and try to fix this
> properly.
>
> - The patch above is flawed, i.e., it doesn't do what it intends to
> since it uses TTM_PL_SYSTEM instead of TTM_PL_FLAG_SYSTEM. Apologies
> for that.
> - The existence of a linear region mapping with mismatched attributes
> is likely not the culprit here. (We do something similar with
> non-cache coherent DMA in other places).

This is still rather problematic.

The issue is that we often don't create a vmap for a page, but rather
access the page directly using the linear mapping.

So we would use the wrong access type here.
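
For illustration, a minimal sketch of that aliasing (the helper below
is hypothetical, not the actual TTM call chain): on arm64 there is no
highmem, so kmap() simply returns the linear-map address, which is
always a cached mapping, no matter which caching flags the BO was
created with.

#include <linux/highmem.h>
#include <linux/string.h>

/* Hypothetical helper: touches a BO page through the linear map. */
static void touch_bo_page(struct page *page)
{
        /* kmap() == page_address() here: a *cached* alias, even if
         * the BO placement asked for WC or uncached. */
        void *va = kmap(page);

        memset(va, 0, PAGE_SIZE);
        kunmap(page);
}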

> - The reason remapping the CPU side as cacheable does work (which I
> did test) is because the GPU's uncacheable accesses (which I assume
> are made using the NoSnoop PCIe transaction attribute) are actually
> emitted as cacheable in some cases.
> . On my AMD Seattle, with or without SMMU (which is stage 2 only), I
> must use cacheable accesses from the CPU side or things are broken.
> This might be a h/w flaw, though.
> . On systems with stage 1+2 SMMUs, the driver uses stage 1
> translations which always override the memory attributes to cacheable
> for DMA coherent devices. This is what is affecting the Cavium
> ThunderX2 (although it appears the attributes emitted by the RC may be
> incorrect as well).
>
> The latter issue is a shortcoming in the SMMU driver that we have to
> fix, i.e., it should take care not to modify the incoming attributes
> of DMA coherent PCIe devices for NoSnoop to be able to work.
>
> So in summary, the mismatch appears to be between the CPU accessing
> the vmap region with non-cacheable attributes and the GPU accessing
> the same memory with cacheable attributes, resulting in a loss of
> coherency and lots of visible corruption.

Actually it is the other way around. The CPU thinks some data is in the
cache and the GPU only updates the system memory version because the
snoop flag is not set.
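
The generic kernel pattern for coping with exactly that (a
non-snooping device writing behind the CPU's cache) is explicit
maintenance through the streaming DMA API. A minimal sketch, assuming
a plain dma_map_single() mapping rather than TTM's actual paths:

#include <linux/dma-mapping.h>

/* Sketch: make the device's non-snooped writes visible to the CPU.
 * 'dev', 'handle' and 'size' are assumed to come from an earlier
 * dma_map_single(..., DMA_FROM_DEVICE). */
static void read_gpu_output(struct device *dev, dma_addr_t handle,
                            size_t size)
{
        /* Invalidate the CPU's (possibly stale) cache lines. */
        dma_sync_single_for_cpu(dev, handle, size, DMA_FROM_DEVICE);

        /* ... CPU now sees what the device wrote to DRAM ... */

        /* Hand the buffer back to the device. */
        dma_sync_single_for_device(dev, handle, size, DMA_FROM_DEVICE);
}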

> To be able to debug this further, could you elaborate a bit on
> - How does the hardware emit those uncached/wc inbound accesses? Do
> they rely on NoSnoop?

The GPU has a separate page walker in the MC, and the page tables there
have bits saying whether the access should go out to the PCIe bus and,
if so, whether the snoop bit should be set.
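
For reference, amdgpu names these bits in its GPUVM page table format
(defines from drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h; the comments are
added here):

#define AMDGPU_PTE_VALID        (1ULL << 0)
#define AMDGPU_PTE_SYSTEM       (1ULL << 1)  /* backed by system memory,
                                              * access goes over PCIe */
#define AMDGPU_PTE_SNOOPED      (1ULL << 2)  /* set the snoop attribute
                                              * on those accesses */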

> - Christian pointed out that some accesses must be uncached even when
> not using WC. What kind of accesses are those? And do they access
> system RAM?

On some hardware generations we have a buggy engine which fails to
forward the snoop bit, and because of this the system memory pages used
by that engine must be uncached. But this only applies if you use ROCm
in a specific configuration.

Regards,
Christian.