Re: [PATCH 0/2] Fix SEV user-space mapping of unencrypted coherent memory

From: Thomas Hellström (VMware)
Date: Wed Sep 11 2019 - 04:07:19 EST


On 9/11/19 7:59 AM, Ingo Molnar wrote:
* Thomas Hellström (VMware) <thomas_os@xxxxxxxxxxxx> wrote:

With SEV, and sometimes with SME, DMA API coherent memory is typically
unencrypted, meaning the linear kernel map has the encryption bit cleared.
However, the default page protection returned from vm_get_page_prot() has the
encryption bit set. So to compute the correct page protection for a user-space
mapping of such memory, we need to clear the encryption bit.
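
For the user-space mapping part, a minimal sketch of the idea (illustration
only, using the existing x86 pgprot_decrypted() helper; the function name is
made up and the actual patches may do this differently):

	#include <linux/mm.h>
	#include <asm/pgtable.h>

	/*
	 * Sketch: give a user-space vma a page protection that matches the
	 * unencrypted linear map by clearing the encryption bit from the
	 * default protection for its vm_flags.
	 */
	static void example_clear_enc_bit(struct vm_area_struct *vma)
	{
		vma->vm_page_prot =
			pgprot_decrypted(vm_get_page_prot(vma->vm_flags));
	}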

Also, in order for the encryption bit setting to survive across do_mmap() and
mprotect_fixup(), we need to make pgprot_modify() aware of it and not touch it.
Therefore, make sme_me_mask part of _PAGE_CHG_MASK and make sure
pgprot_modify() also preserves cleared bits that are part of _PAGE_CHG_MASK,
not just set bits. The use of pgprot_modify() is currently quite limited and
easy to audit.
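
Roughly, the intended pgprot_modify() behavior would be along these lines
(a sketch of the x86 version in arch/x86/include/asm/pgtable.h; the actual
patch may differ in details):

	/*
	 * Sketch: preserve all _PAGE_CHG_MASK bits from oldprot, whether set
	 * or cleared, by masking them out of newprot. Once sme_me_mask is
	 * part of _PAGE_CHG_MASK, this keeps the encryption bit of oldprot
	 * intact across do_mmap() and mprotect_fixup().
	 */
	static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
	{
		pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK;
		pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK;

		return __pgprot(preservebits | addbits);
	}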

(Note that the encryption status is not logically encoded in the pfn but in
the page protection even if an address line in the physical address is used).

The patchset has seen some sanity testing by exporting dma_pgprot() and
using it in the vmwgfx mmap handler with SEV enabled.
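
For reference, that test roughly amounts to something like the following in a
driver mmap handler (hypothetical example, not the actual vmwgfx code; assumes
dma_pgprot() is exported and its declaration is visible to the driver):

	#include <linux/mm.h>
	#include <linux/dma-mapping.h>

	/*
	 * Hypothetical mmap path: derive the vma protection from the DMA
	 * attributes so that the encryption bit matches the coherent
	 * allocation backing the mapping.
	 */
	static int example_mmap(struct device *dev, struct vm_area_struct *vma)
	{
		vma->vm_page_prot =
			dma_pgprot(dev, vm_get_page_prot(vma->vm_flags), 0);

		/* ...then set up the actual mapping, e.g. remap_pfn_range()... */
		return 0;
	}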

Changes since:
RFC:
- Make sme_me_mask part of _PAGE_CHG_MASK rather than using it on its own in
  pgprot_modify().
Could you please add a "why is this patch-set needed" explanation, not just a
description of what the patch-set does? I've seen zero discussion in the three
changelogs of exactly why we'd want this, which drivers and features are
affected and in what way, etc.

It's called a "fix" but doesn't explain what bad behavior it fixes.

Thanks,

Ingo

I'll update the changelog to make that clearer.

Thanks,

Thomas