Re: [PATCHv3 14/17] x86/mm: Introduce direct_mapping_size

From: Dave Hansen
Date: Mon Jun 18 2018 - 09:22:10 EST


On 06/18/2018 06:12 AM, Kirill A. Shutemov wrote:
> On Wed, Jun 13, 2018 at 06:37:07PM +0000, Dave Hansen wrote:
>> On 06/12/2018 07:39 AM, Kirill A. Shutemov wrote:
>>> Kernel need to have a way to access encrypted memory. We are going to
>> "The kernel needs"...
>>
>>> use per-KeyID direct mapping to facilitate the access with minimal
>>> overhead.
>>
>> What are the security implications of this approach?
>
> I'll add this to the message:
>
> Per-KeyID mappings require a lot more virtual address space. On a 4-level
> machine with 64 KeyIDs we max out the 46-bit virtual address space dedicated
> to the direct mapping with just 1TiB of RAM. Given that we round up any
> calculation of the direct mapping size to 1TiB, we effectively claim the
> whole 46-bit address space for the direct mapping on such a machine,
> regardless of RAM size.
...
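
For the record, here is the back-of-the-envelope math behind that claim,
as I read it (a standalone user-space sketch, not kernel code; the 46-bit
area, the 64 KeyIDs and the 1TiB rounding are taken from your text above):

#include <stdio.h>

int main(void)
{
	unsigned long long tib = 1ULL << 40;
	unsigned long long direct_map_area = 1ULL << 46; /* 64 TiB on 4-level paging */
	unsigned int nr_keyids = 64;                     /* including KeyID-0 */
	unsigned long long ram = 1 * tib;

	/* Each per-KeyID mapping is rounded up to a 1TiB boundary. */
	unsigned long long per_keyid = (ram + tib - 1) / tib * tib;
	unsigned long long total = per_keyid * nr_keyids;

	printf("per-KeyID mapping: %llu TiB\n", per_keyid / tib);
	printf("all KeyIDs: %llu of %llu TiB available\n",
	       total / tib, direct_map_area / tib);
	return 0;
}

With 1TiB of RAM that already comes to 64 of the 64 TiB available, i.e.
the whole 46-bit area is gone.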

I was thinking more in terms of the exposure of keeping the plaintext
mapped all the time.

Imagine Meltdown if the decrypted pages were not mapped into the kernel:
this feature could actually have protected user data.

But this scheme exposes the data... all the data... with all possible
keys... all the time. That's one heck of an attack surface.
Can we do better?

>>> struct page_ext_operations page_mktme_ops = {
>>> .need = need_page_mktme,
>>> };
>>> +
>>> +void __init setup_direct_mapping_size(void)
>>> +{
...
>>> +}
>>
>> Do you really need two copies of this function? Shouldn't it see
>> mktme_status!=MKTME_ENUMERATED and just jump out? How is the code
>> before that "goto out" different from the CONFIG_MKTME=n case?
>
> mktme.c is not compiled for CONFIG_MKTME=n.

I'd rather have one copy in shared code that is mostly optimized away
when CONFIG_MKTME=n than two copies.
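
To make that concrete, the shape I have in mind is roughly the following.
This is only a sketch: the 1:1 sizing, the rounding, and the assumption
that mktme_status/MKTME_ENUMERATED are visible (as stubs or constants) in
CONFIG_MKTME=n builds are mine, not what the series actually does:

/* One copy, shared between MKTME and non-MKTME builds. */
void __init setup_direct_mapping_size(void)
{
	/* Plain 1:1 direct mapping for the common case. */
	direct_mapping_size = max_pfn << PAGE_SHIFT;

	if (!IS_ENABLED(CONFIG_MKTME) || mktme_status != MKTME_ENUMERATED)
		return;

	/*
	 * MKTME case: round the per-KeyID mapping up to a 1TiB boundary
	 * so every KeyID's mapping starts at a fixed stride.
	 */
	direct_mapping_size = round_up(direct_mapping_size, 1UL << 40);
}

With IS_ENABLED(), the whole tail compiles away for CONFIG_MKTME=n and
there is no second copy of the function to keep in sync.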