Re: [PATCH v3 4/5] arch/x86: Optionally flush L1D on context switch

From: Thomas Gleixner
Date: Sat Apr 18 2020 - 06:18:04 EST


"Singh, Balbir" <sblbir@xxxxxxxxxx> writes:
> On Fri, 2020-04-17 at 16:41 +0200, Thomas Gleixner wrote:
>> Balbir Singh <sblbir@xxxxxxxxxx> writes:
>> static void *l1d_flush_pages;
>> static DEFINE_MUTEX(l1d_flush_mutex);
>>
>> int l1d_flush_init(void)
>> {
>> int ret;
>>
>> if (static_cpu_has(X86_FEATURE_FLUSH_L1D) || l1d_flush_pages)
>> return 0;
>>
>> mutex_lock(&l1d_flush_mutex);
>> if (!l1d_flush_pages)
>> l1d_flush_pages = l1d_flush_alloc_pages();
>> ret = l1d_flush_pages ? 0 : -ENOMEM;
>> mutex_unlock(&l1d_flush_mutex);
>> return ret;
>> }
>> EXPORT_SYMBOL_GPL(l1d_flush_init);
>>
>> which removes the export of l1d_flush_alloc_pages() and gets rid of the
>> cleanup counterpart. In a real-world deployment, unloading VMX once it has
>> been used is unlikely, and with the task-based flush you end up with these
>> pages 'leaked' anyway once it has been used.
>>
> I don't want the patches to enforce that one cannot unload the kvm module,
> but I can refactor those bits a bit more

Not freeing the l1d flush pages does not prevent unloading the kvm
module; it just keeps them around. The same applies to your L1D flush
for tasks: if one task uses it, the pages stay around until the system
reboots.
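
For reference, l1d_flush_alloc_pages() itself does not need to be more
than an order-4 allocation plus a populate loop. A sketch only; the
L1D_CACHE_ORDER value and the per-page pattern are assumptions modelled
on the existing VMX flush setup, not something from this thread:

/* needs <linux/gfp.h>, <linux/mm.h>, <linux/string.h> */
#define L1D_CACHE_ORDER 4	/* 16 pages, enough to displace a 32K L1D */

static void *l1d_flush_alloc_pages(void)
{
	struct page *page;
	void *flush_pages;
	int i;

	page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER);
	if (!page)
		return NULL;
	flush_pages = page_address(page);

	/*
	 * Populate each page with a distinct pattern so the buffer is
	 * backed by real memory and cannot be merged away.
	 */
	for (i = 0; i < 1 << L1D_CACHE_ORDER; i++)
		memset(flush_pages + i * PAGE_SIZE, i + 1, PAGE_SIZE);

	return flush_pages;
}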

>> If any other architecture enables this, then it will have _ALL_ of this
>> code duplicated. So we should rather have:
>
> But that is a bit prescriptive, forcing archs to implement their L1D
> flushing using TIF flags; archs should be free to use bits in their
> mm_struct if they prefer.

>> - All architectures have to use TIF_SPEC_FLUSH_L1D if they want to
>> support the prctl.
>>
>
> That is a concern (see above); should we enforce this?

Fair enough, but it's trivial to have:

static inline void arch_task_l1d_flush_update(bool enable)
static inline bool arch_task_l1d_flush_state(void)

and the rest of the logic is just identical.
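
For x86 those hooks could be nothing more than thread flag wrappers; a
sketch only, using the TIF_SPEC_FLUSH_L1D flag mentioned above:

static inline void arch_task_l1d_flush_update(bool enable)
{
	if (enable)
		set_thread_flag(TIF_SPEC_FLUSH_L1D);
	else
		clear_thread_flag(TIF_SPEC_FLUSH_L1D);
}

static inline bool arch_task_l1d_flush_state(void)
{
	return test_thread_flag(TIF_SPEC_FLUSH_L1D);
}

The generic prctl code only talks to these two hooks, so an architecture
which wants to keep the state in mm_struct instead of a TIF flag can do
so behind the same interface.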

Thanks,

tglx