Re: [PATCH v15] x86/split_lock: Enable split lock detection by kernel

From: Luck, Tony
Date: Sat Jan 25 2020 - 14:55:21 EST


On Sat, Jan 25, 2020 at 11:44:19AM +0100, Borislav Petkov wrote:

Boris,

Thanks for the review. All comments accepted and changes made, except as
listed below. I'll also fix up some other checkpatch fluff.

-Tony


> > +#define X86_FEATURE_SPLIT_LOCK_DETECT (11*32+ 6) /* #AC for split lock */
>
> Do you really want to have "split_lock_detect" in /proc/cpuinfo or
> rather something shorter?

I don't have a good abbreviation. It would become the joint second-longest
flag name ... the longest flag names on my test machine are listed below.
So while long, it's not unprecedented.

18 tsc_deadline_timer
17 split_lock_detect
17 arch_capabilities
16 avx512_vpopcntdq
14 tsc_known_freq
14 invpcid_single
14 hwp_act_window
13 ibrs_enhanced
13 cqm_occup_llc
13 cqm_mbm_total
13 cqm_mbm_local
13 avx512_bitalg
13 3dnowprefetch


> > +static inline bool match_option(const char *arg, int arglen, const char *opt)
> > +{
> > + int len = strlen(opt);
> > +
> > + return len == arglen && !strncmp(arg, opt, len);
> > +}
>
> There's the same function in arch/x86/kernel/cpu/bugs.c. Why are you
> duplicating it here?
>
> Yeah, this whole chunk looks like it has been "influenced" by the sec
> mitigations in bugs.c :-)

Blame PeterZ for that. For now I'd like to keep the duplicate inline
function, then clean up later by moving it into a shared header file (and
maybe hunting down other places where it could be used).
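
For context, the only caller is the parsing of the new
"split_lock_detect=" command line option. A rough sketch of how
match_option() gets used there (the option names are from the patch;
the exact loop/table structure may differ):

	char arg[20];
	int ret;

	ret = cmdline_find_option(boot_command_line, "split_lock_detect",
				  arg, sizeof(arg));
	if (ret >= 0) {
		if (match_option(arg, ret, "off"))
			state = sld_off;
		else if (match_option(arg, ret, "warn"))
			state = sld_warn;
		else if (match_option(arg, ret, "fatal"))
			state = sld_fatal;
	}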

> > + /*
> > + * If this is anything other than the boot-cpu, you've done
> > + * funny things and you get to keep whatever pieces.
> > + */
> > + pr_warn("MSR fail -- disabled\n");
>
> What's that for? Guests?

Also some PeterZ code. As the comment implies, we really shouldn't be able
to get here: this whole function should only be called on CPU models that
support the MSR. But PeterZ is defending against special SKUs that share a
model number yet lack the MSR (since we may be here because of an
x86_match_cpu() hit rather than the architectural enumeration check).
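
The defense is cheap: probe the MSR with the "safe" accessors so a bogus
SKU (or a confused guest) faults gracefully instead of oopsing. A minimal
sketch, assuming MSR_TEST_CTRL:

	u64 test_ctrl_val;

	/*
	 * On a CPU that matched by model but lacks the MSR, this access
	 * faults and rdmsrl_safe() returns nonzero instead of oopsing.
	 */
	if (rdmsrl_safe(MSR_TEST_CTRL, &test_ctrl_val)) {
		pr_warn("MSR fail -- disabled\n");
		return;
	}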

> > +void switch_to_sld(struct task_struct *prev, struct task_struct *next)
>
> This will get called on other vendors but let's just assume, for
> simplicity's sake, TIF_SLD won't be set there so it is only a couple of
> insns on a task switch going to waste.

Thomas explained how to fix it so we only call the function when TIF_SLD
is set in either the previous or the next task (but not both). So the
overhead is just an extra XOR/AND in the caller.
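
Something like this in the context switch path (a sketch of the idea,
not the exact final code):

	unsigned long tifp = task_thread_info(prev)->flags;
	unsigned long tifn = task_thread_info(next)->flags;

	/* XOR yields the flag bits that differ; AND masks to TIF_SLD. */
	if ((tifp ^ tifn) & _TIF_SLD)
		switch_to_sld(prev, next);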

-Tony