+ *
+ * This computes explicit_access && (rflags & X86_EFLAGS_AC), leaving
Too many &&s; the logic below uses a bitwise &, not a logical &&.
* the result in X86_EFLAGS_AC. We then insert it in place of
* the PFERR_RSVD_MASK bit; this bit will always be zero in pfec,
* but it will be one in index if SMAP checks are being overridden.
* It is important to keep this branchless.
Heh, so important that it incurs multiple branches and possible VMREADs in
vmx_get_cpl() and vmx_get_rflags(). And before static_call, multiple retpolines
to boot. Probably a net win now, as only the first permission_fault() check for
a given VM-Exit will be penalized, but the comment is amusing nonetheless.
*/
- unsigned long not_smap = (cpl - 3) & (rflags & X86_EFLAGS_AC);
+ u32 not_smap = (rflags & X86_EFLAGS_AC) & vcpu->arch.explicit_access;
I really, really dislike shoving this into vcpu->arch. I'd much prefer to make
this a property of the access, even if that means adding another param or doing
something gross with @access (@pfec here).
int index = (pfec >> 1) +
(not_smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
bool fault = (mmu->permissions[index] >> pte_access) & 1;