Re: [PATCHv8 00/11] Linear Address Masking enabling

From: Ashok Raj
Date: Fri Sep 09 2022 - 12:08:06 EST


On Sun, Sep 04, 2022 at 03:39:52AM +0300, Kirill A. Shutemov wrote:
> On Thu, Sep 01, 2022 at 05:45:08PM +0000, Ashok Raj wrote:
> > Hi Kirill,
> >
> > On Tue, Aug 30, 2022 at 04:00:53AM +0300, Kirill A. Shutemov wrote:
> > > Linear Address Masking[1] (LAM) modifies the checking that is applied to
> > > 64-bit linear addresses, allowing software to use the untranslated
> > > address bits for metadata.
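
For context: with LAM_U57 the CPU ignores bits 62:57 of a user pointer on
dereference, so software can keep a 6-bit tag there and strip it before
handing the pointer to anything that still expects a fully canonical
address. A minimal userspace illustration (the helper names and the U57
layout assumption are mine, not from the series):

/* Illustrative only: pack/strip a 6-bit tag in bits 62:57 (LAM_U57). */
#include <stdint.h>

#define LAM_U57_TAG_SHIFT 57
#define LAM_U57_TAG_MASK  (0x3fULL << LAM_U57_TAG_SHIFT)

static inline void *tag_ptr(void *p, uint64_t tag)
{
        return (void *)(((uint64_t)p & ~LAM_U57_TAG_MASK) |
                        ((tag << LAM_U57_TAG_SHIFT) & LAM_U57_TAG_MASK));
}

static inline void *untag_ptr(void *p)
{
        return (void *)((uint64_t)p & ~LAM_U57_TAG_MASK);
}
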
> >
> > We discussed this internally, but it didn't bubble up here.
> >
> > We are working on enabling Shared Virtual Addressing (SVA) within the
> > IOMMU. SVA permits the user to share virtual addresses directly with the
> > device, and the device can even participate in handling page faults.
> >
> > The IOMMU enforces canonical addressing. Since we are hijacking the
> > top-order bits for metadata, tagged addresses fail that sanity check and
> > we return a failure to the device on any page fault it raises.
> >
> > It also complicates how the device TLB and ATS work: it would take major
> > work to detect whether a device is capable of accepting tagged pointers
> > and to adjust the devTLB accordingly.
> >
> >
> > The two are orthogonal features, but their intersection is fundamentally
> > incompatible.
> >
> > This is even more important since an application might be using SVA
> > under the covers, through some library, without even knowing it.
> >
> > The path would be:
> >
> > 1. Make LAM and SVA mutually exclusive by design, without major
> > changes.
> > - If LAM is already enabled and the user later requests SVA, that
> > should fail, and vice versa.
> > - Provide an API for the user to opt out. Then they know they must
> > sanitize pointers before sending them to the device, or the working
> > set is already isolated and needs no work.
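
To make the first option concrete, the userspace-visible behaviour could be
roughly the following (hypothetical flow; ARCH_ENABLE_TAGGED_ADDR and the
EBUSY return match the PoC further down, the 6-bit argument assumes
LAM_U57):

#include <errno.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef ARCH_ENABLE_TAGGED_ADDR
#define ARCH_ENABLE_TAGGED_ADDR 0x4002
#endif

int main(void)
{
        /*
         * Assume a library has already bound this process to a device
         * through SVA, i.e. a PASID is allocated for this mm.
         */

        /* Asking for LAM afterwards is then expected to fail. */
        if (syscall(SYS_arch_prctl, ARCH_ENABLE_TAGGED_ADDR, 6) < 0 &&
            errno == EBUSY)
                printf("LAM refused: SVA already enabled for this mm\n");

        return 0;
}
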
>
> The patch below implements something like this. It is a PoC, build-tested only.
>
> To be honest, I hate it. It is clearly a layering violation. It feels
> dirty. But I don't see any better way as we tie orthogonal features
> together.
>
> Also, I have no idea how to do forced PASID allocation if LAM is enabled.
> What would the API have to look like?
>
> Any comments?

Looking through it, it seems sane enough... I feel dirty too :-) but I
don't see a better way.

I'm Cc'ing JasonG, since we are reworking the IOMMU interfaces right now,
and Jacob, who is in the middle of some refactoring.

>
> > 2. I suppose for any syscalls that take tagged pointers you would relax
> > the check on how many bits to ignore for canonicality. This is required
> > so the user doesn't need to do the same sanitization before every
> > syscall.
>
> I don't quite follow this. For syscalls that allow tagged pointers, we do
> untagged_addr() now. Not sure what else is needed.
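
If I read the series right, the masking itself is basically just applying
the per-mm untag mask to user pointers. A simplified conceptual sketch (not
the exact macro from the patches):

/*
 * Simplified idea of untagged_addr() under LAM: addresses in the user half
 * get the per-mm untag mask applied, kernel addresses are left untouched.
 */
static inline unsigned long untag_address(struct mm_struct *mm,
                                          unsigned long addr)
{
        if ((long)addr >= 0)            /* user half of the address space */
                addr &= mm->context.untag_mask;
        return addr;
}
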
>
> > If you have it fail, the library might choose a less optimal path if one is
> > available.
> >
> > Cheers,
> > Ashok
>
> diff --git a/arch/x86/include/uapi/asm/prctl.h b/arch/x86/include/uapi/asm/prctl.h
> index a31e27b95b19..e5c04ced36c9 100644
> --- a/arch/x86/include/uapi/asm/prctl.h
> +++ b/arch/x86/include/uapi/asm/prctl.h
> @@ -23,5 +23,6 @@
> #define ARCH_GET_UNTAG_MASK 0x4001
> #define ARCH_ENABLE_TAGGED_ADDR 0x4002
> #define ARCH_GET_MAX_TAG_BITS 0x4003
> +#define ARCH_ENABLE_TAGGED_ADDR_FORCED 0x4004
>
> #endif /* _ASM_X86_PRCTL_H */
> diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
> index 337f80a0862f..7d89a2fd1a55 100644
> --- a/arch/x86/kernel/process_64.c
> +++ b/arch/x86/kernel/process_64.c
> @@ -774,7 +774,8 @@ static bool lam_u48_allowed(void)
> #define LAM_U48_BITS 15
> #define LAM_U57_BITS 6
>
> -static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
> +static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits,
> + bool forced)
> {
> int ret = 0;
>
> @@ -793,6 +794,11 @@ static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
> goto out;
> }
>
> + if (pasid_valid(mm->pasid) && !forced) {
> + ret = -EBUSY;
> + goto out;
> + }
> +
> if (!nr_bits) {
> ret = -EINVAL;
> goto out;
> @@ -910,7 +916,9 @@ long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2)
> return put_user(task->mm->context.untag_mask,
> (unsigned long __user *)arg2);
> case ARCH_ENABLE_TAGGED_ADDR:
> - return prctl_enable_tagged_addr(task->mm, arg2);
> + return prctl_enable_tagged_addr(task->mm, arg2, false);
> + case ARCH_ENABLE_TAGGED_ADDR_FORCED:
> + return prctl_enable_tagged_addr(task->mm, arg2, true);
> case ARCH_GET_MAX_TAG_BITS: {
> int nr_bits;
>
> diff --git a/drivers/iommu/iommu-sva-lib.c b/drivers/iommu/iommu-sva-lib.c
> index 106506143896..a6ec17de1937 100644
> --- a/drivers/iommu/iommu-sva-lib.c
> +++ b/drivers/iommu/iommu-sva-lib.c
> @@ -4,6 +4,7 @@
> */
> #include <linux/mutex.h>
> #include <linux/sched/mm.h>
> +#include <asm/mmu_context.h>
>
> #include "iommu-sva-lib.h"
>
> @@ -32,6 +33,15 @@ int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t max)
> return -EINVAL;
>
> mutex_lock(&iommu_sva_lock);
> +
> + /* Serialize against LAM enabling */
> + mutex_lock(&mm->context.lock);
> +
> + if (mm_lam_cr3_mask(mm)) {
> + ret = -EBUSY;
> + goto out;
> + }
> +
> /* Is a PASID already associated with this mm? */
> if (pasid_valid(mm->pasid)) {
> if (mm->pasid < min || mm->pasid >= max)
> @@ -45,6 +55,7 @@ int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t max)
> else
> mm_pasid_set(mm, pasid);
> out:
> + mutex_unlock(&mm->context.lock);
> mutex_unlock(&iommu_sva_lock);
> return ret;
> }
> --
> Kiryl Shutsemau / Kirill A. Shutemov
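
If I read the PoC right, the opt-out case would then look roughly like this
from userspace (hypothetical usage; the application, or a library acting on
its behalf, promises to sanitize pointers before handing them to the
device):

#include <sys/syscall.h>
#include <unistd.h>

#ifndef ARCH_ENABLE_TAGGED_ADDR_FORCED
#define ARCH_ENABLE_TAGGED_ADDR_FORCED 0x4004
#endif

int main(void)
{
        /*
         * The process already uses SVA (a PASID is bound to this mm), but
         * it knows it will strip tags from any pointer it shares with the
         * device, so it forces LAM on despite the existing PASID.
         */
        if (syscall(SYS_arch_prctl, ARCH_ENABLE_TAGGED_ADDR_FORCED, 6))
                return 1;

        /* From here on, tagged pointers are fine for CPU accesses. */
        return 0;
}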