Re: [PATCHv8 00/11] Linear Address Masking enabling

From: Jacob Pan
Date: Mon Sep 12 2022 - 20:08:03 EST


Hi Kirill,

On Tue, 13 Sep 2022 01:49:30 +0300, "Kirill A. Shutemov"
<kirill.shutemov@xxxxxxxxxxxxxxx> wrote:

> On Sun, Sep 04, 2022 at 03:39:52AM +0300, Kirill A. Shutemov wrote:
> > On Thu, Sep 01, 2022 at 05:45:08PM +0000, Ashok Raj wrote:
> > > Hi Kirill,
> > >
> > > On Tue, Aug 30, 2022 at 04:00:53AM +0300, Kirill A. Shutemov wrote:
> > > > Linear Address Masking[1] (LAM) modifies the checking that is
> > > > applied to 64-bit linear addresses, allowing software to use the
> > > > untranslated address bits for metadata.
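For context on what "metadata in the untranslated bits" means in
practice, here is a minimal, purely illustrative userspace sketch. It
assumes LAM_U57-style masking, where hardware ignores bits 62:57 of a
user pointer on dereference; the helper names are made up:

#include <stdint.h>

#define LAM_U57_TAG_SHIFT	57
#define LAM_U57_TAG_MASK	(0x3fULL << LAM_U57_TAG_SHIFT)

/* Stash a 6-bit tag in bits 62:57; with LAM_U57 enabled the tagged
 * pointer can still be dereferenced directly. */
static inline void *tag_pointer(void *p, uint64_t tag)
{
	uint64_t addr = (uint64_t)p & ~LAM_U57_TAG_MASK;

	return (void *)(addr | ((tag << LAM_U57_TAG_SHIFT) & LAM_U57_TAG_MASK));
}

/* Strip the tag again, e.g. before handing the pointer to anything
 * that performs canonicality checks. */
static inline void *untag_pointer(void *p)
{
	return (void *)((uint64_t)p & ~LAM_U57_TAG_MASK);
}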
> > >
> > > We discussed this internally, but it didn't bubble up here.
> > >
> > > We are working on enabling Shared Virtual Addressing (SVA) within
> > > the IOMMU. This permits userspace to share a VA directly with the
> > > device, and the device can even participate in servicing page
> > > faults.
> > >
> > > The IOMMU enforces canonical addressing; since we are hijacking the
> > > top-order bits for metadata, the sanity check will fail and we would
> > > return a failure to the device on any page fault it reports.
> > >
> > > It also complicates how the device TLB and ATS work, and would need
> > > major improvements to detect whether a device can accept tagged
> > > pointers and to adjust the devTLB accordingly.
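To make the failure mode concrete: the check that trips is essentially
a canonicality test. A simplified sketch (not the actual IOMMU driver
code; the 57-bit width is just an example) -- a LAM-tagged pointer
carries metadata in bits 62:57, so it fails this test and the device's
page-fault request is rejected:

#include <stdbool.h>
#include <stdint.h>

/* Canonical for a 57-bit VA: bits 63:57 must be a sign extension of
 * bit 56.  Shift the address left by 64 - 57 = 7 bits and
 * arithmetic-shift it back; a canonical address survives the round
 * trip unchanged. */
static bool addr_is_canonical_57(uint64_t addr)
{
	return ((int64_t)(addr << 7) >> 7) == (int64_t)addr;
}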
> > >
> > >
> > > The two features are orthogonal, but their intersection is
> > > fundamentally incompatible.
> > >
> > > This is even more important since an application might be using SVA
> > > under the covers, via some library, without even knowing it.
> > >
> > > The path would be:
> > >
> > > 1. Make LAM and SVA mutually exclusive by design, without major
> > > changes.
> > >    - If LAM is already enabled and SVA enabling is requested later
> > >      by the user, that should fail, and vice versa.
> > >    - Provide an API for the user to opt out. Then they know they
> > >      must sanitize pointers before sending them to the device, or
> > >      that the working set is already isolated and needs no work.
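As a sketch of that opt-out contract: all of the calls below are
placeholders returning negative errno; none of them exist today, and
what this API should actually be is exactly what the thread is trying
to settle.

#include <errno.h>

extern int enable_lam_u57(void);	/* placeholder: LAM enabling */
extern int sva_bind_device(int fd);	/* placeholder: SVA bind     */
extern int lam_sva_opt_out(void);	/* placeholder: opt-out API  */

static int example(int device_fd)
{
	int ret;

	ret = enable_lam_u57();
	if (ret)
		return ret;

	/* LAM is already on, so the SVA bind is expected to fail... */
	ret = sva_bind_device(device_fd);
	if (ret != -EBUSY)
		return ret;

	/* ...unless the application opts out and promises to untag every
	 * pointer it shares with the device (or never shares tagged
	 * pointers at all). */
	ret = lam_sva_opt_out();
	if (ret)
		return ret;

	return sva_bind_device(device_fd);
}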
> >
> > The patch below implements something like this. It is a PoC, build-tested
> > only.
> >
> > To be honest, I hate it. It is clearly a layering violation. It feels
> > dirty. But I don't see any better way as we tie orthogonal features
> > together.
> >
> > Also, I have no idea how to do forced PASID allocation if LAM is already
> > enabled. What should the API look like?
>
> Jacob, Ashok, any comment on this part?
>
> I expect that in many cases LAM will be enabled very early in process
> start (like before malloc is even functional), and that makes PASID
> allocation always fail.
>
Is there a generic flag LAM can set on the mm?

We can't check an x86-specific feature in the generic IOMMU SVA API, i.e.:

@@ -32,6 +33,15 @@ int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t max)
 		return -EINVAL;
 
 	mutex_lock(&iommu_sva_lock);
+
+	/* Serialize against LAM enabling */
+	mutex_lock(&mm->context.lock);
+
+	if (mm_lam_cr3_mask(mm)) {
+		ret = -EBUSY;
+		goto out;
+	}
+

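For illustration only, here is the same hunk with a made-up generic
flag (MMF_INCOMPATIBLE_SVA does not exist; the idea is that the arch
code would set a generic mm flag when LAM is enabled, and the SVA path
would test that instead of poking at mm->context):

 	mutex_lock(&iommu_sva_lock);
+
+	/* Arch code sets this generic bit when LAM (or similar) is enabled */
+	if (test_bit(MMF_INCOMPATIBLE_SVA, &mm->flags)) {
+		ret = -EBUSY;
+		goto out;
+	}
+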
> Any way out?
>


Thanks,

Jacob