On Thu, Jun 19, 2025 at 06:27:52PM +0200, Benjamin Gaignard wrote:
> On 19/06/2025 at 15:47, Jason Gunthorpe wrote:
> > Ugh. This ignores the gfp flags that are passed into map because you
> > have to force atomic due to the spinlock that shouldn't be there :(
> > This means it does not set GFP_KERNEL_ACCOUNT when required. It would
> > be better to continue to use the passed in GFP flags but override them
> > to atomic mode.
>
> I will add a gfp_t parameter and use it like that:
>
> page_table = iommu_alloc_pages_sz(gfp | GFP_ATOMIC | GFP_DMA32, SPAGE_SIZE);

This will OR together GFP_ATOMIC and GFP_KERNEL, I don't think that
works..
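Something like this sketch would keep the caller's accounting bits
(e.g. __GFP_ACCOUNT from GFP_KERNEL_ACCOUNT) while still forcing
atomic mode; the exact mask is an assumption, not the final patch:

	/* Strip the sleepable bits from the caller's flags instead of
	 * OR-ing GFP_KERNEL together with GFP_ATOMIC: */
	gfp = (gfp & ~(__GFP_DIRECT_RECLAIM | __GFP_IO | __GFP_FS)) | GFP_ATOMIC;
	page_table = iommu_alloc_pages_sz(gfp | GFP_DMA32, SPAGE_SIZE);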
> > You'll lose invalidations..
>
> I will remove the switch to identity domain and it will work fine.
>
> +static int vsi_iommu_attach_device(struct iommu_domain *domain,
> +				    struct device *dev)
> +{
> +	struct vsi_iommu *iommu = dev_iommu_priv_get(dev);
> +	struct vsi_iommu_domain *vsi_domain = to_vsi_domain(domain);
> +	unsigned long flags;
> +	int ret;
> +
> +	if (WARN_ON(!iommu))
> +		return -ENODEV;
> +
> +	/* iommu already attached */
> +	if (iommu->domain == domain)
> +		return 0;
> +
> +	ret = vsi_iommu_identity_attach(&vsi_identity_domain, dev);
> +	if (ret)
> +		return ret;

Hurm, this is actually quite bad, now that it is clear the HW is in an
identity mode it is actually a security problem for VFIO to switch the
translation to identity during attach_device. I'd really prefer new
drivers don't make this mistake.
It seems the main thing motivating this is the fact that the linked
list has only a single iommu->node, so you can't attach the iommu to
both the new and old domains and atomically update the page table
base.
Is it possible for the HW to do a blocking behavior? That would be an
easy fix.. You should always be able to force this by allocating a
shared top page table level at probe time and keeping it entirely
empty, while always staying in paging mode. Maybe there is a less
expensive way.
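A minimal sketch of that probe-time setup (the blocking_table field
and surrounding error handling are assumptions):

	/* Allocated once at probe and never populated, so any DMA
	 * walked through it faults: blocking semantics while the HW
	 * stays in paging mode. */
	iommu->blocking_table = iommu_alloc_pages_sz(GFP_KERNEL | GFP_DMA32,
						     SPAGE_SIZE);
	if (!iommu->blocking_table)
		return -ENOMEM;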
Otherwise you probably have to work more like the other drivers and
allocate a struct for each attachment, so you can have the iommu
attached to two domains during the switch-over and never drop to an
identity mode.
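For example, roughly this shape (all names here are hypothetical):

	/* One allocation per iommu<->domain attachment, so an iommu
	 * can sit on both the old and the new domain's list during
	 * the switch-over: */
	struct vsi_iommu_attachment {
		struct list_head node;	/* on vsi_iommu_domain->attachments */
		struct vsi_iommu *iommu;
	};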
Maybe the easiest thing is to squish vsi_iommu_enable() and reorganize
it so that the spinlock is held across the register programming; then
you can, atomically under the lock, change the registers and change
the linked list. The register write cannot fail, so this is fine.

This should probably also flush the iotlb inside the lock.
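Roughly like this (the register-write and flush helper names are made
up, just to show the shape):

	spin_lock_irqsave(&iommu->lock, flags);
	/* Register writes cannot fail, so no unwind path is needed. */
	vsi_iommu_write_dte_addr(iommu, vsi_domain->dt_dma);
	vsi_iommu_flush_tlb(iommu);
	/* Move to the new domain's list under the same lock: */
	list_move(&iommu->node, &vsi_domain->iommus);
	iommu->domain = domain;
	spin_unlock_irqrestore(&iommu->lock, flags);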
> > Which will cause the core code to automatically run through to
> > vsi_iommu_disable() prior to calling vsi_iommu_release_device(),
> > which will avoid UAF problems.
> >
> > Also, should the probe functions be doing some kind of validation
> > that there is only one struct device attached?
>
> Which kind of validation?

That only one device is probed to the iommu?
Jason