Re: [PATCH v3 3/5] iommu: Add verisilicon IOMMU driver

From: Benjamin Gaignard
Date: Fri Jun 20 2025 - 04:58:31 EST



On 19/06/2025 at 18:59, Jason Gunthorpe wrote:
On Thu, Jun 19, 2025 at 06:27:52PM +0200, Benjamin Gaignard wrote:
On 19/06/2025 at 15:47, Jason Gunthorpe wrote:
Ugh. This ignores the gfp flags that are passed into map because you
have to force atomic due to the spinlock that shouldn't be there :(
This means it does not set GFP_KERNEL_ACCOUNT when required. It would
be better to continue to use the passed in GFP flags but override them
to atomic mode.
I will add a gfp_t parameter and use it like this:
page_table = iommu_alloc_pages_sz(gfp | GFP_ATOMIC | GFP_DMA32, SPAGE_SIZE);
This will OR together GFP_ATOMIC and GFP_KERNEL, and I don't think that
works...

I have tested it and I don't see conflicts or errors. What worries you?
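
For reference, a sketch of what "override them to atomic mode" could look
like while still honoring the caller's flags (the exact mask below is an
assumption, not something taken from this patch):

	/*
	 * GFP_KERNEL | GFP_ATOMIC does not clear __GFP_DIRECT_RECLAIM, so
	 * the allocation could still sleep under the spinlock. Strip the
	 * sleeping bits instead and add what GFP_ATOMIC provides; this
	 * keeps __GFP_ACCOUNT when the core passes GFP_KERNEL_ACCOUNT.
	 */
	gfp = (gfp & ~(__GFP_DIRECT_RECLAIM | __GFP_IO | __GFP_FS)) |
	      GFP_ATOMIC | GFP_DMA32;
	page_table = iommu_alloc_pages_sz(gfp, SPAGE_SIZE);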


+static int vsi_iommu_attach_device(struct iommu_domain *domain,
+				   struct device *dev)
+{
+	struct vsi_iommu *iommu = dev_iommu_priv_get(dev);
+	struct vsi_iommu_domain *vsi_domain = to_vsi_domain(domain);
+	unsigned long flags;
+	int ret;
+
+	if (WARN_ON(!iommu))
+		return -ENODEV;
+
+	/* iommu already attached */
+	if (iommu->domain == domain)
+		return 0;
+
+	ret = vsi_iommu_identity_attach(&vsi_identity_domain, dev);
+	if (ret)
+		return ret;
Hurm, this is actually quite bad. Now that it is clear the HW is in an
identity mode, it is actually a security problem for VFIO to switch the
translation to identity during attach_device. I'd really prefer new
drivers don't make this mistake.

It seems the main thing motivating this is the fact that the iommu has
only a single iommu->node list entry, so you can't attach the iommu to
both the new and old domain and atomically update the page table base.

Is it possible for the HW to do a blocking behavior? That would be an
easy fix. You should always be able to force this by allocating a shared
top page table level at probe time and keeping it entirely empty while
always staying in paging mode. Maybe there is a less expensive way.
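
A sketch of that "always in paging mode" idea; vsi_iommu_set_dt_base(),
vsi_iommu_flush_tlb() and iommu->blank_dt are invented names, and the
power-state handling is omitted, only the shape is the point:

	/*
	 * iommu->blank_dt would be a single, permanently empty directory
	 * table allocated at probe time, e.g. with
	 * iommu_alloc_pages_sz(GFP_KERNEL | GFP_DMA32, SPAGE_SIZE).
	 */
	static int vsi_iommu_blocking_attach(struct iommu_domain *domain,
					     struct device *dev)
	{
		struct vsi_iommu *iommu = dev_iommu_priv_get(dev);

		/* stay in paging mode; every walk misses, so all DMA faults */
		vsi_iommu_set_dt_base(iommu, virt_to_phys(iommu->blank_dt));
		vsi_iommu_flush_tlb(iommu);
		return 0;
	}

	static const struct iommu_domain_ops vsi_blocking_ops = {
		.attach_dev = vsi_iommu_blocking_attach,
	};

	static struct iommu_domain vsi_blocking_domain = {
		.type = IOMMU_DOMAIN_BLOCKED,
		.ops = &vsi_blocking_ops,
	};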

Otherwise you probably have to work more like the other drivers and
allocate a struct for each attachment, so you can have the iommu
attached to two domains during the switch-over and never drop to an
identity mode.
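
For the per-attachment alternative, something along these lines (the
struct and field names are invented):

	/* one of these per (iommu, domain) pair instead of a single iommu->node */
	struct vsi_iommu_attachment {
		struct list_head node;		/* on vsi_iommu_domain->attachments */
		struct vsi_iommu *iommu;
	};

The iommu can then be on both the old and the new domain's list while the
page-table base is switched, so it never has to pass through identity mode.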
I will remove the switch to the identity domain and it will work fine.
You'll lose invalidations...

Maybe the easiest thing is to squish vsi_iommu_enable() in and reorganize
it so that the spinlock is held across the register programming; then you
can atomically, under the lock, change the registers and change the
linked list. The register write cannot fail, so this is fine.

This should probably also flush the iotlb inside the lock.

I will try to summarize.

In vsi_iommu_attach_device() I should:
- take the lock
- do nothing if the domain is the same
- if the iommu is in use (i.e. powered up):
  - update the registers to enable the iommu
  - flush the iotlb
- update the linked list
- update iommu->domain
- release the lock

In vsi_iommu_identity_attach() I should:
- take the lock
- do nothing if the domain is the same
- if the iommu is in use (i.e. powered up):
  - disable the iommu
- remove the node from the list
- update iommu->domain
- release the lock

vsi_iommu_suspend() and vsi_iommu_resume() will also have to take the lock
before calling vsi_iommu_disable() and vsi_iommu_enable(). A rough sketch
of the attach path is below.
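
The sketch, assuming a per-iommu spinlock and runtime PM; iommu->lock,
iommu->dev, vsi_domain->iommus, vsi_iommu_flush_tlb() and the
vsi_iommu_enable() arguments are assumptions, not copied from the patch:

	static int vsi_iommu_attach_device(struct iommu_domain *domain,
					   struct device *dev)
	{
		struct vsi_iommu *iommu = dev_iommu_priv_get(dev);
		struct vsi_iommu_domain *vsi_domain = to_vsi_domain(domain);
		unsigned long flags;

		if (WARN_ON(!iommu))
			return -ENODEV;

		spin_lock_irqsave(&iommu->lock, flags);
		if (iommu->domain != domain) {
			/* touch the HW only while it is powered up */
			if (pm_runtime_get_if_active(iommu->dev) > 0) {
				vsi_iommu_enable(iommu, vsi_domain);	/* program DT base, enable */
				vsi_iommu_flush_tlb(iommu);		/* flush inside the lock */
				pm_runtime_put(iommu->dev);
			}
			/* move off any previous domain's list, onto the new one */
			list_move_tail(&iommu->node, &vsi_domain->iommus);
			iommu->domain = domain;
		}
		spin_unlock_irqrestore(&iommu->lock, flags);

		return 0;
	}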

Does that sound good to you?

Do I have to switch to the identity domain in vsi_iommu_attach_device()
before applying the requested domain?

Regards,
Benjamin


That will cause the core code to automatically run through to
vsi_iommu_disable() prior to calling vsi_iommu_release_device(), which
will avoid UAF problems.

Also, should the probe functions be doing some kind of validation that
there is only one struct device attached?
Which kind of validation?
That only one device is probed to the iommu?
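
A minimal form of that check could look like this; the master and iommu
fields, and using dev_iommu_priv_get() at probe time, are assumptions
about the driver:

	static struct iommu_device *vsi_iommu_probe_device(struct device *dev)
	{
		struct vsi_iommu *iommu = dev_iommu_priv_get(dev);

		if (!iommu)
			return ERR_PTR(-ENODEV);

		/* this IOMMU instance serves exactly one master device */
		if (iommu->master && iommu->master != dev)
			return ERR_PTR(-EBUSY);
		iommu->master = dev;

		return &iommu->iommu;
	}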

Jason