Re: [PATCH V2] x86/tboot: add an option to disable iommu force on

From: Shaohua Li
Date: Thu Apr 27 2017 - 11:41:58 EST


On Thu, Apr 27, 2017 at 05:18:55PM +0200, Joerg Roedel wrote:
> On Thu, Apr 27, 2017 at 07:49:02AM -0700, Shaohua Li wrote:
> > This is exactly the usage for us. And please note, not everybody should
> > sacrifice the DMA security. It is only required when the pcie device hits iommu
hardware limitation. In our environment, normal network workloads (as high as
> > 60k pps) are completely ok with iommu enabled. Only the XDP workload, which can
> > do around 200k pps, is suffering from the problem. So completely forcing iommu
> > off for some workloads without the performance issue isn't good because of the
> > DMA security.
>
> How big are the packets in your XDP workload? I also run pps tests for
> performance measurement on older desktop-class hardware
> (Xeon E5-1620 v2 and AMD FX 6100) and 10GBit network
> hardware, and easily get over the 200k pps mark with IOMMU enabled. The
> Intel system can receive >900k pps and the AMD system is still at
> ~240k pps.
>
> But my tests only send IPv4/UDP packets with 8 bytes of payload, so that
> is probably different from your setup.

Sorry, I wrote the wrong data. With the IOMMU the rate is about 6M pps, and
without it we can get around 20M pps. XDP is much faster than normal network
workloads. The test uses 64-byte packets. We tried other sizes on the machine
(not 8 bytes though), but the pps doesn't change significantly. Across the
different packet sizes, the peak is around 7M pps with the IOMMU, at which
point the NIC starts dropping packets. CPU utilization is very low, as I said
before. Without the IOMMU, the peak is around 22M pps.
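
To be clear, the idea is that only the machines hitting this limit would
opt out; everything else keeps the IOMMU forced on under tboot. Roughly,
the usage on the affected hosts would look like this (assuming the
intel_iommu=tboot_noforce spelling from the patch):

  # kernel command line on the XDP machines only: tboot still runs,
  # but the Intel IOMMU is no longer forced on, trading DMA protection
  # for the extra packet rate
  intel_iommu=tboot_noforce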

Thanks,
Shaohua