On Mon, 2013-02-04 at 11:49 +0800, Mike Qiu wrote:
> > On Tue, 2013-01-15 at 15:38 +0800, Mike Qiu wrote:
> > > Hi Michael,
> > >
> > > Currently, the multiple MSI feature hasn't been enabled in pSeries.
> > > These patches try to enable this feature.
> > >
> > > These patches have been tested using the ipr driver, and the driver
> > > patch was made by Wen Xiong <wenxiong@xxxxxxxxxxxxxxxxxx>.
> >
> > Hi Mike,
> >
> > So who wrote these patches? Normally we would expect the original
> > author to post the patches if at all possible.
>
> These multiple MSI patches were written by myself; as you know, this
> feature has not been enabled yet, and it needs a device driver to test
> whether it works properly. So I tested my patches using Wen Xiong's
> ipr patches, which have been sent out to the mailing list.
> I'm the original author :)

Ah OK, sorry, that was more or less clear from your mail but I just
misunderstood.
> > I would like to see the full series, including the driver enablement.
>
> OK, you mean this series?
>
>   [PATCH 0/7] Add support for new IBM SAS controllers
>   http://thread.gmane.org/gmane.linux.scsi/79639

Yes, exactly.

> The driver patches there were written by Wen Xiong and have already
> been sent out; I just modified the driver to support multiple MSI.
> I just used her patches to test my patches. Any device that supports
> multiple MSI can use my feature, not only the IBM SAS controllers.
> Not all devices, though: only those whose hardware supports multiple
> MSI, since multi-MSI is an extension of MSI and needs hardware
> support. I also tested my patches with the Broadcom tg3 network card,
> and that works OK as well.

You mean drivers/net/ethernet/broadcom/tg3.c ? I don't see where it
calls pci_enable_msi_block() ?

All devices /can/ use it, but the driver needs to be updated. Currently
we have two drivers that do so (in Linus' tree), plus the updated IPR.
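For reference, the way a driver on a ~3.8 kernel asks for a block of
MSIs is pci_enable_msi_block(). A minimal, untested sketch of the usual
pattern follows; MY_DRV_NVEC and my_drv_setup_msi() are made-up names
for illustration:

#include <linux/pci.h>

#define MY_DRV_NVEC 2	/* multi-MSI blocks are power-of-two sized */

static int my_drv_setup_msi(struct pci_dev *pdev)
{
	int rc, nvec = MY_DRV_NVEC;

	/*
	 * A positive return says how many MSIs the device could
	 * support, so retry with that; a negative return means
	 * multi-MSI is not available at all.
	 */
	while ((rc = pci_enable_msi_block(pdev, nvec)) > 0)
		nvec = rc;

	if (rc < 0)
		return rc;	/* caller falls back to single MSI or INTx */

	/* success: nvec consecutive vectors at pdev->irq .. pdev->irq + nvec - 1 */
	return nvec;
}

That contiguous, power-of-two block of vectors is exactly the
constraint complained about at the end of this mail.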
> > > Test platform: one partition of pSeries with one cpu core (4 SMTs),
> > > and a RAID bus controller: IBM PCI-E IPR SAS Adapter (ASIC) on POWER7.
> > > OS version: SUSE Linux Enterprise Server 11 SP2 (ppc64) with a
> > > 3.8-rc3 kernel.
> > >
> > > IRQs 21 and 22 are assigned to the ipr device, which supports 2 MSIs.
> > > The test results are shown by 'cat /proc/interrupts':
> > >
> > >           CPU0      CPU1      CPU2      CPU3
> > >  21:         6         5         5         5   XICS   Level   host1-0
> > >  22:       817       814       816       813   XICS   Level   host1-1
> >
> > This shows that you are correctly configuring two MSIs.
> >
> > But the key advantage of using multiple interrupts is to distribute
> > load across CPUs and improve performance. So I would like to see some
> > performance numbers that show that there is a real benefit for all the
> > extra complexity in the code.
>
> Yes, the system just supports two MSIs. Anyway, I will try to do some
> performance tests, to show the real benefit.

Yeah, that would be good.
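The two MSIs land on consecutive Linux IRQs (21 and 22 above) because
multi-MSI hands the driver one contiguous block starting at pdev->irq.
A driver wires up per-vector handlers roughly as below; this is an
untested sketch, and my_isr(), my_drv_request_irqs() and the "host1-N"
name strings are illustrative only:

#include <linux/pci.h>
#include <linux/interrupt.h>

static irqreturn_t my_isr(int irq, void *data)
{
	/* per-vector work goes here, e.g. one MSI per response queue */
	return IRQ_HANDLED;
}

static int my_drv_request_irqs(struct pci_dev *pdev, void *ctx)
{
	static const char * const names[] = { "host1-0", "host1-1" };
	int i, rc;

	for (i = 0; i < 2; i++) {
		/* multi-MSI vectors are consecutive, starting at pdev->irq */
		rc = request_irq(pdev->irq + i, my_isr, 0, names[i], ctx);
		if (rc)
			goto err;
	}
	return 0;

err:
	while (--i >= 0)
		free_irq(pdev->irq + i, ctx);
	return rc;
}

Whether the load then actually spreads across the vectors is up to the
device and driver, which is presumably why almost everything in the
table above lands on IRQ 22.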
> But actually it needs the driver to do so. As the data above show,
> there seems to be a problem in how the interrupts are used: IRQ 21 is
> used very little, while most interrupts go to IRQ 22. I will discuss
> this with the driver author to find out why, and once she has fixed it
> I will post the performance results.
I really dislike that we have a separate API for multi-MSI vs MSI-X, and
pci_enable_msi_block() also pushes the contiguous power-of-2 allocation
into the irq domain layer, which is unpleasant. So if we really must do
multi-MSI I would like to do it differently.
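For comparison, the MSI-X half of the same-era API has no contiguity or
power-of-two requirement: each entry in the device's MSI-X table gets an
independently allocated vector. Again an untested sketch, with
my_drv_setup_msix() a made-up name:

#include <linux/pci.h>

static int my_drv_setup_msix(struct pci_dev *pdev)
{
	struct msix_entry entries[2];
	int i, rc;

	for (i = 0; i < 2; i++)
		entries[i].entry = i;	/* index into the device's MSI-X table */

	rc = pci_enable_msix(pdev, entries, 2);
	if (rc)
		return rc < 0 ? rc : -ENOSPC;	/* rc > 0: only rc vectors free */

	/* entries[i].vector now holds the Linux IRQ for each vector */
	return 0;
}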
cheers