Re: [tip:x86/apic] x86/x2apic/cluster: Use all the members of one cluster specified in the smp_affinity mask for the interrupt destination

From: Yinghai Lu
Date: Wed Jun 06 2012 - 18:21:44 EST


On Wed, Jun 6, 2012 at 8:04 AM, tip-bot for Suresh Siddha
<suresh.b.siddha@xxxxxxxxx> wrote:
> Commit-ID:  0b8255e660a0c229ebfe8f9fde12a8d4d34c50e0
> Gitweb:     http://git.kernel.org/tip/0b8255e660a0c229ebfe8f9fde12a8d4d34c50e0
> Author:     Suresh Siddha <suresh.b.siddha@xxxxxxxxx>
> AuthorDate: Mon, 21 May 2012 16:58:02 -0700
> Committer:  Ingo Molnar <mingo@xxxxxxxxxx>
> CommitDate: Wed, 6 Jun 2012 09:51:22 +0200
>
> x86/x2apic/cluster: Use all the members of one cluster specified in the smp_affinity mask for the interrupt destination
>
> If the HW implements round-robin interrupt delivery, this
> enables multiple cpu's (which are part of the user specified
> interrupt smp_affinity mask and belong to the same x2apic
> cluster) to service the interrupt.
>
> Also if the platform supports Power Aware Interrupt Routing,
> then this enables the interrupt to be routed to an idle cpu or a
> busy cpu depending on the perf/power bias tunable.
>
> We are now grouping all the cpu's in a cluster to one vector
> domain. So that will limit the total number of interrupt sources
> handled by Linux. Previously we support "cpu-count *
> available-vectors-per-cpu" interrupt sources but this will now
> reduce to "cpu-count/16 * available-vectors-per-cpu".

With this patch, several irq vectors can be wasted/hidden on some cpus.

For example: right after boot, one irq gets its vector allocated on all
16 cpus of the same cluster. If the user later changes that irq's
affinity to a single cpu, cfg->domain is left unchanged, but the ioapic
RTE or irte is reprogrammed to target only that one cpu, because the
new affinity mask is used to compute the new dest_id:

	*dest_id = apic->cpu_mask_to_apicid_and(mask, cfg->domain);

The vector stays reserved on the other 15 cpus of the cluster, so those
per-cpu vectors are wasted or hidden.

Yinghai