[tip:irq/urgent] irqchip: armada: Sanitize set_irq_affinity()

From: tip-bot for Thomas Gleixner
Date: Mon Apr 28 2014 - 15:35:27 EST


Commit-ID: 8cc3cfc5ccf1680b7c88f874912b6bec2797b76b
Gitweb: http://git.kernel.org/tip/8cc3cfc5ccf1680b7c88f874912b6bec2797b76b
Author: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
AuthorDate: Tue, 4 Mar 2014 20:43:41 +0000
Committer: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
CommitDate: Mon, 28 Apr 2014 21:27:15 +0200

irqchip: armada: Sanitize set_irq_affinity()

The set_irq_affinity() function has two issues:

1) It has no protection against selecting an offline cpu from the
given mask.

2) It pointlessly restricts the affinity mask to contain a single
cpu. This collides with the irq migration code of arm:

   irq affinity is set to core 3
   core 3 goes offline

   The migration code sets the mask to cpu_online_mask and calls the
   irq_set_affinity() callback of the irq_chip, which fails because
   bits 0, 1 and 2 are set.
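For reference, the relevant part of the arm migration path
(migrate_one_irq() in arch/arm/kernel/irq.c; a simplified sketch of
the 3.x-era code, not the verbatim source) looks roughly like this:

	/* Sketch: how the migration code ends up handing
	 * cpu_online_mask to the irq_chip callback. */
	static bool migrate_one_irq(struct irq_desc *desc)
	{
		struct irq_data *d = irq_desc_get_irq_data(desc);
		const struct cpumask *affinity = d->affinity;
		struct irq_chip *c = irq_data_get_irq_chip(d);

		/* No online cpu left in the affinity mask? Fall back
		 * to all online cpus - that is the mask the callback
		 * then receives. */
		if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids)
			affinity = cpu_online_mask;

		/* The old armada callback returned -EINVAL here,
		 * because cpu_online_mask has more than one bit set. */
		return c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK;
	}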

So instead of doing silly for_each_cpu() loops, just pick any bit of
the mask which intersects with the online mask.
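In code that boils down to the two lines the diff below adds:

	cpu = cpumask_any_and(mask_val, cpu_online_mask);
	mask = 1UL << cpu_logical_map(cpu);

cpumask_any_and() returns an arbitrary cpu number which is set in both
masks, so both the for_each_cpu() loop over mask_val and the
separately computed online_mask go away.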

Get rid of fiddling with the default_irq_affinity as well.

[ Gregory: Fixed the access to the routing register ]

Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Acked-by: Gregory CLEMENT <gregory.clement@xxxxxxxxxxxxxxxxxx>
Tested-by: Gregory CLEMENT <gregory.clement@xxxxxxxxxxxxxxxxxx>
Cc: Jason Cooper <jason@xxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Link: http://lkml.kernel.org/r/20140304203101.088889302@xxxxxxxxxxxxx
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>

---
drivers/irqchip/irq-armada-370-xp.c | 37 ++++++-------------------------------
1 file changed, 6 insertions(+), 31 deletions(-)

diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
index 41be897..304a20d 100644
--- a/drivers/irqchip/irq-armada-370-xp.c
+++ b/drivers/irqchip/irq-armada-370-xp.c
@@ -41,6 +41,7 @@
#define ARMADA_370_XP_INT_SET_ENABLE_OFFS (0x30)
#define ARMADA_370_XP_INT_CLEAR_ENABLE_OFFS (0x34)
#define ARMADA_370_XP_INT_SOURCE_CTL(irq) (0x100 + irq*4)
+#define ARMADA_370_XP_INT_SOURCE_CPU_MASK 0xF

#define ARMADA_370_XP_CPU_INTACK_OFFS (0x44)
#define ARMADA_375_PPI_CAUSE (0x10)
@@ -244,35 +245,18 @@ static DEFINE_RAW_SPINLOCK(irq_controller_lock);
static int armada_xp_set_affinity(struct irq_data *d,
const struct cpumask *mask_val, bool force)
{
- unsigned long reg;
- unsigned long new_mask = 0;
- unsigned long online_mask = 0;
- unsigned long count = 0;
irq_hw_number_t hwirq = irqd_to_hwirq(d);
+ unsigned long reg, mask;
int cpu;

- for_each_cpu(cpu, mask_val) {
- new_mask |= 1 << cpu_logical_map(cpu);
- count++;
- }
-
- /*
- * Forbid mutlicore interrupt affinity
- * This is required since the MPIC HW doesn't limit
- * several CPUs from acknowledging the same interrupt.
- */
- if (count > 1)
- return -EINVAL;
-
- for_each_cpu(cpu, cpu_online_mask)
- online_mask |= 1 << cpu_logical_map(cpu);
+ /* Select a single core from the affinity mask which is online */
+ cpu = cpumask_any_and(mask_val, cpu_online_mask);
+ mask = 1UL << cpu_logical_map(cpu);

raw_spin_lock(&irq_controller_lock);
-
reg = readl(main_int_base + ARMADA_370_XP_INT_SOURCE_CTL(hwirq));
- reg = (reg & (~online_mask)) | new_mask;
+ reg = (reg & (~ARMADA_370_XP_INT_SOURCE_CPU_MASK)) | mask;
writel(reg, main_int_base + ARMADA_370_XP_INT_SOURCE_CTL(hwirq));
-
raw_spin_unlock(&irq_controller_lock);

return 0;
@@ -494,15 +478,6 @@ static int __init armada_370_xp_mpic_of_init(struct device_node *node,

#ifdef CONFIG_SMP
armada_xp_mpic_smp_cpu_init();
-
- /*
- * Set the default affinity from all CPUs to the boot cpu.
- * This is required since the MPIC doesn't limit several CPUs
- * from acknowledging the same interrupt.
- */
- cpumask_clear(irq_default_affinity);
- cpumask_set_cpu(smp_processor_id(), irq_default_affinity);
-
#endif

armada_370_xp_msi_init(node, main_int_res.start);