Re: [PATCH] Fixed division by zero bug in kernel/padata.c

From: Dan Kruchinin
Date: Fri Jul 02 2010 - 09:24:30 EST


No problem. Here is the fixed patch:
--
When the boot CPU (typically CPU #0) is excluded from the padata cpumask and
the user enters the halt command from the console, the kernel faults on a
division by zero. This occurs because during halt the kernel shuts down the
non-boot CPUs one by one. After it shuts down the last CPU that is set in the
padata cpumask, the only working CPU in the system is the boot CPU (#0), and
it is the only CPU set in cpu_active_mask. Hence, when padata_cpu_callback
calls __padata_remove_cpu (and thereby padata_alloc_pd), the padata cpumask
and cpu_active_mask do not intersect, and the following code in
padata_alloc_pd triggers a divide-by-zero exception:
 cpumask_and(pd->cpumask, cpumask, cpu_active_mask); // pd->cpumask will be empty
 ...
 num_cpus = cpumask_weight(pd->cpumask); // num_cpus = 0
 pd->max_seq_nr = (MAX_SEQ_NR / num_cpus) * num_cpus - 1; // division by zero!


Signed-off-by: Dan Kruchinin <dkruchinin@xxxxxxx>
---
kernel/padata.c | 5 +++++
1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/kernel/padata.c b/kernel/padata.c
index fdd8ae6..dcddac0 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -435,6 +435,9 @@ static struct parallel_data *padata_alloc_pd(struct padata_instance *pinst,
}

num_cpus = cpumask_weight(pd->cpumask);
+ if (!num_cpus)
+ goto err_free_cpumask;
+
pd->max_seq_nr = (MAX_SEQ_NR / num_cpus) * num_cpus - 1;

setup_timer(&pd->timer, padata_reorder_timer, (unsigned long)pd);
@@ -446,6 +449,8 @@ static struct parallel_data *padata_alloc_pd(struct padata_instance *pinst,

return pd;

+err_free_cpumask:
+ free_cpumask_var(pd->cpumask);
err_free_queue:
free_percpu(pd->queue);
err_free_pd:
--
1.7.1


On Fri, Jul 2, 2010 at 4:56 PM, Steffen Klassert
<steffen.klassert@xxxxxxxxxxx> wrote:
> On Fri, Jul 02, 2010 at 03:59:54PM +0400, Dan Kruchinin wrote:
>> When boot CPU(typically CPU #0) is excluded from padata cpumask and
>> user enters halt command from console, kernel faults on division by zero;
>> This occurs because during the halt kernel shuts down each non-boot CPU one
>> by one and after it shuts down the last CPU that is set in the padata cpumask,
>> the only working CPU in the system is a boot CPU(#0) and it's the only CPU that
>> is set in the cpu_active_mask. Hence when padata_cpu_callback calls
>> __padata_remove_cpu(which calls padata_alloc_pd) it appears that
>> padata cpumask and cpu_active_mask aren't intersect. Hence the following
>> code in padata_alloc_pd causes a DZ error exception:
>>  cpumask_and(pd->cpumask, cpumask, cpu_active_mask); // pd->cpumask will be empty
>>  ...
>>  num_cpus = cpumask_weight(pd->cpumask); // num_cpus = 0
>>  pd->max_seq_nr = (MAX_SEQ_NR / num_cpus) * num_cpus - 1; // DZ!
>>
>
> Good catch!
>
>>
>> Signed-off-by: Dan Kruchinin <dkruchinin@xxxxxxx>
>> ---
>> kernel/padata.c |    2 +-
>> 1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/kernel/padata.c b/kernel/padata.c
>> index fdd8ae6..dbe6d26 100644
>> --- a/kernel/padata.c
>> +++ b/kernel/padata.c
>> @@ -434,7 +434,7 @@ static struct parallel_data *padata_alloc_pd(struct padata_instance *pinst,
>>               atomic_set(&queue->num_obj, 0);
>>       }
>>
>> -     num_cpus = cpumask_weight(pd->cpumask);
>> +     num_cpus = cpumask_weight(pd->cpumask) + 1;
>>       pd->max_seq_nr = (MAX_SEQ_NR / num_cpus) * num_cpus - 1;
>>
>
> num_cpus should stay the number of cpus in this cpumask, this is required
> to handle a smooth overrun of the sequence numbers.
> I think it's better to return with an error and to stop the instance
> if somebody takes away the last cpu in our cpumask. We can't run with an
> empty cpumask anyway.
>
> Let us look again at this on monday.
>
> Thanks again for catching this,
>
> Steffen
>



--
W.B.R.
Dan Kruchinin
--