Re: [GIT PULL] percpu fixes for 2.6.32-rc6

From: Linus Torvalds
Date: Tue Nov 10 2009 - 12:11:49 EST




On Tue, 10 Nov 2009, Tejun Heo wrote:
>
> Please pull from the following percpu fix branch.

No way in hell.

> It fixes a possible deadlock caused by lock ordering inversion through
> irq.

.. and it does so by introducing a new bug. No thank you.

> +
> +	/*
> +	 * pcpu_mem_free() might end up calling vfree() which uses
> +	 * IRQ-unsafe lock and thus can't be called with pcpu_lock
> +	 * held. Release and reacquire pcpu_lock if old map needs to
> +	 * be freed.
> +	 */
> +	if (old) {
> +		spin_unlock_irqrestore(&pcpu_lock, *flags);
> +		pcpu_mem_free(old, size);
> +		spin_lock_irqsave(&pcpu_lock, *flags);
> +	}

Routines that drop and then re-take the lock should be banned, as it's
almost always a bug waiting to happen. As it is this time:

> 	return 0;

Now the caller will happily continue to traverse a list that may no longer
be valid, because you dropped the lock.
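
To spell it out, the caller in pcpu_alloc() does something like this (from
memory, simplified):

	spin_lock_irqsave(&pcpu_lock, flags);
restart:
	list_for_each_entry(chunk, &pcpu_slot[slot], list) {
		switch (pcpu_extend_area_map(chunk, &flags)) {
		case 0:		/* lock was never dropped, keep walking */
			break;
		case 1:		/* lock was dropped, list walk is stale */
			goto restart;
		default:	/* allocation failed */
			goto fail_unlock;
		}
		...
	}

so returning 0 after silently dropping pcpu_lock means the caller keeps
walking a list whose entries may have been freed or moved in the meantime.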

Really. This thing is total sh*t. It was misdesigned to start with, and
the calling convention is wrong. That 'pcpu_extend_area_map()' function
should be split up into two functions: 'pcpu_needs_to_extend()' that never
drops the lock, and 'pcpu_extend_area()' that _always_ drops the lock
(and then returns an error if it can't allocate the memory).

Not that shit-for-brains thing that may or may not drop the lock, and then
returns an incorrect error code depending on whether it did.
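
That is, the caller side ends up looking something like this (a sketch,
names made up, untested - just to show the shape of the calling convention
I mean):

	spin_lock_irqsave(&pcpu_lock, flags);
restart:
	list_for_each_entry(chunk, &pcpu_slot[slot], list) {
		if (pcpu_needs_to_extend(chunk)) {
			/*
			 * Unconditionally drops pcpu_lock to allocate,
			 * re-takes it before returning.
			 */
			if (pcpu_extend_area(chunk, &flags) < 0)
				goto fail_unlock;
			/* the list may have changed under us - rescan */
			goto restart;
		}

		off = pcpu_alloc_area(chunk, size, align);
		if (off >= 0)
			goto area_found;
	}

No ambiguity: if the lock was ever dropped, the caller _knows_ it was
dropped, every single time.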

In other words: fix the sh*t, don't add even more to it. That 'return 0'
was and is wrong. It should have been a 'return 1'. And thank the Gods
that I looked at it.

Sure, you can fix the bug by just returning 1. But you can't fix the total
crap of a calling convention that way. Fix it properly as outlined above,
and remember: functions that drop locks that were held when called are
EVIL and almost always the source of really subtle races.

As it was in this case.

Linus