Re: [PATCH v4 00/24] Introduce little endian bitops

From: Russell King
Date: Mon Jan 17 2011 - 04:34:42 EST


On Sun, Jan 16, 2011 at 06:50:39PM -0800, Linus Torvalds wrote:
> On Sun, Jan 16, 2011 at 6:37 PM, Akinobu Mita <akinobu.mita@xxxxxxxxx> wrote:
> >
> > Changing *_bit_le() to take "void *" instead of "unsigned long *"
> > makes this patch series acceptable?
>
> That would seem to at least make all the filesystem code happier, and
> they can continue to do just something like
>
> #define ext2_set_bit __set_bit_le
>
> (or whatever the exact sequence ends up being).
>
> > Or do we also need to change *_bit_le() to handle unaligned address
> > correctly?  (i.e. not only long aligned address but also byte aligned
> > address)
>
> No, I don't think that is required. We've never done it before, and
> we've never had the type-safety for the little-endian (aka "minix")
> bitops historically. So I'd just keep the status quo.

The ARM bitops (set_bit/clear_bit/change_bit) have always taken an
unsigned long * argument, and we have casts in our preprocessor macros
for them. Only a couple of the find_bit functions have taken a void
pointer.
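
For the non-ARM folk, the kind of thing I mean looks roughly like this
(a sketch of the cast-in-macro approach, not the exact header text; the
__set_bit_le/__clear_bit_le names follow the naming proposed in this
series):

#define ext2_set_bit(nr, addr) \
	__set_bit_le((nr), (unsigned long *)(addr))
#define ext2_clear_bit(nr, addr) \
	__clear_bit_le((nr), (unsigned long *)(addr))

The filesystem hands us whatever buffer pointer it has, and the cast is
where the unsigned long * prototype on the underlying bitop gets
satisfied.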

I really don't want to have to change the function prototypes on ARM,
and I really don't want to hide from non-fs users the fact that ARM
bitops require such pointers, beyond what's required for ext2/minix.
If we do hide it, at some point we'll have someone believing that ARM
is wrong to require stricter alignment.

ARM bitops now (in my tree) require strict alignment to unsigned long.
A 32-bit load exclusive instruction must never be misaligned, and
there's no way to safely fix that kind of thing up. (No, you can't
reliably use spinlocks and normal stores - what if another thread does
an aligned load exclusive/store exclusive, which won't trap?)
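
To illustrate, the word-exclusive sequence is roughly this (a sketch
assuming an ARMv6+ target, not the code in my tree):

static inline void sketch_set_bit(unsigned int nr, unsigned long *p)
{
	unsigned long mask = 1UL << (nr & 31);
	unsigned long tmp, res;

	p += nr >> 5;				/* word containing the bit */
	do {
		__asm__ __volatile__(
		"	ldrex	%0, [%2]\n"	/* exclusive load of the word */
		"	orr	%0, %0, %3\n"
		"	strex	%1, %0, [%2]\n"	/* fails if exclusivity was lost */
		: "=&r" (tmp), "=&r" (res)
		: "r" (p), "r" (mask)
		: "cc", "memory");
	} while (res);
}

The ldrex/strex pair only works on a naturally aligned word, which is
why the pointer alignment matters.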

The reason for this change is to avoid the current situation where
people build kernels which support multiple platforms - including SMP -
but, because the instructions used for the SMP-safe bitops (byte load
exclusive) are not supported on some of the selected processors, we
fall back to the SMP-unsafe versions. The word load exclusive
instructions, however, are supported, so using them avoids that
fallback.

I've also added tests in the assembly to ensure pointer alignment -
something I've done anyway, given that we're talking about filesystem
data corruption if these misbehave due to misaligned pointers.
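
In C terms the test amounts to something like this (the real check is
in the assembly; the helper name here is made up for illustration):

#include <linux/bug.h>

static inline void verify_bitop_alignment(const void *p)
{
	/* a misaligned pointer here would feed ldrex/strex an address
	   they must never see */
	BUG_ON((unsigned long)p & (sizeof(unsigned long) - 1));
}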

If the generic bitops are going to be changed to void pointers, maybe the
solution to this is to document that bitops require 'unsigned long *'
alignment on some platforms?