Re: [WIP 0/3] Memory model and atomic API in Rust

From: Boqun Feng
Date: Fri Mar 22 2024 - 22:57:42 EST


On Fri, Mar 22, 2024 at 10:33:13PM -0400, Kent Overstreet wrote:
> On Fri, Mar 22, 2024 at 07:26:28PM -0700, Boqun Feng wrote:
> > On Fri, Mar 22, 2024 at 10:07:31PM -0400, Kent Overstreet wrote:
> > [...]
> > > > Boqun already mentioned the "mixing access sizes", which is actually
> > > > quite fundamental in the kernel, where we play lots of games with that
> > > > (typically around locking, where you find patterns like unlock writing
> > > > a zero to a single byte, even though the whole lock data structure is
> > > > a word). And sometimes the access size games are very explicit (eg
> > > > lib/lockref.c).
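
(To make this concrete: the pattern Linus describes, expressed with Rust
atomics, would look roughly like the sketch below. This is a made-up
minimal example rather than the actual qspinlock code, it assumes a
little-endian layout, and it is exactly the kind of mixed-size access
discussed further down.)

use std::mem::transmute;
use std::sync::atomic::{AtomicU32, AtomicU8, Ordering};

// Hypothetical word-sized lock: low byte = "locked", upper bytes would
// hold pending/queue state in a real qspinlock-like design.
struct Lock {
    val: AtomicU32,
}

impl Lock {
    fn unlock(&self) {
        // Unlock by writing a zero to a single byte of the word-sized
        // lock (little-endian assumed). The byte-sized store aliases
        // the word-sized atomic -- the mixed-size access at issue.
        let locked_byte: &AtomicU8 = unsafe { transmute(&self.val) };
        locked_byte.store(0, Ordering::Release);
    }
}

fn main() {
    let lock = Lock { val: AtomicU32::new(1) }; // low byte set = locked
    lock.unlock();
    assert_eq!(lock.val.load(Ordering::Relaxed), 0);
}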
> > >
> > > I don't think mixing access sizes should be a real barrier. On the read
> >
> > Well, it actually is, since mixing access sizes is, guess what,
> > undefined behavior:
> >
> > (example in https://doc.rust-lang.org/std/sync/atomic/#memory-model-for-atomic-accesses)
> >
> > thread::scope(|s| {
> >     // This is UB: using different-sized atomic accesses to the same data
> >     s.spawn(|| atomic.store(1, Ordering::Relaxed));
> >     s.spawn(|| unsafe {
> >         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
> >         differently_sized.store(2, Ordering::Relaxed);
> >     });
> > });
> >
> > Of course, you can say "I will just ignore the UB", but if you have to
> > ignore "compiler rules" to make your code work, why bother using compiler
> > builtins in the first place? Being UB means they are NOT guaranteed to
> > work.
>
> That's not what I'm proposing - you'd need additional compiler support.

Ah, OK.

> but the new intrinsic would be no different, semantics-wise, for the
> compiler to model than a "lock orb".

Be ready to be disappointed:

https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/is.20atomic.20aliasing.20allowed.3F/near/402078545
https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/is.20atomic.20aliasing.20allowed.3F/near/402082631

;-)
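
Just to make sure we are talking about the same thing, I read
"additional compiler support" as something along the lines of the
sketch below. The intrinsic name and signature are made up (nothing
like this exists today), and the body is only a stand-in so that the
sketch compiles:

use std::sync::atomic::{AtomicU32, AtomicU8, Ordering};

// Hypothetical intrinsic (made-up name and signature): a byte-sized
// store into one byte of a word-sized atomic, whose interaction with
// racing word-sized accesses would be defined by the language instead
// of being UB.
fn atomic_store_byte_of_u32(word: &AtomicU32, byte: usize, val: u8, order: Ordering) {
    assert!(byte < 4);
    // Stand-in body: today this can only be written by aliasing the
    // atomic at a different size, which is exactly the UB above.
    let p = word as *const AtomicU32 as *const AtomicU8;
    unsafe { (*p.add(byte)).store(val, order) };
}

fn main() {
    let lock = AtomicU32::new(1); // low byte set = "locked"
    // Byte-granular unlock (little-endian assumed).
    atomic_store_byte_of_u32(&lock, 0, 0, Ordering::Release);
    assert_eq!(lock.load(Ordering::Relaxed), 0);
}

The backend part is the easy bit (it is just a byte store, or a
"lock orb"-style RMW); the hard part, as the discussions above show, is
giving such an operation a place in the language-level memory model.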

In fact, if you get a chance to read the previous discussion links I
shared, you will find I was just like you in the beginning: hoping we
could extend the model to support more kernel code properly. But my
overall feeling is that doing so is either very challenging or lacks
the motivation it would need.

Regards,
Boqun