Re: [RFC PATCH v9 for 4.15 01/14] Restartable sequences system call

From: Mathieu Desnoyers
Date: Wed Oct 18 2017 - 14:09:40 EST


----- On Oct 18, 2017, at 12:41 PM, Ben Maurer bmaurer@xxxxxx wrote:

>> The layout of struct rseq_cs is as follows:
>
>> start_ip
>> Instruction pointer address of the first instruction of the
>> sequence of consecutive assembly instructions.
>
>> post_commit_ip
>> Instruction pointer address after the last instruction of
>> the sequence of consecutive assembly instructions.
>
>> abort_ip
>> Instruction pointer address where to move the execution
>> flow in case of abort of the sequence of consecutive assembly
>> instructions.
>
> Really minor performance thought here.
>
> 1) In the kernel at context switch time you'd need code like:
>
> if (ip >= start_ip && ip <= post_commit_ip)
>
> This branch would be hard to predict because most instruction pointers would be
> either before or after. If post_commit_ip were relative to start_ip you could
> do this:
>
> if (ip - start_ip <= post_commit_offset)
>
> which is a single branch that would be more predictable.

The actual context switch code only sets the "t->rseq_preempt"
flags and TIF_NOTIFY_RESUME. The user-accesses happen when
returning to user-space with TIF_NOTIFY_RESUME set.

Indeed, we can expect most context switches out of a registered
rseq thread to trigger one __rseq_handle_notify_resume invocation
on return to user-space for that thread.

As you point out, the "common" case is *not* nested over a
critical section, which means t->rseq->rseq_cs is NULL, and
therefore post_commit_ip and start_ip are both NULL in
rseq_ip_fixup when compared against the current ip.

The check is currently implemented like this:

	/* Handle potentially not being within a critical section. */
	if ((void __user *)instruction_pointer(regs) >= post_commit_ip ||
	    (void __user *)instruction_pointer(regs) < start_ip)
		return 1;

So if non-nested over c.s., the first branch is ip >= NULL, which turns
out to be true, and we therefore return 1 from rseq_ip_fixup.

I suspect that we'd need to cast those pointers to (unsigned long) to
be strictly C standard compliant.

If we instead use "post_commit_offset" relative to start_ip, a non-nested
common case would have start_ip = NULL, post_commit_offset = 0. The check
you propose for not being nested over a c.s. would look like:

	if (!((long)ip - (long)start_ip <= (long)post_commit_offset))
		return 1;

This introduces an issue here: if "ip" is lower than "start_ip", we
can incorrectly think we are in a critical section, when we are in
fact not.

With the previous approach proposed by Paul Turner, this was not an
issue, because he was setting the equivalent of the rseq_cs pointer
back to NULL at the end of the assembly fast-path. However, I have
a fast-path optimization that only sets the rseq_cs pointer at the
beginning of the fast-path, without clearing it afterward. It's up
to the following critical section to overwrite the rseq_cs pointer,
or to the kernel to set it back to NULL if it finds out that it is
preempting/delivering a signal over an instruction pointer outside
of the current rseq_cs start_ip/post_commit_ip range (lazy clear).

Moreover, this modification would add a subtraction in the common case
(ip - start_ip), and would make the ABI slightly uglier.

>
> 2) In a shared library a rseq_cs structure would have to be relocated at runtime
> because at compilation time the final address of the library wouldn't be known.
> I'm not sure if this is important enough to address, but it could be solved by
> making the pointers relative to the address of rseq_cs. But this would make for
> an uglier API.

If I understand correctly, you are proposing to speed up .so load
time by removing the relocations of the pointers within rseq_cs,
making them relative to the rseq_cs address instead.

So the downside here is extra arithmetic operations on resume to
userspace (__rseq_handle_notify_resume): the kernel would have
to calculate the offset of start_ip and post_commit_ip from the
address of rseq_cs. Sure, we're only talking about two additions
there, but I don't think marginally speeding up library load
justifies extra work in that kernel path, nor the uglier ABI.

Thoughts?

Thanks,

Mathieu

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com