Re: [PATCH 4/5] arm64: mm: Support ASID isolation feature

From: Yunfeng Ye
Date: Tue Nov 29 2022 - 07:27:05 EST

On 2022/11/29 1:00, Catalin Marinas wrote:
> On Thu, Nov 10, 2022 at 03:07:53PM +0800, Yunfeng Ye wrote:
>> On 2022/11/9 20:43, Catalin Marinas wrote:
>>> On Mon, Oct 17, 2022 at 04:32:02PM +0800, Yunfeng Ye wrote:
>>>> After a rollover, the global generation will be flushed, which will
>>>> cause the process mm->context.id on all CPUs do not match the
>>>> generation. Thus, the process will compete for the global spinlock lock
>>>> to reallocate a new ASID and refresh the TLBs of all CPUs on context
>>>> switch. This will lead to the increase of scheduling delay and TLB miss.
>>>>
>>>> In some delay-sensitive scenarios, for example, part of CPUs are
>>>> isolated, only a limited number of processes are deployed to run on the
>>>> isolated CPUs. In this case, we do not want these key processes to be
>>>> affected by the rollover of ASID.
>>>
>>> Part of this commit log should also go in the cover letter and it would
>>> help to back this up by some numbers, e.g. what percentage improvement
>>> you get with this patchset by running hackbench on an isolated CPU.
>>>
>>> In theory it looks like CPU isolation would benefit from this patchset
>>> but we try not to touch this code often, so any modification should come
>>> with proper justification, backed by numbers.
>>>
>> Yes, CPU isolation will benefit from this patchset. We used the cyclictest
>> tool to measure the maximum scheduling and interrupt latencies and found
>> that the sched_switch path sometimes takes several microseconds. The
>> analysis shows that the delay is caused by the ASID refresh.
>
> Do you know whether it's predominantly the spinlock or the TLBI that's
> causing this (or just a combination of the two)?
>
I think the spinlock is the main factor, although I did not measure how
much time each of the two takes. On the other hand, the TLBI is currently
performed while the spinlock is held, so its cost also adds to the time
the lock is held.
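
For reference, this is roughly the slow path of check_and_switch_context()
(simplified from arch/arm64/mm/context.c; declarations and the lockless
fast path are omitted):

	raw_spin_lock_irqsave(&cpu_asid_lock, flags);

	asid = atomic64_read(&mm->context.id);
	if (!asid_gen_match(asid)) {
		/* Stale generation: reallocate, possibly rolling over */
		asid = new_context(mm);
		atomic64_set(&mm->context.id, asid);
	}

	cpu = smp_processor_id();
	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
		local_flush_tlb_all();	/* the TLBI runs under the lock */

	atomic64_set(this_cpu_ptr(&active_asids), asid);
	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);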

> I was talking to Will and concluded we should try to reuse the ASID
> pinning code that's already in that file rather than adding a new
> bitmap. At a high level, a thread migrating to an isolated CPU can have
At first, I wanted to reuse the pinned ASID bitmap too, which is the same
idea as yours. But there is a difference between the pinned bitmap and the
isolation bitmap: the pinned bitmap is not changed on generation rollover,
while the isolation bitmap needs to be flushed.
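
To illustrate the difference: on a rollover, flush_context() re-initialises
the allocation bitmap from the pinned bitmap, so pinned ASIDs survive,
whereas the isolation bitmap has to be cleared and repopulated. A sketch
(isolated_asid_map is an illustrative name for the bitmap this series adds):

	static void flush_context(void)
	{
		/* Pinned ASIDs survive: asid_map is seeded from them */
		bitmap_copy(asid_map, pinned_asid_map, NUM_USER_ASIDS);

		/*
		 * The isolation bitmap must be flushed instead, and is
		 * repopulated as isolated tasks are switched back in.
		 */
		bitmap_zero(isolated_asid_map, NUM_USER_ASIDS);

		/* ... reserved ASIDs and tlb_flush_pending handling ... */
	}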

The idea you mention below, broadcasting a TLBI for the pinned ASID when
the task dies, might make it possible to reuse the pinned bitmap. I have
considered this idea too, but I think it is not as good as the current
two-bitmap method:

1. It introduces some TLBI jitter and may increase contention on the
spinlock when updating the pinned bitmap; we do not want such jitter on
the isolated CPUs.

2. Another disadvantage is that if only one pinned bitmap is used and a
large number of processes live in the isolation domain without exiting,
the available ASIDs become insufficient. For example, if more than 65536
processes are running or sleeping on the isolated CPUs, how should that
be handled? (See the sketch of the existing pinning limit below.)
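
The existing pinning path already has to refuse new pins once the limit
is reached; roughly, from arm64_mm_context_get() in arch/arm64/mm/context.c
(simplified):

	raw_spin_lock_irqsave(&cpu_asid_lock, flags);

	asid = atomic64_read(&mm->context.id);

	if (refcount_inc_not_zero(&mm->context.pinned))
		goto out_unlock;

	if (nr_pinned_asids >= max_pinned_asids) {
		asid = 0;	/* no pinned ASIDs left: pinning fails */
		goto out_unlock;
	}

	/* ... allocate, set bit in pinned_asid_map, nr_pinned_asids++ ... */

If every task in the isolation domain had to hold a pinned ASID, this
limit would be hit as the number of long-lived tasks approaches the size
of the ASID space.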

> its ASID pinned. If context switching only happens between pinned ASIDs
> on an isolated CPU, we may be able to avoid the lock even if the
> generation rolled over on another CPU.
>
> I think the tricky problem is when a pinned ASID task eventually dies,
> possibly after migrating to another CPU. If we avoided the TLBI on
> generation roll-over for the isolated CPU, it will have stale entries.
> One option would be to broadcast a TLBI for the pinned ASID when the
> task dies, though this would introduce some jitter. An alternative may
> be to track whether a pinned ASID ever ran on a CPU and do a local TLBI
> for that ASID when a pinned thread is migrated.
>
> All these need a lot more thinking and (formal) modelling. I have a TLA+
> model but I haven't updated it to cover the pinned ASIDs. Or,
> alternatively, make the current code stand-alone and get it through CBMC
> (faking the spinlock as pthread mutexes and implementing some of the
> atomics in plain C with __CPROVER_atomic_begin/end).
>
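
For what it's worth, the CBMC setup suggested above might look roughly
like this (a hypothetical sketch, not part of this series: the spinlock
is faked with a pthread mutex and atomic64_read() is modelled with CBMC's
atomic sections):

	#include <pthread.h>
	#include <stdint.h>

	/* cpu_asid_lock faked as a pthread mutex for CBMC */
	static pthread_mutex_t cpu_asid_lock = PTHREAD_MUTEX_INITIALIZER;
	static uint64_t asid_generation;

	static uint64_t atomic64_read(uint64_t *v)
	{
		uint64_t val;

		__CPROVER_atomic_begin();
		val = *v;
		__CPROVER_atomic_end();
		return val;
	}

	static void *cpu_thread(void *arg)
	{
		pthread_mutex_lock(&cpu_asid_lock);
		/* ... model new_context()/flush_context() here ... */
		pthread_mutex_unlock(&cpu_asid_lock);
		return NULL;
	}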