Re: [PATCH 2/3] mm/memory.c: Update local TLB if PTE entry exists

From: maobibo
Date: Sat May 16 2020 - 05:43:36 EST

On 05/16/2020 04:40 AM, Andrew Morton wrote:
> On Fri, 15 May 2020 12:10:08 +0800 Bibo Mao <maobibo@xxxxxxxxxxx> wrote:
>
>> If there are two threads hitting page fault at the same page,
>> one thread updates PTE entry and local TLB, the other can
>> update local tlb also, rather than give up and do page fault
>> again.
>>
>> ...
>>
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -1770,8 +1770,8 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
>> }
>> entry = pte_mkyoung(*pte);
>> entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>> - if (ptep_set_access_flags(vma, addr, pte, entry, 1))
>> - update_mmu_cache(vma, addr, pte);
>> + ptep_set_access_flags(vma, addr, pte, entry, 1);
>> + update_mmu_cache(vma, addr, pte);
>
> Presumably these changes mean that other architectures will run
> update_mmu_cache() more frequently than they used to. How much more
> frequently, and what will be the impact of this change? (Please fully
> explain all this in the changelog).
>
It is only useful on those architectures where software can update the TLB. If update_mmu_cache() is used for other purposes on an architecture, this change will have some impact there; I will explain it in the changelog.
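
For reference, roughly speaking (simplified from memory, not verbatim from any particular tree), update_mmu_cache() is an empty stub on architectures with a hardware page table walker, so the unconditional call costs nothing there, while on software-managed TLB architectures such as MIPS it reloads the new PTE into the local TLB:

/* Hardware-walked page tables (e.g. x86): nothing to do, the call is free. */
#define update_mmu_cache(vma, address, ptep) do { } while (0)

/* Software-managed TLB (e.g. MIPS): load the new PTE into the local TLB
 * so the faulting thread does not have to take the same fault again. */
static inline void update_mmu_cache(struct vm_area_struct *vma,
				    unsigned long address, pte_t *ptep)
{
	pte_t pte = *ptep;

	__update_tlb(vma, address, pte);
}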

>> }
>> goto out_unlock;
>> }
>>
>> ...
>>
>> @@ -2463,7 +2462,8 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
>> vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
>> locked = true;
>> if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
>> - /* The PTE changed under us. Retry page fault. */
>> + /* The PTE changed under us, update local tlb */
>> + pdate_mmu_cache(vma, addr, vmf->pte);
>
> Missing a 'u' there. Which tells me this patch isn't the one which you
> tested!
>
Sorry about that, I will refresh the patch and fix this obvious typo.
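
For the record, the corrected line in that hunk will read:

	update_mmu_cache(vma, addr, vmf->pte);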

regards
bibo, mao
>> ret = false;
>> goto pte_unlock;
>> }
>>
>> ...
>>