Re: [patch][v2] swap: virtual swap readahead

From: Minchan Kim
Date: Fri Jun 05 2009 - 07:03:28 EST


Hi, Hannes.

On Wed, Jun 3, 2009 at 10:27 PM, Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> On Wed, Jun 03, 2009 at 01:34:57AM +0200, Andi Kleen wrote:
>> On Wed, Jun 03, 2009 at 12:37:39AM +0200, Johannes Weiner wrote:
>> > + *
>> > + * Caller must hold down_read on the vma->vm_mm if vma is not NULL.
>> > + */
>> > +struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
>> > +			struct vm_area_struct *vma, unsigned long addr)
>> > +{
>> > +	unsigned long start, pos, end;
>> > +	unsigned long pmin, pmax;
>> > +	int cluster, window;
>> > +
>> > +	if (!vma || !vma->vm_mm)	/* XXX: shmem case */
>> > +		return swapin_readahead_phys(entry, gfp_mask, vma, addr);
>> > +
>> > +	cluster = 1 << page_cluster;
>> > +	window = cluster << PAGE_SHIFT;
>> > +
>> > +	/* Physical range to read from */
>> > +	pmin = swp_offset(entry) & ~(cluster - 1);
>>
>> Is cluster really properly sign-extended on 64bit? Looks a little
>> dubious. Using long from the start would be safer.
>
> Fixed.
>
>> > +	/* Virtual range to read from */
>> > +	start = addr & ~(window - 1);
>>
>> Same.
>
> Fixed.
>
>> > +		pgd = pgd_offset(vma->vm_mm, pos);
>> > +		if (!pgd_present(*pgd))
>> > +			continue;
>> > +		pud = pud_offset(pgd, pos);
>> > +		if (!pud_present(*pud))
>> > +			continue;
>> > +		pmd = pmd_offset(pud, pos);
>> > +		if (!pmd_present(*pmd))
>> > +			continue;
>> > +		pte = pte_offset_map_lock(vma->vm_mm, pmd, pos, &ptl);
>>
>> You could be more efficient here by using the standard mm/* nested loop
>> pattern that avoids relookup of everything in each iteration. I suppose
>> it would mainly make a difference with 32bit highpte where mapping a pte
>> can be somewhat costly. And you would take less locks this way.
>
> I ran into weird problems here.  The above version is actually faster
> in the benchmarks than writing a nested-level walker or using
> walk_page_range().  Still digging, but it can take some time.  Busy
> week :(
>
>> > +		page = read_swap_cache_async(swp, gfp_mask, vma, pos);
>> > +		if (!page)
>> > +			continue;
>>
>> That's out of memory; break would be better here because prefetching
>> while OOM is usually harmful.
>
> It can also happen due to a race with something releasing the swap
> slot (i.e. swap_duplicate() fails).  But the old version did a break
> too, and this patch shouldn't behave differently.  Fixed.

I think it would be better to read the fault page before the readahead pages.
That's because:
1) Under memory pressure, the readahead pages could prevent the fault
page itself from being read.
2) If we can't get the fault page, we don't need the extra (readahead)
pages at all; reading them would just waste memory and I/O bandwidth,
which is what you want to avoid.
3) If we read the fault page first and hit OOM, we can also stop the
readahead right there.

--
Kind regards,
Minchan Kim