Re: [RFC v4 PATCH 3/5] mm/rmqueue_bulk: alloc without touching individual page structure

From: Mel Gorman
Date: Thu Oct 18 2018 - 07:21:01 EST


On Wed, Oct 17, 2018 at 10:23:27PM +0800, Aaron Lu wrote:
> > RT has had problems with cpu_relax in the past but more importantly, as
> > this delay for parallel compactions and allocations of contig ranges,
> > we could be stuck here for very long periods of time with interrupts
>
> The longest possible time is one CPU accessing pcp->batch number cold
> cachelines. Reason:
> When zone_wait_cluster_alloc() is called, we already held zone lock so
> no more allocations are possible. Waiting in_progress to become zero
> means waiting any CPU that increased in_progress to finish processing
> their allocated pages. Since they will at most allocate pcp->batch pages
> and worse case are all these page structres are cache cold, so the
> longest wait time is one CPU accessing pcp->batch number cold cache lines.
>
> I have no idea if this time is too long though.
>

But compact_zone calls zone_wait_and_disable_cluster_alloc so how is the
disabled time there bound by pcp->batch?

> > disabled. It gets even worse if it's from an interrupt context such as
> > jumbo frame allocation or a high-order slab allocation that is atomic.
>
> My understanding is atomic allocation won't trigger compaction, no?
>

No, they can't. I didn't check properly but be wary of any possibility
whereby interrupts can get delayed in zone_wait_cluster_alloc. I didn't
go back and check if it can -- partially because I'm more focused on the
lazy buddy aspect at the moment.

> > It may be necessary to consider instead minimising the number
> > of struct page update when merging to PCP and then either increasing the
> > size of the PCP or allowing it to exceed pcp->high for short periods of
> > time to batch the struct page updates.
>
> I don't quite follow this part. It doesn't seem possible we can exceed
> pcp->high in allocation path, or are you talking about free path?
>

I'm talking about the free path.

> And thanks a lot for the review!

My pleasure, hope it helps.

--
Mel Gorman
SUSE Labs