Re: [PATCH 0/3] OOM detection rework v4

From: Vlastimil Babka
Date: Wed Jan 06 2016 - 07:44:11 EST


On 12/28/2015 03:13 PM, Tetsuo Handa wrote:
> Tetsuo Handa wrote:
>> Tetsuo Handa wrote:
>> > I got OOM killers while running heavy disk I/O (extracting kernel source,
>> > running lxr's genxref command). (Environment: 4 CPUs / 2048MB RAM / no swap / XFS)
>> > Do you think these OOM killers are reasonable? Is the detection too weak
>> > against fragmentation?
>>
>> Since I cannot reproduce the workload that caused December 24's natural OOM
>> killers, I used the following stressor to generate a similar situation.
>>
>
> I have come to suspect that I am observing a different problem, one that is
> currently hidden behind the "too small to fail" memory-allocation rule. That
> is, tasks requesting order > 0 pages continuously lose the competition when
> tasks requesting order-0 pages dominate, because reclaimed pages are stolen
> by the order-0 requests before they can be merged into order > 0 pages (or
> perhaps order > 0 pages are immediately split back into order-0 pages to
> satisfy the order-0 requests).

Hm, I would expect that as long as some reserves are left that your reproducer
cannot grab, there are some free pages left and the allocator should thus
preserve the order-2 pages that form as buddies merge, since order-0
allocations will take existing order-0 pages before splitting higher orders.
Compaction should also be able to assemble order-2 pages without racing
order-0 allocators, thanks to per-cpu caching (but I'd have to check).
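
To illustrate what I mean, here is a condensed sketch of the buddy
allocator's smallest-order-first search, modeled loosely on
__rmqueue_smallest() in mm/page_alloc.c. The helpers
remove_first_free_page() and expand_remainder() are hypothetical
stand-ins for the real free-list manipulation and expand():

/*
 * Condensed sketch of the buddy allocator's smallest-order-first
 * search, modeled loosely on __rmqueue_smallest() in mm/page_alloc.c.
 * remove_first_free_page() and expand_remainder() are hypothetical
 * stand-ins for the real free-list handling and expand().
 */
static struct page *rmqueue_smallest_sketch(struct zone *zone,
                                            unsigned int order)
{
        unsigned int current_order;

        /*
         * Start at the requested order, so an order-0 request consumes
         * existing order-0 pages first and only splits a higher-order
         * page when nothing smaller is free.
         */
        for (current_order = order; current_order < MAX_ORDER;
             current_order++) {
                struct free_area *area = &zone->free_area[current_order];
                struct page *page = remove_first_free_page(area);

                if (!page)
                        continue;
                /* Return the unused remainder to the lower free lists. */
                expand_remainder(zone, page, order, current_order);
                return page;
        }
        return NULL;    /* nothing large enough is free */
}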

So I think the problem is not the availability of higher-order pages itself,
but that an order-2 allocation needs 4 pages' worth of free memory and thus
has to pass a somewhat higher watermark, putting it at a disadvantage relative
to order-0 allocations. I would therefore expect the order-2 pages to exist,
but to be unavailable for allocation due to the watermark check.
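
Roughly, the watermark check treats a high-order request like this (a
simplified sketch in the spirit of __zone_watermark_ok() in
mm/page_alloc.c; the exact arithmetic differs across kernel versions
and this ignores lowmem reserves and alloc flags):

/*
 * Simplified sketch in the spirit of __zone_watermark_ok() in
 * mm/page_alloc.c; the exact arithmetic differs across kernel
 * versions and this ignores lowmem reserves and alloc flags.
 */
static bool watermark_ok_sketch(struct zone *z, unsigned int order,
                                unsigned long mark)
{
        long free_pages = zone_page_state(z, NR_FREE_PAGES);
        long min = mark;
        unsigned int o;

        if (free_pages <= min)
                return false;

        /*
         * Pages on free lists below the requested order cannot satisfy
         * the request, so discount them.  An order-2 request can
         * therefore fail here even when plenty of order-0/order-1
         * pages are "free", which is the disadvantage described above.
         */
        for (o = 0; o < order; o++) {
                free_pages -= z->free_area[o].nr_free << o;
                min >>= 1;      /* demand fewer pages at higher orders */
                if (free_pages <= min)
                        return false;
        }
        return true;
}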

> Currently, order <= PAGE_ALLOC_COSTLY_ORDER allocations implicitly retry
> forever unless the task is chosen by the OOM killer. Therefore, even if
> tasks requesting order-2 pages lose the competition to tasks requesting
> order-0 pages, the order-2 allocation request is implicitly retried and
> the OOM killer is not invoked (though there is the problem that tasks
> requesting order > 0 allocations will stall as long as tasks requesting
> order-0 pages dominate).
>
> But this patchset introduces a limit of 16 retries. Thus, if tasks
> requesting order-2 pages lose the competition 16 times in a row to tasks
> requesting order-0 pages, the order-2 request invokes the OOM killer.
> To avoid the OOM killer, we need to make sure that pages reclaimed for
> order > 0 allocations are not stolen by tasks requesting order-0
> allocations.
>
> Is my reasoning plausible?
>
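For reference, the bound in question looks roughly like this (a
condensed sketch of the no_progress_loops accounting around
should_reclaim_retry() in the rework; the real check also weighs the
remaining reclaimable pages against the watermarks rather than
counting loops alone):

/*
 * Condensed sketch of the retry bound introduced by the rework,
 * modeled loosely on should_reclaim_retry(); the real check also
 * weighs the remaining reclaimable pages against the watermarks
 * rather than counting loops alone.
 */
#define MAX_RECLAIM_RETRIES 16

static bool should_reclaim_retry_sketch(unsigned int order,
                                        int no_progress_loops)
{
        /* Costly orders are not retried without __GFP_REPEAT. */
        if (order > PAGE_ALLOC_COSTLY_ORDER)
                return false;

        /*
         * Give up after 16 no-progress rounds; this is the point at
         * which a starved order-2 request would now fall back to the
         * OOM killer instead of looping forever.
         */
        if (no_progress_loops > MAX_RECLAIM_RETRIES)
                return false;

        return true;
}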
