Re: [PATCH v2 06/13] fork: Add generic vmalloced stack support

From: Andy Lutomirski
Date: Tue Jun 21 2016 - 13:01:32 EST


On Tue, Jun 21, 2016 at 1:46 AM, Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> On Mon 20-06-16 09:13:55, Andy Lutomirski wrote:
>> On Mon, Jun 20, 2016 at 6:36 AM, Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>> > On Fri 17-06-16 13:00:42, Andy Lutomirski wrote:
>> >> If CONFIG_VMAP_STACK is selected, kernel stacks are allocated with
>> >> vmalloc_node.
>> >
>> > I like this! It also reduces demand for higher-order (order-2) pages
>> > considerably, which is a great plus on its own. I would be a little
>> > bit worried about the performance, because vmalloc wasn't the fastest
>> > path, AFAIR. Have you tried to measure that?
>>
>> It seems to add about 1.5µs to pthread_create+join on my laptop. (On
>> an unmodified, stripped-down kernel, it took about 7µs before. On a
>> Fedora system, the baseline is much worse.) I think that most of the
>> overhead is because vmalloc allocates one page at a time, which means
>> that it won't use a higher order page even if one is sitting on a
>> freelist.
>
> I guess a less artificial test case which would generate a lot of
> tasks and some memory pressure would be more representative (e.g.
> kernbench). The thing is that even order-2 pages might get quite
> expensive when the memory is fragmented.
>
>> I can imagine better integration with the page allocator in which
>> higher order pages are used if readily available. Similarly, vfree
>> could free pages that happen to be aligned and consecutive as a unit
>> to avoid the overhead of merging them back together one at a time.
>>
>> But I'm not planning on doing any of this myself any time soon. I
>> just want to get the code working and merged.
>
> I agree, there is room for improvement, but not necessarily as part of
> this series.
>

Agreed. My goal is to get this good enough for upstream, and we can
make it even better down the road.
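
For reference, the core of the allocation is small. A minimal sketch,
assuming the __vmalloc_node_range() signature from current kernels (the
helper name alloc_thread_stack_node() is illustrative, not the exact
patch code):

static void *alloc_thread_stack_node(int node)
{
	/*
	 * Ask vmalloc for THREAD_SIZE bytes, aligned to THREAD_SIZE,
	 * on the requested NUMA node.  Pages come from the usual
	 * thread-info GFP mask and are mapped with kernel permissions.
	 */
	return __vmalloc_node_range(THREAD_SIZE, THREAD_SIZE,
				    VMALLOC_START, VMALLOC_END,
				    THREADINFO_GFP, PAGE_KERNEL,
				    0, node, __builtin_return_address(0));
}

Because this maps the stack one page at a time, it never benefits from
an order-2 page already sitting on a freelist, which is where I think
the extra microsecond or so goes.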

That being said, I think I will implement Linus' suggestion of a tiny
percpu cache.
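
Roughly, that would be a small per-CPU array of free stacks, checked
before falling back to vmalloc. A sketch, with the cache size and the
names picked arbitrarily:

#define NR_CACHED_STACKS 2
static DEFINE_PER_CPU(struct vm_struct *, cached_stacks[NR_CACHED_STACKS]);

static void *try_get_cached_stack(void)
{
	int i;

	for (i = 0; i < NR_CACHED_STACKS; i++) {
		/* Atomically claim a cached stack, if one is present. */
		struct vm_struct *s = this_cpu_xchg(cached_stacks[i], NULL);

		if (s)
			return s->addr;
	}
	return NULL;	/* miss: caller does the full vmalloc */
}

Freeing would do the reverse: try to stash the stack's vm_struct back
into an empty slot with this_cpu_cmpxchg() before resorting to vfree().
That should hide most of the vmalloc cost for fork-heavy workloads.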

--Andy