Re: [PATCH -V13 2/3] NUMA balancing: optimize page placement for memory tiering system

From: Huang, Ying
Date: Tue Mar 01 2022 - 01:48:11 EST


Miaohe Lin <linmiaohe@xxxxxxxxxx> writes:

> On 2022/2/21 16:45, Huang Ying wrote:
>> With the advent of various new memory types, some machines will have
>> multiple types of memory, e.g. DRAM and PMEM (persistent memory).
>> The memory subsystem of these machines can be called a memory
>> tiering system, because the performance of the different types of
>> memory usually differs.
>>
>> In such a system, because the memory access patterns change over
>> time, some pages in the slow memory may become globally hot. So in
>> this patch, the NUMA balancing mechanism is enhanced to optimize
>> page placement among the different memory types according to the
>> dynamic hot/cold state of the pages.
>>
>> In a typical memory tiering system, there are CPUs, fast memory and
>> slow memory in each physical NUMA node. The CPUs and the fast memory
>> will be put in one logical node (called fast memory node), while the
>> slow memory will be put in another (faked) logical node (called slow
>> memory node). That is, the fast memory is regarded as local while the
>> slow memory is regarded as remote. So it's possible for the recently
>> accessed pages in the slow memory node to be promoted to the fast
>> memory node via the existing NUMA balancing mechanism.
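>>
>> Whether a logical node is a fast memory node is identified by
>> whether the node has CPUs. A minimal sketch of the helper used for
>> this in this series (node_is_toptier()):
>>
>>         static inline bool node_is_toptier(int node)
>>         {
>>                 /* a node with CPUs holds the fast (DRAM) memory */
>>                 return node_state(node, N_CPU);
>>         }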
>>
>> The original NUMA balancing mechanism will stop migrating pages if
>> the free memory of the target node falls below the high watermark.
>> This is a reasonable policy if there's only one memory type. But it
>> makes the original NUMA balancing mechanism almost useless for
>> optimizing page placement among different memory types. Details are
>> as follows.
>>
>> It's common for the working-set size of the workload to be larger
>> than the size of the fast memory nodes; otherwise, there would be no
>> need to use the slow memory at all. So there are almost never enough
>> free pages in the fast memory nodes, which means the globally hot
>> pages in the slow memory node cannot be promoted to the fast memory
>> node. To solve the issue, we have the following 2 choices:
>>
>> a. Ignore the free pages watermark checking when promoting hot pages
>> from the slow memory node to the fast memory node. This will
>> create some memory pressure in the fast memory node and thus
>> trigger memory reclaim, so that the cold pages in the fast memory
>> node will be demoted to the slow memory node.
>>
>> b. Make kswapd of the fast memory node reclaim pages until the free
>> pages are a little above the high watermark (at a new watermark
>> named the promo watermark). Then, if the free pages of the fast
>> memory node drop back to the high watermark and some hot pages need
>> to be promoted, kswapd of the fast memory node will be woken up to
>> demote more cold pages in the fast memory node to the slow memory
>> node. This will free some extra space in the fast memory node, so
>> the hot pages in the slow memory node can be promoted to the fast
>> memory node.
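>>
>> To make choice "b" concrete, the two pieces of this patch look
>> roughly as follows (condensed, with details trimmed). In
>> pgdat_balanced(), kswapd balances the fast memory node against the
>> new promo watermark instead of the high watermark, keeping a little
>> extra free space around:
>>
>>         if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING)
>>                 mark = wmark_pages(zone, WMARK_PROMO);
>>         else
>>                 mark = high_wmark_pages(zone);
>>         if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
>>                 return true;
>>
>> And in numamigrate_isolate_page(), a promotion that finds the fast
>> memory node too full wakes kswapd to demote cold pages instead of
>> just failing silently:
>>
>>         if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
>>                 int z;
>>
>>                 if (!(sysctl_numa_balancing_mode &
>>                       NUMA_BALANCING_MEMORY_TIERING))
>>                         return 0;
>>                 for (z = pgdat->nr_zones - 1; z >= 0; z--) {
>>                         if (populated_zone(pgdat->node_zones + z))
>>                                 break;
>>                 }
>>                 wakeup_kswapd(pgdat->node_zones + z, 0, order,
>>                               ZONE_MOVABLE);
>>                 return 0;
>>         }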
>>
>> The choice "a" may create high memory pressure in the fast memory
>> node. If the memory pressure of the workload is high as well, the
>> pressure may become so high that the memory allocation latency of
>> the workload is affected, e.g. direct reclaim may be triggered.
>>
>> The choice "b" works much better in this regard. If the memory
>> pressure of the workload is high, hot page promotion will stop
>> earlier, because its allocation watermark is higher than that of the
>> normal memory allocation.
>
> Many thanks for your patch. The patch looks good to me, but I have a
> question. WMARK_PROMO is only used inside pgdat_balanced() when
> NUMA_BALANCING_MEMORY_TIERING is set. So its allocation watermark
> seems to be the same as that of the normal memory allocation. How
> should I understand the above sentence? Am I missing something?

Before allocating pages for promotion, the watermark of the fast
memory node will be checked (please refer to
migrate_balanced_pgdat()). If the free pages would fall below the
high watermark because of the promotion, the promotion will abort.
So the effective allocation watermark of promotion is the high
watermark, while normal memory allocation only needs to satisfy the
min/low watermarks.
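
Condensed, that check looks roughly like this (details trimmed):

        static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
                                           unsigned long nr_migrate_pages)
        {
                int z;

                for (z = pgdat->nr_zones - 1; z >= 0; z--) {
                        struct zone *zone = pgdat->node_zones + z;

                        if (!managed_zone(zone))
                                continue;

                        /* avoid waking kswapd by allocating the pages */
                        if (zone_watermark_ok(zone, 0,
                                              high_wmark_pages(zone) +
                                              nr_migrate_pages,
                                              ZONE_MOVABLE, 0))
                                return true;
                }
                return false;
        }

Note that nr_migrate_pages is required on top of high_wmark_pages(),
i.e. the fast memory node must stay above the high watermark even
after the promoted pages are allocated.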

Best Regards,
Huang, Ying