Re: [PATCH] mm/hotplug: Adjust shrink_zone_span() to keep the old logic

From: David Hildenbrand
Date: Thu Feb 06 2020 - 05:05:52 EST


On 06.02.20 11:02, David Hildenbrand wrote:
> On 06.02.20 11:00, Baoquan He wrote:
>> On 02/06/20 at 10:48am, David Hildenbrand wrote:
>>> On 06.02.20 10:35, Baoquan He wrote:
>>>> On 02/06/20 at 09:50am, David Hildenbrand wrote:
>>>>> On 06.02.20 06:39, Baoquan He wrote:
>>>>>> In commit 950b68d9178b ("mm/memory_hotplug: don't check for "all holes"
>>>>>> in shrink_zone_span()"), the resetting of zone->zone_start_pfn and
>>>>>> ->spanned_pages was moved into the if()/else if() branches that handle
>>>>>> the zone becoming empty. However, the second resetting block may cause
>>>>>> misunderstanding.
>>>>>>
>>>>>> So move the resetting code out of the conditional branches, just as the
>>>>>> old code did. The find_smallest_section_pfn() and
>>>>>> find_biggest_section_pfn() searches do the same thing as the old for
>>>>>> loop did, so the logic is kept the same as in the old code. This
>>>>>> removes the possible confusion.
>>>>>>
>>>>>> Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
>>>>>> ---
>>>>>> mm/memory_hotplug.c | 14 ++++++--------
>>>>>> 1 file changed, 6 insertions(+), 8 deletions(-)
>>>>>>
>>>>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>>>>> index 089b6c826a9e..475d0d68a32c 100644
>>>>>> --- a/mm/memory_hotplug.c
>>>>>> +++ b/mm/memory_hotplug.c
>>>>>> @@ -398,7 +398,7 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
>>>>>> static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>>>>>> unsigned long end_pfn)
>>>>>> {
>>>>>> - unsigned long pfn;
>>>>>> + unsigned long pfn = zone->zone_start_pfn;
>>>>>> int nid = zone_to_nid(zone);
>>>>>>
>>>>>> zone_span_writelock(zone);
>>>>>> @@ -414,9 +414,6 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>>>>>> if (pfn) {
>>>>>> zone->spanned_pages = zone_end_pfn(zone) - pfn;
>>>>>> zone->zone_start_pfn = pfn;
>>>>>> - } else {
>>>>>> - zone->zone_start_pfn = 0;
>>>>>> - zone->spanned_pages = 0;
>>>>>> }
>>>>>> } else if (zone_end_pfn(zone) == end_pfn) {
>>>>>> /*
>>>>>> @@ -429,10 +426,11 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>>>>>> start_pfn);
>>>>>> if (pfn)
>>>>>> zone->spanned_pages = pfn - zone->zone_start_pfn + 1;
>>>>>> - else {
>>>>>> - zone->zone_start_pfn = 0;
>>>>>> - zone->spanned_pages = 0;
>>>>>> - }
>>>>>> + }
>>>>>> +
>>>>>> + if (!pfn) {
>>>>>> + zone->zone_start_pfn = 0;
>>>>>> + zone->spanned_pages = 0;
>>>>>> }
>>>>>> zone_span_writeunlock(zone);
>>>>>> }
>>>>>>
>>>>>
>>>>> So, what if your zone starts at pfn 0? Unlikely that we can actually
>>>>> offline that, but still it is more confusing than the old code IMHO.
>>>>> Then I prefer to drop the second else case as discussed instead.
>>>>
>>>> Hmm, pfn is initialized to zone->zone_start_pfn, does it matter?
>>>> Even if such an impossible empty zone really happens, nothing will
>>>> go wrong.
>>>>
>>>
>>> If you offline any memory block that belongs to the lowest zone
>>> (zone->zone_start_pfn == 0) but does not fall on a boundary (so that you
>>> can actually shrink), you would mark the whole zone offline. That's
>>> broken unless I am missing something.
>>
>> AFAIK, page 0 is reserved. No valid zone can start at 0; only an empty
>> zone can. Please correct me if I am wrong.
>
> At least on x86 it indeed is :) So if this holds true for all archs
>
> Acked-by: David Hildenbrand <david@xxxxxxxxxx>
>
> Thanks!
>
>

Correction

Nacked-by: David Hildenbrand <david@xxxxxxxxxx>

s390x:
[linux1@rhkvm01 ~]$ cat /proc/zoneinfo
Node 0, zone DMA
per-node stats
[...]
node_unreclaimable: 0
start_pfn: 0

--
Thanks,

David / dhildenb