[PATCH -v3 14/14] x86, mm: Map ISA area with connected ram range at the same time

From: Yinghai Lu
Date: Wed Sep 05 2012 - 01:48:16 EST


Map the ISA area together with the connected RAM range inside the loop
over memory ranges, instead of with a separate work_fn() call up front,
so one loop can be dropped.

Signed-off-by: Yinghai Lu <yinghai@xxxxxxxxxx>
---
arch/x86/mm/init.c | 21 ++++++++++++++-------
1 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 6663f61..e69f832 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -248,20 +248,27 @@ static void __init walk_ram_ranges(
void *data)
{
unsigned long start_pfn, end_pfn;
+ bool isa_done = false;
int i;

- /* the ISA range is always mapped regardless of memory holes */
- work_fn(0, ISA_END_ADDRESS, data);
-
for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
u64 start = start_pfn << PAGE_SHIFT;
u64 end = end_pfn << PAGE_SHIFT;

- if (end <= ISA_END_ADDRESS)
- continue;
+ if (!isa_done && start > ISA_END_ADDRESS) {
+ work_fn(0, ISA_END_ADDRESS, data);
+ isa_done = true;
+ } else {
+ if (end < ISA_END_ADDRESS)
+ continue;
+
+ if (start <= ISA_END_ADDRESS &&
+ end >= ISA_END_ADDRESS) {
+ start = 0;
+ isa_done = true;
+ }
+ }

- if (start < ISA_END_ADDRESS)
- start = ISA_END_ADDRESS;
#ifdef CONFIG_X86_32
/* on 32 bit, we only map up to max_low_pfn */
if ((start >> PAGE_SHIFT) >= max_low_pfn)
--
1.7.7
