Re: [BUILD-FAILURE] linux-next: Tree for June 30

From: H. Peter Anvin
Date: Mon Jun 30 2008 - 16:43:22 EST


> H. Peter Anvin wrote:
>> Sam Ravnborg wrote:
>>> ah, ok. So the patch below should solve this for now?
>>
>> is there any particular reason why we are limited to 100 sections?
>> (is there some ELF limitation here perhaps?)
>
> I would still like to know if you see significantly different numbers
> than Kamalesh.
> If you see a number close to 100 then OK.
> But if you see a number, say in the range below 80, then we should
> dive deeper into this.
>
> I do not even know what the program does - never looked at it before,
> so why the original limit was 100 I dunno.


It looks to me that the people who did the relocatable kernel code just put in a magic number. There is certainly no inherent reason for this limit.

What's really ugly is that this is in a host-space program! It would have been one thing if it had been in a piece of code run in a restricted environment, e.g. in the decompressor, but this one runs in user space on the build machine.

The quick solution is to change this number to something obscenely big (say 10000, though even that could become an issue if we end up doing things like a section per function); the proper solution is to gather these arrays into a structure and allocate them dynamically.
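
Roughly what the dynamic version could look like -- a sketch only, not the
actual relocs.c change: the endian conversion and sanity checks the real
program does are left out, and read_shdrs() is just an illustrative helper
here. The idea is simply to size the section header table from e_shnum
instead of a fixed MAX_SHDRS:

#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

static Elf32_Ehdr ehdr;
static Elf32_Shdr *shdr;	/* instead of: static Elf32_Shdr shdr[MAX_SHDRS]; */

/* Read the section header table, allocating it based on e_shnum. */
static void read_shdrs(FILE *fp)
{
	if (fread(&ehdr, sizeof(ehdr), 1, fp) != 1) {
		perror("fread ehdr");
		exit(1);
	}
	shdr = calloc(ehdr.e_shnum, sizeof(Elf32_Shdr));
	if (!shdr) {
		perror("calloc");
		exit(1);
	}
	if (fseek(fp, ehdr.e_shoff, SEEK_SET) < 0 ||
	    fread(shdr, sizeof(Elf32_Shdr), ehdr.e_shnum, fp) != ehdr.e_shnum) {
		perror("read section headers");
		exit(1);
	}
}

int main(int argc, char **argv)
{
	FILE *fp;

	if (argc != 2 || !(fp = fopen(argv[1], "rb"))) {
		fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
		return 1;
	}
	read_shdrs(fp);
	printf("%u section headers\n", (unsigned)ehdr.e_shnum);
	free(shdr);
	fclose(fp);
	return 0;
}

The other per-section arrays would get the same treatment, which is where
gathering them into one structure starts to pay off.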

Here is a quick patch to just change the number; I'll take a quick pass to see how much work it'd be to allocate it dynamically.

	-hpa

diff --git a/arch/x86/boot/compressed/relocs.c b/arch/x86/boot/compressed/relocs.c
index edaadea..9daca63 100644
--- a/arch/x86/boot/compressed/relocs.c
+++ b/arch/x86/boot/compressed/relocs.c
@@ -10,7 +10,7 @@
 #define USE_BSD
 #include <endian.h>
 
-#define MAX_SHDRS 100
+#define MAX_SHDRS 10000
 #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
 static Elf32_Ehdr ehdr;
 static Elf32_Shdr shdr[MAX_SHDRS];