Re: Strange interrupt behaviour

Linus Torvalds (torvalds@transmeta.com)
Sat, 11 Jul 1998 22:35:25 -0700 (PDT)


On Sun, 12 Jul 1998 ak@muc.de wrote:
>
> Maybe it is a "balancing issue" on machines with 128MB of memory, but on
> 8MB machines I see no way to get this to work without making it unacceptably
> slow or wasting lots of memory (barring radical solutions like replacing the
> buddy page allocator with something that is less prone to fragmentation).

I can see one really _trivial_ way of handling this: make the page size be
8kB as far as the page allocator is concerned.

This has been suggested before for other reasons (kmalloc memory use, and
for NFS reasons - it would be really good for NFS if the page cache used
8kB pages instead of 4kB pages), and should work fairly well.

Yes, I do know that the x86 actually has 4kB pages, but that's a small
hardware detail. The Linux memory management could _trivially_ be changed
to think that page tables have 512 entries of 8 bytes each (mapping 8kB
per entry) instead of 1024 entries of 4 bytes each.
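
To make this concrete, the constants would change roughly like this (a
sketch only - "set_soft_pte()" is a made-up name, and the real headers
obviously have more to them). Each software page table entry just fills
in a pair of consecutive hardware entries:

/* Sketch: 8kB software pages on top of 4kB hardware pages. */
#define HW_PAGE_SHIFT   12                      /* 4kB hardware pages */
#define PAGE_SHIFT      13                      /* 8kB software pages */
#define PAGE_SIZE       (1UL << PAGE_SHIFT)
#define PTRS_PER_PTE    512                     /* was 1024 */

typedef unsigned long pte_t;

/* Map one 8kB software page: fill in both hardware PTEs. */
static void set_soft_pte(pte_t *hw_table, unsigned long index,
                         unsigned long phys, unsigned long flags)
{
        hw_table[2*index]   = phys | flags;
        hw_table[2*index+1] = (phys + (1UL << HW_PAGE_SHIFT)) | flags;
}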

The advantage would be bigger clusters for page-in and swapout, and
generally better performance (one page fault would fault in two of our
current small pages).

However, the big downside (apart from slightly bigger memory use) is that
it impacts user level (mmap would also be 8kB-constrained). And that is
probably unacceptable.

So the slightly uglier version of this is to do all page allocations as 8kB
chunks, and then the memory management layer has its own "sub-buddy" to
split an 8kB page into two hardware pages. The sparc already does something
like this for some of its page table pages, which are partial pages rather
than a full page like on a normal architecture. It wouldn't be too painful
to do this generically.
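
In user-space terms the sub-buddy idea looks something like the sketch
below (untested, all names made up - in the kernel it would sit on top of
__get_free_pages() instead of malloc()):

#include <stdlib.h>

#define CHUNK_SIZE 8192
#define HALF_SIZE  4096

struct chunk {
        char *mem;              /* the underlying 8kB allocation */
        unsigned used;          /* bit 0 / bit 1: which half is taken */
        struct chunk *next;
};

static struct chunk *chunks;    /* all live 8kB chunks */

/* Hand out one 4kB half, reusing a half-used chunk first. */
void *half_alloc(void)
{
        struct chunk *c;

        for (c = chunks; c; c = c->next) {
                if (c->used != 3) {
                        int half = (c->used & 1) ? 1 : 0;
                        c->used |= 1 << half;
                        return c->mem + half * HALF_SIZE;
                }
        }
        c = malloc(sizeof(*c));
        if (!c)
                return NULL;
        c->mem = malloc(CHUNK_SIZE);
        if (!c->mem) {
                free(c);
                return NULL;
        }
        c->used = 1;            /* first half taken */
        c->next = chunks;
        chunks = c;
        return c->mem;
}

/* Free a 4kB half; release the 8kB chunk once both halves are free. */
void half_free(void *p)
{
        struct chunk *c, **cp;

        for (cp = &chunks; (c = *cp) != NULL; cp = &c->next) {
                if ((char *)p == c->mem || (char *)p == c->mem + HALF_SIZE) {
                        c->used &= ~(((char *)p == c->mem) ? 1u : 2u);
                        if (!c->used) {
                                *cp = c->next;
                                free(c->mem);
                                free(c);
                        }
                        return;
                }
        }
}

The point is that an 8kB chunk only goes back to the page allocator once
both halves are free, so the 4kB allocations can never fragment the
allocator's view of memory.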

However, I'd prefer to still try out some other ways of handling this. For
example, "__get_free_pages()" currently only re-tries once. It shouldn't
be hard to make it re-try a few more times, and it might well be enough to
make the problem go away.
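
Something along these lines - "alloc_attempt()" and "reclaim_memory()" are
just stand-ins here for the buddy fast path and the swap-out path, so this
only shows the shape of the loop:

#define MAX_TRIES 4

static void *alloc_attempt(int order);          /* stand-in, hypothetical */
static int reclaim_memory(int order);           /* stand-in, hypothetical */

void *get_free_pages_retry(int order)
{
        int tries;

        for (tries = 0; tries < MAX_TRIES; tries++) {
                void *page = alloc_attempt(order);
                if (page)
                        return page;
                if (!reclaim_memory(order))
                        break;  /* nothing freed: retrying is pointless */
        }
        return NULL;
}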

2.1.x is not going to be usable on 4MB machines. I didn't even have to
change the kernel for that - the distributions have made that abundantly
clear anyway. It may be that we will simply say that "hey, if you have a
486-8MB, then 2.0.x works better, and the new features of 2.1.x aren't
worth it for you".

One of the reasons I disliked Minix back when I used it was that it was
designed for a machine that was no longer current. I want new versions of
Linux to be optimized for new hardware, and I also think that it should be
acceptable to tell people that they can still use old kernel versions. I
got reports of people using Linux-1.0.x long into the 2.1.x development
tree, and they may still be out there for all I know. And to some degree
it is actually _good_ that people decide that they don't need/want to
upgrade to newer systems if their old setup is good enough for them.

Linus
