Re: How to increat [sic.] max open files?

Andi Kleen (andi@mlm.extern.lrz-muenchen.de)
02 Jan 1997 17:52:49 +0100


"James L. McGill" <fishbowl@fotd.netcomi.com> writes:

> On Thu, 2 Jan 1997, Marko Sepp wrote:
>
> > >I am trying to increase the maximum number of open files
> > >(currently 256). I use Linux 2.0.0 (Slackware 96).
> > >I tried
> > >ulimit -n unlimited
> > >but, the message says
> > >ulimit: cannot raise limit: Operation not permitted.
> > >
> > >Does anybody know how I can increase the max open files?
> >
> > Try editing /usr/include/linux/limits.h: change the value for
> > open files and recompile the kernel.
> >
> > Marko
>
>
> Er, NO. With as much attention as this issue has had in recent months,
> I am quite surprised that the kernel and libc code have not adopted increased
> filehandle support. There are still people saying that "256 filehandles
> should be enough for anyone." Isn't that attitude philosophically flawed,
> especially given the people who genuinely need this kind of scaling?
>
> I use the following patch to get 2048 file descriptors per process.
> Unfortunately, if I try to double that to 4096 FDs (which I really
> do need...) I get mysterious lockups. When I do this, I also build the
> following programs from source:

You get the lockups because the kernel select() routine (fs/select.c)
puts 6 copies of fd_set on the kernel stack. The kernel stack per
process on i386 is limited to 4K. With NR_OPEN at 4096, one fd_set is
4096 bits = 512 bytes, so the six copies alone eat 3072 bytes, and the
rest of the call chain overflows the kernel stack as soon as a process
uses select(). The proper solution would be for sys_select() to
allocate the fd_set copies dynamically with kmalloc().
Someone has to hack this in.
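
For the curious, here is a minimal sketch of the shape such a fix
could take, assuming 2.0-era headers and APIs (kmalloc() from
<linux/malloc.h>). It is not a drop-in patch: do_select_dynamic() is a
hypothetical helper standing in for the existing copy-in / do_select()
/ copy-out logic in fs/select.c, and error handling is simplified.

    #include <linux/linkage.h>   /* asmlinkage */
    #include <linux/types.h>     /* fd_set */
    #include <linux/errno.h>
    #include <linux/time.h>
    #include <linux/mm.h>        /* GFP_KERNEL */
    #include <linux/malloc.h>    /* kmalloc()/kfree() on 2.0 kernels */

    /* Hypothetical helper: the existing copy-in / do_select() /
     * copy-out code, rewritten to use the six fd_sets in 'fds'
     * instead of stack locals. */
    extern int do_select_dynamic(int n, fd_set *fds, fd_set *inp,
                                 fd_set *outp, fd_set *exp,
                                 struct timeval *tvp);

    asmlinkage int sys_select(int n, fd_set *inp, fd_set *outp,
                              fd_set *exp, struct timeval *tvp)
    {
            fd_set *fds;   /* in, out, ex, res_in, res_out, res_ex */
            int error;

            /* One heap allocation replaces 6*sizeof(fd_set) bytes of
             * kernel stack -- 3072 bytes with NR_OPEN == 4096. */
            fds = (fd_set *) kmalloc(6 * sizeof(fd_set), GFP_KERNEL);
            if (!fds)
                    return -ENOMEM;
            error = do_select_dynamic(n, fds, inp, outp, exp, tvp);
            kfree(fds);
            return error;
    }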

BTW, it's better to use an NR_OPEN of 1024 (as Digital Unix and Solaris
do). With bigger values the FD_SET(), FD_ZERO(), etc. macros in glibc
won't work, because fd_set and FD_SETSIZE are fixed when libc is compiled.
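
A small userspace program makes the problem visible; nothing here is
kernel-specific, it only shows that the macros operate on a bit array
whose size is baked in at compile time:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/time.h>   /* fd_set, FD_SETSIZE, FD_ZERO, FD_SET */

    int main(void)
    {
            fd_set set;

            /* FD_SETSIZE comes from the C library headers; raising
             * NR_OPEN in the kernel does not change it. */
            printf("FD_SETSIZE = %d, sizeof(fd_set) = %lu bytes\n",
                   FD_SETSIZE, (unsigned long) sizeof(set));

            FD_ZERO(&set);
            /* The macros do no bounds checking: FD_SET(2048, &set) on
             * a libc built with FD_SETSIZE == 1024 would write past
             * the end of 'set' and corrupt the stack. */
            return 0;
    }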

-Andi