In linux.dev.kernel, article <5andlm$ikg@palladium.transmeta.com>,
hpa@transmeta.com (H. Peter Anvin) writes:
> By author: yuri@rgti.com (yuri mironoff)
> >
> > Forgive my ignorance but why not implement dynamic file descriptor
> > allocation??? All these arguments about a maximum NR_OPEN would then
> > become inconsequential.
>
> It ain't so easy, and it's BSD's fault. select() has a very nice way
> to ensure that a bitmask of file descriptors is the right size, but
> the FD_* support routines immediately bungle that very nice idea by
> effectively requiring the maximum number of file descriptors
> (technically, the largest possible numerical value of a file
> descriptor) to be known at compile time. There is no way in C to
I just found this in the Digital Unix man page for select:
NOTES
  [Digital] Although the getdtablesize() function is intended to allow
  users to write programs independently of the kernel limit on the number
  of open files, the dimensioning of a sufficiently large bit field for
  select() remains a problem. FD_SETSIZE is set to the current kernel
  limit on the permitted number of open files (as specified by
  OPEN_MAX_SYSTEM). To accommodate programs that need to specify
  alternate fd_set sizes, it is possible to specify an alternative value
  for FD_SETSIZE before including the sys/time.h header file.
Our sys/time.h file includes linux/time.h, which unconditionally defines
FD_SETSIZE, so this will not work under Linux. It should be trivial to
change this, and it's probably a good idea. It will not fix all problems,
but it would give people a more portable way of using large numbers of
file descriptors.
Anyway, just thought I would throw this out.
Thanks,
Jim
PS. I tried to write a program to test select() for the person who produced
    the kernel patch to increase its limits. My test program does not work,
    and I can't figure out why. I know it's something stupid, but I don't
    see it. If you are good at these things and want to help me out,
    take a look at:
http://www.acs.uncwil.edu/~jlnance/fdtest.c