Re: NR_OPEN vs OPEN_MAX vs RLIMIT_NOFILE, and a glibc bug?

Andreas Schwab (schwab@issan.informatik.uni-dortmund.de)
29 Mar 1999 11:43:39 +0200


Olaf Titz <olaf@bigred.inka.de> writes:

|> > Both getdtablesize and RLIMIT_NOFILE are only relevant for newly opened
|> > descriptors. See f.ex. the manual page on SunOS4:
|> >
|> > The call getdtablesize() returns the current value of the
|> > soft limit component of the RLIMIT_NOFILE resource limit.
|> > This resource limit governs the maximum value allowable as
|> > the index of a newly created descriptor.
|>
|> This implies that the common usage of getdtablesize() to find out the
|> highest numbered fd is wrong to begin with, since a process could open
|> (or inherit!) an fd and subsequently lower RLIMIT_NOFILE. It also
|> suggests it is not possible to find out the highest numbered fd at
|> all. (And using the _soft_ limit makes it sound yet more questionable
|> IMHO.)

Well, this is rather unusual, but you seem to be right.
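
An untested sketch (assuming open() returns descriptor 3 here and that we
may lower the limit) shows the problem: the descriptor opened before
setrlimit() stays open and usable, even though it is now at or above
getdtablesize().

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Open a descriptor first; assume it comes back as 3. */
        int fd = open("/dev/null", O_RDONLY);
        if (fd == -1)
            return 1;

        /* Now lower RLIMIT_NOFILE below that descriptor.  The limit only
           governs newly created descriptors, so fd remains valid. */
        struct rlimit rl = { 3, 3 };
        if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
            perror("setrlimit");

        printf("fd = %d, getdtablesize() = %d\n", fd, getdtablesize());
        return 0;
    }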

|> Net result: apart from using /proc (which is Linux-specific) there is
|> no reliable way to close all open fds and the traditional way using
|> the loop like in ciped is bogus. Right?

Yes, but apart from csh and some daemons, who needs to do a blanket close
anyway?
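
For the record, a minimal sketch of the /proc variant Olaf mentions might
look like this (Linux-specific, assumes /proc is mounted; the descriptor
numbers are collected first so the directory stream is not disturbed while
closing):

    #include <dirent.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void close_all_fds(void)
    {
        int fds[1024], n = 0;
        struct dirent *e;
        DIR *d = opendir("/proc/self/fd");

        if (d == NULL)
            return;                     /* no /proc: caller must fall back */
        while ((e = readdir(d)) != NULL && n < 1024) {
            int fd = atoi(e->d_name);   /* "." and ".." yield 0, skipped below */
            if (fd > 2 && fd != dirfd(d))
                fds[n++] = fd;          /* keep stdin/stdout/stderr open */
        }
        closedir(d);
        while (n > 0)
            close(fds[--n]);
    }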

-- 
Andreas Schwab                                      "And now for something
schwab@issan.cs.uni-dortmund.de                      completely different"
schwab@gnu.org
