Re: More than 1M open file descriptors

From: Eric Dumazet
Date: Wed May 19 2010 - 11:33:40 EST


On Wednesday, May 19, 2010 at 10:52 -0400, John Masinter wrote:
> I'm a software engineer and work with a fortune-500 company. We have a
> line of linux-powered network security appliances and I am responsible
> for Linux / OS development.
> Our multi-core appliance handles a large number of low-bandwidth tcp
> connections, and after years of performance streamlining, we have hit
> the kernel's million tcp connection limit.
> Is there a kernel hack to increase the file descriptor limit beyond 1M
> so that we may have more than 1M open tcp connections? If this is
> discussed elsewhere, please point me in the right direction. Many
> thanks to everyone that contributes to open source.
> --

It's not a problem with a recent kernel (2.6.25 or later):

commit 9cfe015aa424b3c003baba3841a60dd9b5ad319b
Author: Eric Dumazet <dada1@xxxxxxxxxxxxx>
Date: Wed Feb 6 01:37:16 2008 -0800

get rid of NR_OPEN and introduce a sysctl_nr_open

NR_OPEN (historically set to 1024*1024) actually forbids processes to open
more than 1024*1024 handles.

Unfortunately some production servers hit the not so 'ridiculously high
value' of 1024*1024 file descriptors per process.

Changing NR_OPEN is not considered safe because of potential vmalloc
space exhaustion.

This patch introduces a new sysctl (/proc/sys/fs/nr_open) which defaults to
1024*1024, so that admins can decide to change this limit if their workload
needs it.
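
A minimal sketch (assumes a Linux host) of how the pieces fit together:
fs.nr_open is the kernel-wide ceiling introduced by this patch, and the
per-process RLIMIT_NOFILE hard limit cannot be raised above it. The
script below just reads both values; the 2097152 figure in the comment
is an illustrative target, not a recommendation.

```python
import resource

# Kernel-wide cap on per-process file descriptors (defaults to 1024*1024).
# An admin can raise it, e.g.: sysctl -w fs.nr_open=2097152 (requires root).
with open("/proc/sys/fs/nr_open") as f:
    nr_open = int(f.read())

# Per-process limits; the hard limit must stay at or below fs.nr_open.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

print(f"fs.nr_open = {nr_open}")
print(f"RLIMIT_NOFILE soft/hard = {soft}/{hard}")
```

After raising fs.nr_open, the per-process limit still has to be lifted
separately (ulimit -n, or setrlimit() in the server itself) before a
process can actually open more than the old 1M handles.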



--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/