Re: A peek at the future of storage

From: Daniel Phillips
Date: Wed Dec 12 2007 - 13:02:55 EST


On Wednesday 12 December 2007 09:46, J. Bruce Fields wrote:
> On Wed, Dec 12, 2007 at 08:46:18AM -0800, Daniel Phillips wrote:
> > Incidentally, we ran our tests with 128 knfsd threads. The default
> > of 8 threads produces miserable performance on the SSD, which gave
> > us a good scare on our initial test run. It would be very nice to
> > implement an algorithm to scale the knfsd thread pool
> > automatically, in order to eliminate this class of thing that can
> > go wrong. If somebody became inspired to take on that little
> > project that would be great, otherwise it is in our pipeline for,
> > hmm, Christmas delivery. (Exactly which Christmas is left
> > unspecified.)
>
> People have proposed writing a daemon that just reads
> /proc/net/rpc/nfsd periodically and uses that to adjust the number of
> threads from userspace, probably subject to some limits in a config
> file someplace. (Think that could do the job, or is there some reason
> this would be easier in the kernel?)

I didn't actually say "kernel", though that was what I was thinking,
perhaps just out of habit. It seems to me it would be a relatively
small change to the existing code, essentially just finishing the
idea, without needing userspace to patch things up afterward.
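
For comparison, the daemon Bruce describes would presumably boil down
to something like the sketch below: watch the "th" line in
/proc/net/rpc/nfsd and write a bigger number into /proc/fs/nfsd/threads
whenever the all-threads-busy counter has moved since the last poll.
The "th" field layout assumed here, the poll interval, the growth step
and the ceiling are guesses and config knobs, not tested code:

/*
 * Rough sketch of the userspace approach.  The "th" line is assumed
 * to start with the thread count and an all-threads-busy counter;
 * the interval, step and ceiling are placeholders for config options.
 */
#include <stdio.h>
#include <unistd.h>

#define MAX_THREADS	128	/* would come from a config file */
#define GROW_STEP	4
#define POLL_SECS	5

static unsigned long read_stats(int *nthreads)
{
	FILE *f = fopen("/proc/net/rpc/nfsd", "r");
	char line[256];
	unsigned long all_busy = 0;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "th %d %lu", nthreads, &all_busy) == 2)
			break;
	fclose(f);
	return all_busy;
}

static void set_threads(int n)
{
	FILE *f = fopen("/proc/fs/nfsd/threads", "w");

	if (f) {
		fprintf(f, "%d\n", n);
		fclose(f);
	}
}

int main(void)
{
	unsigned long last_busy = 0;

	for (;;) {
		int nthreads = 0;
		unsigned long busy = read_stats(&nthreads);

		/*
		 * The daemon only finds out after the fact that every
		 * thread was busy; all it can do is add a few more and
		 * hope, up to the configured ceiling.
		 */
		if (busy > last_busy && nthreads > 0 &&
		    nthreads < MAX_THREADS) {
			int want = nthreads + GROW_STEP;

			set_threads(want > MAX_THREADS ? MAX_THREADS : want);
		}
		last_busy = busy;
		sleep(POLL_SECS);
	}
	return 0;
}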

So how would a userspace daemon know that the kernel is blocking and
new threads are needed? In the kernel this is pretty easy: when a new
request arrives, look at the thread list and if no thread is
available, spawn a new one. Something special needs to be done to
handle the case where no threads are available because they are all
piled up on a semaphore due to, for example, somebody unplugging the
network cable for a remote disk. We have to avoid spawning an
unbounded number of threads in that case. Ideas?
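
To make the in-kernel side concrete, the check I have in mind at
request arrival time is roughly the following. The pool structure,
the hook and the thread function are made-up names for illustration,
not the actual sunrpc code, and the service loop itself is left out:

/*
 * Sketch only: illustrative names, not the real knfsd structures.
 * The point is the bounded grow-on-demand check at request arrival.
 */
#include <linux/kthread.h>
#include <linux/err.h>
#include <asm/atomic.h>

struct nfsd_pool {
	atomic_t	idle_threads;	/* threads waiting for a request */
	atomic_t	total_threads;	/* everything we have spawned */
	int		max_threads;	/* hard cap, say 128 */
};

int nfsd_thread_fn(void *data);	/* the usual service loop, defined elsewhere */

/* Called when a request arrives and no idle thread picks it up. */
static void nfsd_maybe_grow(struct nfsd_pool *pool)
{
	struct task_struct *t;

	if (atomic_read(&pool->idle_threads) > 0)
		return;

	/*
	 * Bound the growth: if every thread is piled up behind a dead
	 * device we stop at max_threads instead of forking forever.
	 */
	if (atomic_inc_return(&pool->total_threads) > pool->max_threads) {
		atomic_dec(&pool->total_threads);
		return;
	}

	t = kthread_run(nfsd_thread_fn, pool, "nfsd");
	if (IS_ERR(t))
		atomic_dec(&pool->total_threads);
}

The cap only bounds that failure mode, it does not detect it; telling
"every thread busy with real work" apart from "every thread stuck
behind a dead device" is exactly where better ideas would help.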

Daniel