Re: NFS: low read/stat performance on small files

From: Namjae Jeon
Date: Sat Mar 24 2012 - 03:03:37 EST


2012/3/23 Myklebust, Trond <Trond.Myklebust@xxxxxxxxxx>:
> On Fri, 2012-03-23 at 07:49 -0400, Jim Rees wrote:
>> Vivek Trivedi wrote:
>>
>> 204800 bytes (200.0KB) copied, 0.027074 seconds, 7.2MB/s
>> Read speed for the 200KB file is 7.2 MB/sec
>>
>> 104857600 bytes (100.0MB) copied, 9.351221 seconds, 10.7MB/s
>> Read speed for the 100MB file is 10.7 MB/sec
>>
>> As you can see, the read speed for the 200KB file is only 7.2 MB/sec while
>> it is 10.7 MB/sec when we read the 100MB file.
>> Why is there so much difference in read performance?
>> Is there any way to achieve a high read speed for small files?
>>
>> That seems excellent to me. 204800 bytes at 11213252 per sec would be 18.3
>> msec, so your per-file overhead is around 9 msec. The disk latency alone
>> would normally be more than that.
>
> ...and the reason why the performance is worse for the 200K file
> compared to the 100M one is easily explained.
>
> When opening the file for reading, the client has a number of
> synchronous RPC calls to make: it needs to look up the file, check
> access permissions and possibly revalidate its cache. All these tasks
> have to be done in series (you cannot do them in parallel), and so the
> latency of each task is limited by the round-trip time to the server.
>
> On the other hand, once you get to doing READs, the client can send a
> bunch of readahead requests in parallel, thus ensuring that the server
> can use all the bandwidth available to the TCP connection.
>
> So your result is basically showing that for small files, the proportion
> of (readahead) tasks that can be done in parallel is smaller. This is as
> expected.
>
> --
> Trond Myklebust
> Linux NFS client maintainer
>
> NetApp
> Trond.Myklebust@xxxxxxxxxx
> www.netapp.com
>

Dear Trond,
I agree with your answer; thanks a lot for the detailed explanation.
To convince myself, I plugged the numbers into the two small sketches below:
the first derives the per-file overhead Jim estimated from our dd output, and
the second models the serial-open / parallel-read behaviour you describe.
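A minimal Python sketch (my own back-of-the-envelope, not taken from the NFS
code) that reproduces Jim's per-file overhead estimate from the two dd runs:

# Sustained bandwidth measured on the 100MB read (dd's "10.7MB/s").
STREAM_BW = 104857600 / 9.351221      # ~11213255 bytes/sec

SMALL_SIZE = 204800                   # the 200KB file
MEASURED   = 0.027074                 # seconds reported by dd for that file

transfer = SMALL_SIZE / STREAM_BW     # ~18.3 msec if the read ran at full speed
overhead = MEASURED - transfer        # ~8.8 msec of per-file (open/lookup) cost

print("per-file overhead: %.1f msec" % (overhead * 1e3))
print("effective speed:   %.1f MB/s" % (SMALL_SIZE / MEASURED / 2**20))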
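And a rough model of the open-versus-read split you describe. The round-trip
time and the number of open-time RPCs are just assumptions I picked to match
our setup, not values taken from the client code:

# Rough model: the open path is a few serial round trips, while the READs
# are pipelined by readahead and therefore bandwidth-bound.
RTT       = 0.003                     # assumed ~3 msec server round-trip time
OPEN_RPCS = 3                         # e.g. LOOKUP + ACCESS + GETATTR, in series
BANDWIDTH = 104857600 / 9.351221      # bytes/sec once readahead is streaming

def effective_speed(size_bytes):
    open_cost = OPEN_RPCS * RTT         # serial, so the latencies add up
    read_cost = size_bytes / BANDWIDTH  # parallel readahead, bandwidth-bound
    return size_bytes / (open_cost + read_cost) / 2**20

for size in (200 * 1024, 100 * 2**20):
    print("%10d bytes -> %.1f MB/s" % (size, effective_speed(size)))

With ~9 msec of serial open cost this prints roughly 7.2 and 10.7 MB/s, so the
small-file result looks like the expected cost of the extra round trips rather
than a client problem.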