Hello,
I'd appreciate your patience in reading this slightly long mail and answering.
I am running some throughput tests on the filesystem. I basically have
three scenarios:
1. A user process reading and writing the file with ordinary read() and
write() calls - the simplest case.
2. Input and output files mapped into user space with mmap(), then
memcpy()ing from the input mapping to the output mapping - following an
example in Richard Stevens' "Advanced Programming in the UNIX Environment".
3. My own code, which does the transfer at the buffer cache level
(I read a buffer with generic_file_read() and write it out with
ext2_file_write()).
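For reference, cases 1 and 2 can be sketched in user space roughly as below. This is a minimal sketch, not the exact test code: error handling is trimmed, the 4 KB buffer size and file arguments are placeholders, and the mmap variant sizes the destination with ftruncate() before mapping, as the APUE example does.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define BUFSZ 4096  /* one page; a placeholder buffer size */

/* Case 1: plain read()/write() copy loop.
 * Every iteration crosses the user/kernel boundary twice and copies
 * the data through a user-space buffer. */
static void copy_readwrite(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    char buf[BUFSZ];
    ssize_t n;

    while ((n = read(in, buf, BUFSZ)) > 0)
        write(out, buf, n);
    close(in);
    close(out);
}

/* Case 2: mmap() both files and memcpy() between the mappings.
 * No read()/write() calls in the loop; pages are faulted in and
 * written back via the shared mappings. */
static void copy_mmap(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    int out = open(dst, O_RDWR | O_CREAT | O_TRUNC, 0644);
    struct stat st;

    fstat(in, &st);
    ftruncate(out, st.st_size);  /* size the destination before mapping */

    void *s = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, in, 0);
    void *d = mmap(NULL, st.st_size, PROT_WRITE, MAP_SHARED, out, 0);

    memcpy(d, s, st.st_size);

    munmap(s, st.st_size);
    munmap(d, st.st_size);
    close(in);
    close(out);
}
```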
I find case 2 faster than the other two (which is why I am slightly
confused), and case 3 faster than case 1 (which is expected).
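A throughput comparison like this can be timed with a small wall-clock harness along these lines; this is just an illustrative sketch (the function-pointer interface is my own, not from the original tests), using gettimeofday():

```c
#include <stdio.h>
#include <sys/time.h>

/* Time one copy routine over a src/dst pair and return elapsed
 * wall-clock seconds. Throughput is then file_size / elapsed. */
static double elapsed_seconds(void (*copy)(const char *, const char *),
                              const char *src, const char *dst)
{
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    copy(src, dst);
    gettimeofday(&t1, NULL);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}
```

In practice one would run each scenario several times and drop the first run, since the page cache warms up after the first pass and skews single-shot numbers.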
My question is:
---------------
Why is case 3 not on par with case 2?
I understand that in case 3 I read a page at a time (generic_file_read
reads four BLOCKSIZE'd chunks into a page, while ext2_file_write writes
one BLOCKSIZE at a time).
But mmap() and memcpy() should ultimately result in the same operations
at the buffer cache level, right?
So how is it that I am getting better throughput in the mmap case? Is
there something more to it, like caching effects? How does file mmap
work, in that case?
Or am I missing something here?
Could someone please shed some light on this matter?
Thanks,
Pramodh
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
This archive was generated by hypermail 2b29 : Sun Apr 23 2000 - 21:00:10 EST