Re: designing fast, streaming disk i/o with mmap: help wanted

From: Manfred Spraul (manfreds@colorfullife.com)
Date: Sat Apr 01 2000 - 13:11:40 EST


From: "Paul Barton-Davis" <pbd@Op.Net>
>
> But every so often, it fails, with an mmap(2) call that takes tens of
> msecs, a copy that takes the same, or both. This leads to missed
> deadlines reading from the soundcard.
>
> My guess is that the VM system decided to throw out some of the pages
> that the butler thread faulted in, and when the audio thread finally
> comes to use them, it has to refault them back into RAM. Right now, I
> am typically using 64K * 24 tracks = 1MB of RAM for the read-ahead,
> and at any point in time, 32K for each track currently playing back
> (I map and unmap the 32K immediately). I would not have expected such
> small amounts of RAM to have been the subject of page "ejection".
>

Perhaps this is a "soft" page fault, and the mmap semaphore was busy?
The VM system often removes pages from a process's address space; otherwise
the aging algorithms won't work. A soft page fault should be very fast,
but if another thread caused a real page fault, then both threads will wait
for the actual disk I/O. In an artificial test, this caused a 30% slowdown;
a rough sketch of that scenario is below.
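
To make the scenario concrete, here is a rough sketch (not the actual
test mentioned above) of two threads in one process: one touching pages
that should at worst soft-fault, the other forcing hard faults from
disk. The file names "hot.dat" and "cold.dat" are hypothetical test
files; compile with -lpthread.

/* Rough sketch of the contention described above: both fault paths take
 * the same per-process mm semaphore, so a cheap fault in one thread can
 * stall behind another thread's disk I/O. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <unistd.h>

#define HOT_SIZE  (1024 * 1024)       /* ~1MB, like the read-ahead buffers */
#define COLD_SIZE (64 * 1024 * 1024)  /* large enough to need real disk I/O */

static volatile char sink;

static void *hard_faulter(void *arg)
{
    char *cold = arg;
    long page = sysconf(_SC_PAGESIZE);
    size_t off;

    for (off = 0; off < COLD_SIZE; off += page)
        sink = cold[off];   /* hard fault: disk I/O under the mm semaphore */
    return NULL;
}

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    int hot_fd = open("hot.dat", O_RDONLY);     /* hypothetical test files */
    int cold_fd = open("cold.dat", O_RDONLY);
    char *hot = mmap(NULL, HOT_SIZE, PROT_READ, MAP_SHARED, hot_fd, 0);
    char *cold = mmap(NULL, COLD_SIZE, PROT_READ, MAP_SHARED, cold_fd, 0);
    struct timeval t0, t1;
    pthread_t tid;
    size_t off;

    if (hot == MAP_FAILED || cold == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    for (off = 0; off < HOT_SIZE; off += page)
        sink = hot[off];    /* prime the cache: later touches are at worst
                             * soft faults after the VM ages the PTEs away */

    pthread_create(&tid, NULL, hard_faulter, cold);

    gettimeofday(&t0, NULL);
    for (off = 0; off < HOT_SIZE; off += page)
        sink = hot[off];    /* the "audio thread": should be fast, but can
                             * queue behind the other thread's hard faults */
    gettimeofday(&t1, NULL);
    printf("hot pass: %ld us\n", (t1.tv_sec - t0.tv_sec) * 1000000L
                                 + (t1.tv_usec - t0.tv_usec));
    pthread_join(tid, NULL);
    return 0;
}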

I would use mlock() from butler processes, in one of two arrangements
(see the sketch after this list):
* one butler process per channel: each mlock()s the next few hundred kB of
its audio channel into its own address space. One mlock() should be much
faster than the individual page faults.
* a single butler thread mlock()s the data into the sound thread's process.
This should be very fast, and perhaps you need only one thread.
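
Here is a minimal sketch of the read-ahead window such a butler could
maintain per track; the struct and the advance policy are my
assumptions, not code from the application:

/* Per-track read-ahead window kept resident by a butler.  "cursor" is
 * assumed to be page aligned; WINDOW is the "few hundred kB".  Note
 * that mlock() needs root (CAP_IPC_LOCK) on 2.2. */
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>

#define WINDOW (256 * 1024)

struct track {
    int   fd;       /* the track's audio file */
    off_t cursor;   /* playback position, page aligned */
    char *window;   /* currently locked read-ahead window, or NULL */
};

/* Slide the locked window forward; one mlock() faults the whole window
 * in at once instead of page by page. */
int butler_advance(struct track *t)
{
    char *next = mmap(NULL, WINDOW, PROT_READ, MAP_SHARED, t->fd, t->cursor);
    if (next == MAP_FAILED)
        return -1;
    if (mlock(next, WINDOW) < 0) {
        munmap(next, WINDOW);
        return -1;
    }
    if (t->window) {
        munlock(t->window, WINDOW);   /* let the VM reclaim the old window */
        munmap(t->window, WINDOW);
    }
    t->window = next;
    t->cursor += WINDOW;
    return 0;
}

In the second arrangement the same function simply runs in a butler
thread of the sound process: threads share one address space, so the
locked window is resident for the audio thread too.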

If you run on a recent 2.3 kernel, use madvise().
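
A short sketch of what I mean, assuming the 2.3 madvise() honors the
usual MADV_SEQUENTIAL and MADV_WILLNEED hints (the function and its
parameters here are illustrative):

#include <sys/mman.h>
#include <sys/types.h>

/* Tell the VM the mapping is streamed, and ask it to start reading the
 * next window before the audio thread needs it. */
void hint_track(char *map, size_t maplen, off_t cursor, size_t window)
{
    madvise(map, maplen, MADV_SEQUENTIAL);        /* sequential access */
    madvise(map + cursor, window, MADV_WILLNEED); /* prefetch next window */
}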

The read-ahead for mmap in 2.2 is static: the code always reads
(1 << page_cluster) pages, and page_cluster is a global value chosen at
boot from the total system memory (you can also change it with "echo 10 >
/proc/sys/vm/page-cluster", but you must be root).
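
A quick sketch to see what the current setting means in bytes on a
given machine:

#include <stdio.h>
#include <unistd.h>

/* Print how much data one mmap read-ahead pulls in:
 * (1 << page_cluster) pages. */
int main(void)
{
    FILE *f = fopen("/proc/sys/vm/page-cluster", "r");
    int cluster;

    if (!f || fscanf(f, "%d", &cluster) != 1) {
        perror("/proc/sys/vm/page-cluster");
        return 1;
    }
    fclose(f);
    printf("read-ahead: %d pages (%ld bytes)\n",
           1 << cluster, (1L << cluster) * sysconf(_SC_PAGESIZE));
    return 0;
}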
