Re: real-time threaded IO with low latency (audio)

Steve Underwood (steveu@netpage.com.hk)
Mon, 26 Jul 1999 02:34:23 +0000


David Olofson wrote:

> Bill Huey wrote:
> >
> > "Wierd Swap Modifications"
> >
> > > Sounds like a very hard thing to get working efficiently... And very
> > > limited in usefulness. What if you're handling several streams? (Like
> > > multitrack audio :-) How would that be handled efficiently? If you just
> > > interleave the data, you'll end up with a useless structure on the disk,
> > > while the swap can hardly be expected to take care of you writing to
> > > several buffers at once.
> >
> > What I was pondering here was trying to get around the normal file
> > system code yet still have some kind of access to the disk for dumping
> > large streams to swap.
>
> Hmmm... My feeling is that it would be a better idea trying to get at
> the disk in a more direct way than via the swap. The Linux/RTL driver
> model I'm working on might help here, as it allows drivers to be used
> from Linux or RTL transparently. The sync methods used depend on which
> subsystem opens the device, and who's calling the fops. (Note: The
> drivers won't use RTL directly, and my intention is that modified
> drivers should work on standard kernels.)
>
> However, I think this is overkill for most applications. It's meant for
> making drivers capable of doing hard real time I/O when used by RTL
> apps...
>
> > This is only for the initial realtime recording of the stream, so the
> > data stream will grow upward linearly.
>
> So, what you need is instant response to "start recording"? If so, a
> sufficiently large circular buffer in mlock()ed memory should be enough,
> I think. Have a lower priority disk I/O task do the writing to disk.
>
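Roughly what that could look like (untested sketch only - the buffer size,
chunk size, file name and priorities are just made-up examples):

/* Untested sketch: capture fills an mlock()ed ring buffer, a lower
 * priority thread drains it to disk in large chunks.  RING_SIZE,
 * CHUNK and the file name are arbitrary.
 */
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define RING_SIZE (4 * 1024 * 1024)   /* a few seconds of audio */
#define CHUNK     (128 * 1024)        /* one disk write = one seek */

static char ring[RING_SIZE];
static volatile size_t head, tail;    /* capture advances head, disk thread advances tail */

static void *disk_writer(void *arg)
{
    FILE *f = arg;

    for (;;) {
        size_t avail = (head + RING_SIZE - tail) % RING_SIZE;

        if (avail >= CHUNK) {
            /* RING_SIZE is a multiple of CHUNK, so a chunk never wraps. */
            fwrite(ring + tail, 1, CHUNK, f);
            tail = (tail + CHUNK) % RING_SIZE;
        } else {
            usleep(10000);            /* nothing to write yet, back off */
        }
    }
    return NULL;
}

int main(void)
{
    FILE *f = fopen("take1.raw", "wb");
    pthread_t tid;

    if (!f)
        return 1;

    /* Keep the buffer out of swap; mlockall() would cover the whole process. */
    mlock(ring, sizeof(ring));

    /* The writer runs at normal priority; only the capture side needs an
     * elevated (e.g. SCHED_FIFO) priority, and it never touches the disk -
     * it just copies samples in and bumps 'head'. */
    pthread_create(&tid, NULL, disk_writer, f);

    /* ... real time capture loop copies samples in at 'head' ... */
    pause();
    return 0;
}

The only hard real time constraint is copying samples into the buffer; the
disk writes can be seconds late without dropping anything, and writing in
128 kB pieces keeps the seek rate down.
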
> I think I'm missing the problem here, really... Bandwidth?
>
> > Playback is another issue and will probably involve post-processing/sorting
> > the freshly recorded data stream into the respective tracks, etc...
> >
> > I'm not sure of the implications and viability of the idea.
> >
> > Writing to a raw partition probably will work without the complications
> > above. It seemed like a cool idea at the time I was thinking about it.
>
> Perhaps a new, "multimedia" file system would be a nicer way to do the
> accessing. However, using some existing file system and telling it to use
> huge blocks (say 128 kB or something) to keep seek overhead due to
> fragmentation down would be even nicer, if possible. I'm not very much
> into the inner workings of Linux file systems (yet), but I'll get there
> sooner or later... I have to solve the problem of writing tens of files
> at the same time while keeping seeks/second around 20 or lower. (Wasting
> transfer rate otherwise.)
>
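To put rough numbers on that: a 16 bit, 44.1 kHz mono track is about
88 kbytes/second, so with 128 kB blocks each open file costs roughly 0.7
seeks/second - around 30 such files before a 20 seeks/second budget is
used up. Halve the block size and you halve the number of files you can
sustain. (Back-of-the-envelope only, ignoring file system metadata
traffic.)
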
> > What do you think after more elaboration on the idea?
>
> I'm not sure... I can't see why you'd need anything more than mlock()
> and possibly a more efficient way of writing high bandwidth streams to
> disk. The disk I/O shouldn't involve any real time related problems at
> all, unless you simply need a faster drive or more efficient ways of
> writing to it. (Hint: If the drive is rattling a lot when your program
> is running, you have some fixing to do... A correctly configured data
> stream recorder of this kind just ticks a little every now and then.)
>
> > > Not-so-critical means? Usually, audio is a bigger problem than video
> > > when it comes to real time scheduling jitter. 60 Hz frame rate means 17
> >
> > What I'm doing is precomputed stuff. It's playback only, involving MP3 or
> > some other compression scheme that I've got the source to.
> >
> > This thing I'm writing is initially under the BeOS. It's designed to
> > interact with external analog devices like a joystick and possibly
> > other controllers.
> >
> > It's a pretty lightweight app overall.
>
> Ok, no extreme bandwidth and/or real time requirements then? This
> stressing-the-disk-to-death-seeking problem is the most common reason
> for performance problems in this kind of application (apart from the
> real time stuff for low latency processing), so you might start looking
> at how data is written. How large chunks at a time? How many "write
> points" to keep track of? (These two have a VERY interesting relation!)
> Might other tasks be accessing other partitions on the same drive? (Read:
> Always use a separate drive for recording, unless you run your entire
> system in RAM without swap.)
>
> I have a general question here: How does ext2fs handle multiple files
> being written to at once? How much buffering? How many files before
> seek overhead kills the total transfer rate? This is a VERY significant
> problem with most file systems, and usually has to be fixed by the
> applications. (Which of course makes running multiple recording apps in
> parallel useless, unless the extra buffering is done by a single
> daemon...)
>
> Comments, facts, figures?

It sounds like the problem is being structured in the wrong way. If you want a 32
channel mixer, why use 32 files? That's bound to cause major seek overheads,
whatever the file system. Why not have one file with 32 audio streams
interleaved inside it? If you want just one or two channels there is a lot of
hopping around, but the system load is light at that time. When you want all 32
channels at once, you read or write a stream with very few seeks. 34 interleaved
sections might be even better. That way you have room for a final stereo
mix-down too.
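
To put a sketch behind that (untested, and the 34 channels, 128 kB block size
and file name are only illustration): interleave fixed sized blocks, one per
channel, so all channels for the same stretch of time sit next to each other
in the file:

/* Rough sketch of a block interleaved layout: the blocks of all channels
 * covering the same time span are adjacent in one file.  CHANNELS,
 * BLOCK_SIZE and the file name are made up for illustration.
 */
#include <stdio.h>

#define CHANNELS   34            /* 32 tracks plus a stereo mix-down pair */
#define BLOCK_SIZE (128 * 1024)  /* one seek's worth of one channel */

/* Byte offset of block number 'blk' of channel 'ch' in the file. */
static long block_offset(long blk, int ch)
{
    return (blk * CHANNELS + ch) * (long)BLOCK_SIZE;
}

int main(void)
{
    static char buf[BLOCK_SIZE];
    FILE *f = fopen("session.raw", "rb");

    if (!f)
        return 1;

    /* Fetch the third block of channel 7: skip over the other channels. */
    fseek(f, block_offset(2, 7), SEEK_SET);
    fread(buf, 1, BLOCK_SIZE, f);
    fclose(f);
    return 0;
}

Reading or writing everything is then nearly sequential, while pulling out a
single channel costs about one seek per block of that channel - the "hopping
around" case, which is also the case where the load is light.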

Steve
