Re: real-time threaded IO with low latency (audio)

David Olofson (audiality@swipnet.se)
Sun, 25 Jul 1999 01:38:12 +0200


Oliver Xymoron wrote:
(...)
> Assuming you read down to the part about prefaulting the stack, then yes,
> it'll do what you want. But don't expect your process to have bounded
> performance when the rest of the system is loaded to the point of
> thrashing.

Well, what I'm looking for is a way to use RTLinux as a high-priority RT
scheduler that can switch to user context in order to execute code while
still being in RT context with respect to timing. In that case, Linux can
even be completely frozen! RT tasks would still run, just as with
RTLinux, provided EVERYTHING really is in locked memory.
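
Something like this is what I have in mind for the "everything locked"
part on the Linux side (just a sketch; error handling is minimal and the
stack reserve size is made up for the example, not what Audiality uses):

	#include <stdio.h>
	#include <sys/mman.h>

	#define RT_STACK_RESERVE (64 * 1024)  /* assumed worst-case stack use */

	static void prefault_stack(void)
	{
		volatile char dummy[RT_STACK_RESERVE];
		unsigned int i;

		/* Touch one byte per page so the whole range is faulted
		 * in (and stays in, thanks to MCL_FUTURE) before any RT
		 * code runs. */
		for (i = 0; i < sizeof(dummy); i += 4096)
			dummy[i] = 0;
	}

	int main(void)
	{
		/* Lock all current and future pages of the process in
		 * RAM (needs the privilege to lock that much memory). */
		if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
			perror("mlockall");
			return 1;
		}
		prefault_stack();

		/* ...set up buffers, start the RT engine, etc... */
		return 0;
	}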

> > > On an unloaded system, yes, you can be clever with mlock() and pre-fault
> > > everything before going into your critical code, but the people who are
> > > begging for RT performance for multimedia stuff don't understand that it
> > > means running with basically no load and giving up tons of memory and not
> > > touching the disk, etc..
> >
> > Unloaded system? Not good enough. How to do multitrack hard disk
> > recording then?
>
> Memory buffers. Not hard at all. The critical part is already handled in
> the sound drivers. The problem is when you want to mix stuff, filter, and
> send it back out in real-time.

Of course! Very simple indeed. And the drivers manage to do what
they need simply because the HARDWARE has enough buffering to cope with
the interrupt latencies.

However, that's not good enough for high-end audio processing and other
tasks that require BOTH deterministic timing AND lots of CPU time. The
more time an RT process needs to do its work, the less scheduling
jitter can be tolerated.

For the (hypothetical - not possible even on a dedicated DSP) extreme case

buffer_play_time == processing_time == buffering_time

no scheduling jitter at all can be tolerated.
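
To put some rough numbers on it (just an illustration, not Audiality's
actual figures): with a 64-frame buffer at 44.1 kHz the buffer plays for
about 1.45 ms, and if processing eats 80% of that, only about 0.29 ms of
scheduling jitter is left before the output underruns.

	#include <stdio.h>

	int main(void)
	{
		double rate = 44100.0;   /* sample rate (Hz), example value */
		double frames = 64.0;    /* buffer size (frames), example value */
		double cpu_load = 0.8;   /* fraction of the period spent processing */

		double period_ms = 1000.0 * frames / rate;
		double jitter_budget_ms = period_ms * (1.0 - cpu_load);

		printf("buffer period:    %.2f ms\n", period_ms);
		printf("tolerable jitter: %.2f ms\n", jitter_budget_ms);

		/* As cpu_load approaches 1.0 (the hypothetical extreme
		 * case above), the tolerable jitter goes to zero. */
		return 0;
	}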

That's why I decided to use RTL for Audiality. Currently, there's no
other way of achieving the performance I want without dedicated
hardware. Standard Linux interrupt handling can hardly stay within the
kind of latency I want even without doing any processing - let alone
while using the 80% or so of the CPU I'd like...

> While you may be able to do that with a
> couple channels already and get acceptable results, don't expect this to
> scale in application space on the current generation of machines.

Why not? What makes mlock()ed memory in Linux user space so very
different from kmalloc()ed memory once it's all set up and the tasks are
running? Maybe I missed something here... (?)
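
As far as I can see, once a user space buffer is locked and faulted in,
it should stay resident just like kernel memory for the duration of the
session. Something along these lines is what I mean (sketch; the buffer
size is made up):

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/mman.h>

	#define BUFFER_SIZE (256 * 1024)  /* example size only */

	int main(void)
	{
		char *buf = malloc(BUFFER_SIZE);

		if (!buf)
			return 1;

		/* mlock() faults the pages in and keeps them resident
		 * from here on, so the RT code should never take a page
		 * fault on this buffer. */
		if (mlock(buf, BUFFER_SIZE) < 0) {
			perror("mlock");
			return 1;
		}

		/* ...hand buf over to the RT processing code... */
		return 0;
	}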

//David
