RE: Minimum time slice for reliable Linux execution

From: Steven Rostedt
Date: Fri Apr 01 2011 - 09:32:13 EST


On Fri, 2011-04-01 at 13:39 +0100, limp wrote:
> Thank you guys for your responses,
>
> To be honest I haven't looked in detail at how RTAI and Xenomai do it but
> AFAIK, they don't give a fixed time slice to Linux either (i.e. they switch
> to Linux only when they have finished with their RT tasks).

Perhaps you should look at them in more detail; maybe they do more than
you expect. Honestly, I haven't looked into the details of what they do
either, so I cannot comment on how they work.

>
> A difference between their implementation and mine is that I don't acknowledge
> any Linux interrupt while the RT domain is executed so maybe, if Linux code
> is not smart enough to re-issue a lost interrupt, and if the RT domain takes most
> of CPU time starving Linux, this can cause Linux to crash at some point.

What exactly do you mean by not acknowledging Linux interrupts? If an
interrupt takes place while the RT domain is running, do you simply drop
it? Yes, that will break things. How would Linux know to reissue an
interrupt for an incoming network packet if it never knew the interrupt
happened?

If your microkernel stores off the interrupt and reissues it to Linux
when Linux gets a chance to run again, then everything would work.
That's pretty much what the virtualization code does.
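
For illustration, here is a minimal sketch of that store-and-replay
scheme. This is not actual RTAI/Xenomai/I-pipe code; every name in it
(irq_entry, pending_mask, rt_domain_active, hw_ack_irq(),
linux_do_IRQ()) is a hypothetical placeholder:

#include <stdint.h>

#define NR_IRQS 32

static volatile uint32_t pending_mask;   /* one bit per deferred IRQ   */
static volatile int rt_domain_active;    /* nonzero while RT tasks run */

extern void hw_ack_irq(unsigned int irq);    /* ack at the controller  */
extern void linux_do_IRQ(unsigned int irq);  /* normal Linux handling  */

/* Low-level entry point for every interrupt. */
void irq_entry(unsigned int irq)
{
	if (irq >= NR_IRQS)
		return;

	hw_ack_irq(irq);               /* always ack, never drop        */

	if (rt_domain_active)
		pending_mask |= 1u << irq;  /* remember it for later    */
	else
		linux_do_IRQ(irq);     /* Linux is current: deliver now */
}

/* Called when the RT domain hands the CPU back to Linux. */
void replay_pending_irqs(void)
{
	uint32_t mask = pending_mask;

	pending_mask = 0;
	while (mask) {
		unsigned int irq = (unsigned int)__builtin_ctz(mask);

		mask &= mask - 1;      /* clear lowest set bit          */
		linux_do_IRQ(irq);     /* reissue the stored interrupt  */
	}
}

The point is that the hardware is always acked so the line can fire
again, while the event itself is queued rather than lost.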

>
> The idea of not acknowledging Linux interrupts in the RT domain is that I
> don't want to add *random* overhead into RT task execution.

Or do you simply mask the interrupts that the RT domain does not care
about while the RT domain runs? That should work: when you unmask them
they should trigger, and then you can pass them to the Linux irq
handlers.
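
As a sketch of that masking variant (again with hypothetical
hw_mask_irq()/hw_unmask_irq() helpers, and assuming the interrupt
controller latches a masked line so that it fires on unmask):

#define NR_IRQS 32                     /* as in the sketch above       */

extern void hw_mask_irq(unsigned int irq);
extern void hw_unmask_irq(unsigned int irq);

static const uint32_t rt_owned_irqs = 1u << 5;  /* e.g. RT owns IRQ 5  */

void rt_domain_enter(void)
{
	for (unsigned int irq = 0; irq < NR_IRQS; irq++)
		if (!(rt_owned_irqs & (1u << irq)))
			hw_mask_irq(irq);  /* Linux IRQs held off,      */
}                                          /* not dropped               */

void rt_domain_exit(void)
{
	for (unsigned int irq = 0; irq < NR_IRQS; irq++)
		if (!(rt_owned_irqs & (1u << irq)))
			hw_unmask_irq(irq); /* latched lines fire now,  */
}                                           /* Linux handlers run       */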

-- Steve

