Re: Scheduling Times --- Revisited

Richard Gooch (rgooch@atnf.csiro.au)
Tue, 29 Sep 1998 21:10:23 +1000


Larry McVoy writes:
> : > : application, but some of our RT applications have threads which run
> : > : for a very short time (read from blocking device, compute a new value
> : > : and write, place the recently read value into SHM, unlock a semaphore
> : > : and read again (block)).
> : >
> : > This is mistake #1 made by inexperienced coders in this area.
> :
> : So what do you suggest as an alternative? Scenario: 10 kHz interrupt
> : from device, new value needs to be written within 50 us. Device driver
> : reads data, wakes up RT process. RT process reads data from driver,
> : computes and writes new value, writes recent value to SHM and blocks
> : on read again.
> : SHM value is read at a much lower rate by low-priority threads. Some
> : of these can safely lag behind, so they don't even need RT priority.
> : Running on a 386DX33 where a switch takes 12.3 us (no extra processes
> : on the run queue) and each extra process on the run queue adds another
> : 7.4 us. Add interrupt latency, work to be done, syscall overheads and
> : interrupt disabling sections, and we're getting close to 50 us. Add
> : a few monitor threads (SCHED_OTHER), and we go past that.
>
> Modify the driver's interrupt routine to get the data, put it in a
> buffer, and wake up a user level process when the buffer gets full,
> meanwhile switching to a new buffer. I don't mean to be
> condescending, but this is really basic producer/consumer type event
> gathering. Haven't you ever done this before? I just assumed that
> anyone who has done kernel performance work has had to gather event
> data from the kernel - how else would you do it without completely
> disturbing other system activity?

No, no, I've described a different kind of problem. The RT process has
to write out new data to the hardware based on the data read in and
various state in the process. It's a feedback control loop. We can't
buffer up the read data. We'd end up driving the antenna into the
ground. Scratch a few million.

Do you see what I'm driving at? If it was just a soundcard I was
reading from (picking a trivial example), I'd agree with you.

> Here's an analogy for you: what you've been suggesting over the last
> week is quite similar to someone "discovering" that calling
> read(f,&c,1) in a tight loop doesn't let you read I/O at 50MB/sec.
> Rather than reading up on stdio or I/O buffering in general, this
> person gets all gung ho about "fixing" the read() system call. I'm
> just trying to tell this someone to go read about stdio - they'll be
> able to solve their problem without "fixing" the system.

Based on what you think the problem is, I'd agree. But I'm talking
about a different problem.

> You could, of course, make an argument that reducing the cost of
> read() is just a good thing, and I'd have to agree with that to a
> point. But you could "fix" read() all you wanted, and you would
> still never get performance as good as using stdio or some variant
> of stdio.

Definitely, buffering is good when you can do it. Feedback systems
just don't fit into that class of data-gathering applications, though.

> In your case, you are so fixated on this perceived problem that you
> simply can't back up and realize that there are better approaches.
> My advice is to think hard about the buffering scheme I suggested
> for the driver and get on with your life.

The buffering scheme simply can't work in a feedback control loop.

> You'll be able to handle your load on a 286, imagine!

Except you can't run Linux on it ;-)

Aside: some of our embedded applications require the power (cough) of
a 386. I think we're at 50% capacity under some loads. A 286 wouldn't
cut it.

Regards,

Richard....
