Re: uniform input device packets?

Vojtech Pavlik (vojtech-lists@twilight.ucw.cz)
Wed, 24 Jun 1998 11:23:31 +0200


On Wed, Jun 24, 1998 at 01:59:54AM -0400, Allanah Myles wrote:

> > The timestamp - it is needed, so that an application knows in what
> > order events happened, and how far apart in time they are. Imagine
> > detecting a double click on a heavily loaded, swapping machine. It
> > might be impossible without timestamps.

> Is this necessarily true?

Yes, it is.

> Your specific example of "system under load"
> really doesn't necessitate timestamping of events. You're supposing
> that when a system is loaded, it will always immediately process its
> I/O events and preempt whatever is currently executing (probably
> whatever is causing the load at the moment).

Since the input device drivers would be in the kernel, and
since they would either be interrupt-triggered (mice,
keyboards, ...) or polled from the timer interrupt
(joysticks, ...), the events would be recorded by the
driver with minimal (interrupt) latency.

It would timestamp them and add them to a queue.
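Roughly like this - a minimal sketch only, all struct, field and
function names below are made up for illustration, nothing that
exists yet:

#include <linux/sched.h>	/* for jiffies */

struct input_event {
	unsigned long time;	/* timestamp taken when the interrupt fired */
	unsigned short type;	/* key, button, relative motion, ... */
	unsigned short code;	/* which key / button / axis */
	int value;		/* pressed / released, movement delta, ... */
};

/* Would be called from the device's interrupt handler (or from the
 * timer-interrupt poll for joysticks): stamp the event right away,
 * then put it on the device's queue for later delivery. */
void report_event(struct input_queue *q,
		  unsigned short type, unsigned short code, int value)
{
	struct input_event ev;

	ev.time  = jiffies;	/* kernel tick counter at interrupt time */
	ev.type  = type;
	ev.code  = code;
	ev.value = value;

	queue_put(q, &ev);	/* per-device queue, sketched further below */
}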

> You may very well be
> correct, but I have a funny feeling that the way things currently
> work is that the I/O queues up, and is drained at next chance.

Yes, but this queue would already contain events, with
timestamps.

> In
> which case, two single-clicks will arrive one after another and
> appear to be a double-click - I haven't verified this but I'm just
> guessing.

In this case, two single-clicks may arrive at the
application in the same moment, but by looking at the
timestamps the application will know that they actually
happened some time apart and can treat them correctly.
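In userspace that check is trivial - a sketch, with the
threshold and names being assumptions of mine:

#define DOUBLE_CLICK_TICKS 25	/* e.g. 250 ms at HZ=100 - an assumption */

/* Even if two click events are read from the queue in the same batch,
 * comparing their timestamps tells single clicks from a double click. */
int is_double_click(unsigned long prev_time, unsigned long this_time)
{
	return this_time - prev_time <= DOUBLE_CLICK_TICKS;
}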

> If this *is* the case, then timestamping events in this new protocol
> will be unnecessary. Also - if the system *did* actually immediately
> process all input from devices, then a device generating spurious
> output could starve the rest of the system of cycles. This sounds
> like a serious problem, which is why I'm guessing the behavior is
> as I predicted.

Events from a device generating more output than
applications can consume would be dropped once its queue
becomes full. (There would be one queue per device.)
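The queue could be a simple ring buffer - again just a sketch, the
size and names are made up: when the buffer is full, new events are
dropped rather than blocking the interrupt handler or growing
without bound.

#define QUEUE_SIZE 64

struct input_queue {
	struct input_event buf[QUEUE_SIZE];
	unsigned int head;	/* next slot to write */
	unsigned int tail;	/* next slot to read; head == tail means empty */
};

void queue_put(struct input_queue *q, struct input_event *ev)
{
	unsigned int next = (q->head + 1) % QUEUE_SIZE;

	if (next == q->tail)	/* full: drop the event */
		return;

	q->buf[q->head] = *ev;
	q->head = next;
}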

You're right that processing these events might eat up CPU
cycles, but as long as the device generates interrupts, they
have to be handled anyway.

Vojtech

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu