Re: CLOCK_MONOTONIC datagram timestamps by the kernel

From: John
Date: Mon Feb 26 2007 - 09:17:19 EST


Andi Kleen wrote:

> John wrote:

>> My app expects a stream of UDP datagrams containing compressed video.
>> These datagrams have been time stamped by the source using a high
>> resolution clock.

> Why do you need another time stamp then?

Source and receiver do not share a common clock. Thus, I have to monitor clock skew. If I simply use the time stamps produced by the source, my buffer will eventually underflow or overflow.

As far as I have seen, clock skew is on the order of ±10^-5, i.e. when the source thinks 100000 seconds have elapsed, the receiver thinks 100001 seconds have elapsed.
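
To put a number on what that skew does to a playout buffer (the one-hour window below is just an illustrative figure of mine, not something measured here):

#include <stdio.h>

/* Back-of-the-envelope drift estimate: a 1e-5 skew moves the playout
 * point by about 36 ms per hour, so a fixed-size buffer must eventually
 * underflow or overflow unless the skew is measured and compensated. */
int main(void)
{
        double skew = 1e-5;       /* receiver runs 10 ppm fast or slow */
        double seconds = 3600.0;  /* one hour of playout */

        printf("drift after one hour: %.1f ms\n", skew * seconds * 1000.0);
        return 0;
}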

>> The video stream is played out in real-time. I buffer several packets,
>> then repeatedly arm a one-shot timer (with TIMER_ABSTIME set) to wait
>> until it's time to send the oldest packet.

>> AFAIU, if I use the CLOCK_REALTIME clock in my app and the sysadmin changes the date, all hell will break loose.

> Only if your app cannot handle time going backwards.

My app cannot handle time going backwards.

The algorithm looks like:

/* Start transmission, record start date */
NEXT_DEADLINE = now;

while ( 1 )
{
        send(oldest_packet);
        recv_available_packets();

        /* compute next deadline */
        NEXT_DEADLINE += time_to_wait_until_next_packet();
        sleep_until(NEXT_DEADLINE);
}

If the clock changes under me, I'm toast.
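
For concreteness, here is a fuller sketch of that loop against CLOCK_MONOTONIC. It is only a sketch: send_oldest_packet(), recv_available_packets() and next_packet_interval_ns() are placeholders for my application code.

#include <time.h>

extern void send_oldest_packet(void);
extern void recv_available_packets(void);
extern long long next_packet_interval_ns(void);

/* Advance an absolute deadline by ns nanoseconds, keeping tv_nsec normalized. */
static void deadline_add_ns(struct timespec *ts, long long ns)
{
        ts->tv_sec  += ns / 1000000000LL;
        ts->tv_nsec += ns % 1000000000LL;
        if (ts->tv_nsec >= 1000000000L) {
                ts->tv_nsec -= 1000000000L;
                ts->tv_sec  += 1;
        }
}

void playout_loop(void)
{
        struct timespec next_deadline;

        /* Start transmission, record the start date on the monotonic clock */
        clock_gettime(CLOCK_MONOTONIC, &next_deadline);

        for (;;) {
                send_oldest_packet();
                recv_available_packets();

                /* Compute the next absolute deadline, then sleep until it.
                 * Late wakeups do not accumulate error because the deadline
                 * is absolute, and a date change cannot touch CLOCK_MONOTONIC. */
                deadline_add_ns(&next_deadline, next_packet_interval_ns());
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                                &next_deadline, NULL);
        }
}

(On the glibc here this wants -lrt, and I ignore EINTR for brevity; the point is just that an absolute deadline on CLOCK_MONOTONIC keeps the schedule immune to date changes.)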

>> I suppose that if I change sock_get_timestamp() in core/sock.c to call __get_realtime_clock_ts() instead of do_gettimeofday(), I'll break 50 billion applications (ping? traceroute?) that expect a struct timeval?

> Not that many, but some probably.

>> Another option would be to change sock_get_timestamp() to call ktime_get_ts() instead of do_gettimeofday() and convert ns to us.

>> I.e. I still return a struct timeval, but one based on CLOCK_MONOTONIC.

> Letting the kernel do the time stamping is usually quite useless anyway. You can just as well do it in your application when you receive it.
> After all, you're interested in when you can read the packet in your app, not when the driver processes it.

I use clock_nanosleep() to sleep until it's time to send the next packet. I check for packets when I wake up. I need the kernel to time stamp the packets because I was sleeping when they reached the system.
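
For reference, this is roughly how the kernel's receive stamp can be picked up from userspace once I do wake up, using the SO_TIMESTAMP socket option and the SCM_TIMESTAMP control message (the alternative is the SIOCGSTAMP ioctl, which as far as I can tell is what sock_get_timestamp() serves). recv_with_stamp() is just a name I made up for the sketch.

#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Receive one datagram and copy out the kernel's receive timestamp.
 * The socket must have been prepared with:
 *         int on = 1;
 *         setsockopt(fd, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));
 */
ssize_t recv_with_stamp(int fd, void *buf, size_t len, struct timeval *stamp)
{
        char ctrl[CMSG_SPACE(sizeof(struct timeval))];
        struct iovec iov = { .iov_base = buf, .iov_len = len };
        struct msghdr msg = {
                .msg_iov        = &iov,
                .msg_iovlen     = 1,
                .msg_control    = ctrl,
                .msg_controllen = sizeof(ctrl),
        };
        struct cmsghdr *cm;
        ssize_t n = recvmsg(fd, &msg, MSG_DONTWAIT);

        if (n < 0)
                return n;

        for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm))
                if (cm->cmsg_level == SOL_SOCKET && cm->cmsg_type == SCM_TIMESTAMP)
                        memcpy(stamp, CMSG_DATA(cm), sizeof(*stamp));

        return n;
}

(Of course, today that stamp is taken from the wall clock, which is exactly the part I would like to change.)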

I try to do it this way because I read this:

http://rt.wiki.kernel.org/index.php/High_resolution_timers

"Note that (clock_)nanosleep functions do not suffer from this problem as the wakeup function at timer expiry is executed in the context of the high resolution timer interrupt. If an application is not using asynchronous signal handlers, it is recommended to use the clock_nanosleep() function with the TIMER_ABSTIME flag set instead of waiting for the periodic timer signal delivery. The application has to maintain the absolute expiry time for the next interval itself, but this is a lightweight operation of adding and normalizing two struct timespec values. The benefit is significantly lower maximum latencies (~50us) and less OS overhead in general."

Regards.
