Re: time_t size: The year 2038 bug?

From: Johan Kullstam (kullstam@ne.mediaone.net)
Date: Wed Jan 05 2000 - 17:46:31 EST


"Richard B. Johnson" <root@chaos.analogic.com> writes:

> On Wed, 5 Jan 2000 doug@springer.net wrote:
>
> > I have tentatively searched the linux kernel archives for this subject,
> > but haven't found anything. I have heard that there is talk about
> > moving this to 64 bits. Has anyone considered this (stupid
> > question) and who else is interested in this subject. It seems to be
> > a kernel and a glibc issue. Since I am not currently subscribed to
> > the list, please cc doug@springer.net for any replies. When time
> > allows, I intend to experiment with the 64 bit thing and see what
> > breaks.
> > -Doug Springer
> >
>
> If everything that references time_t-sized variables (in all applications
> and the OS) is changed, and the applications recompiled, of course
> nothing will break. However, the problem remains with applications that
> think struct timeval { long tv_sec; long tv_usec; } is how it works.
> These applications will have to be fixed. Since ix86 machines are not
> 64-bit, summation of a 64-bit quantity (long long) is a bit of a
> problem. I have heard that the current 'C' compilers don't do a very
> good job of it.
>
> Even incrementing a 64-bit timer tick is a problem, because on ix86 an
> increment that wraps the low word past zero doesn't set the carry flag.
> You need to do this:
>
> addl $1, low_word
> adcl $0, high_word
>
> I think that before the 38 years are up, none of us will be using 32-bit
> machines, so even if I were 10 years old I wouldn't bother 'fixing'
> 32-bit machines. Even Intel's new stuff is 64 bits.

yes, even a lowly Z80 can handle 64-bit quantities.
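
for what it's worth, gcc on ix86 will generate the add/add-with-carry
pair by itself if you just use a 64-bit type; a quick untested sketch
(assumes a compiler with long long, which gcc has; the counter name is
made up):

        /* a 64-bit tick counter on a 32-bit machine: the compiler emits
         * the addl/adcl pair itself, no hand-written assembly needed */
        static unsigned long long ticks64;

        void tick(void)
        {
                ticks64++;      /* low word: add; high word: add w/ carry */
        }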

however, the real problem is all the file formats (like tar) and
fixed-layout structures which will break. as with y2k, the problem
wasn't in the cpu per se, but in the data storage.

> Adding stuff to do 'long-long' operations on a 'long' machine will
> just add wasted CPU cycles. I think you should leave it alone.

i wish C would get a decent set of integer types. the current set was
once enough, but that's no longer the case.

it's silly that int is 32 bits on the alpha, since int is supposed to
be whatever comes naturally to your cpu, but how else would you cover
the range of 8, 16, 32 and 64 bits?

if you try to make int be 64 bits, then long and long long must
also be 64 or more. that leaves char and short to cover the
important 8-, 16- and 32-bit cases, and you come up short (no pun
intended).
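
laid out as a table, the usual choices look something like this
(assuming an 8-bit char; the model names are the common jargon):

        model   char  short  int  long  long long
        ILP32      8     16   32    32         64   (linux/ix86)
        LP64       8     16   32    64         64   (linux/alpha)
        ILP64      8     16   64    64         64   (no 32-bit type left)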

i'd also like to see C types with *specified* bit widths, e.g.,
int16, int32, uint8 &c. then you could write more portable code when
you really need a certain number of bits, like for CRC algorithms. in
the case of 36-bit computing on a pdp-10 the compiler could at least
complain about the lack of 32-bit support (even if it would be easy to
add) instead of silently failing.
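
as it happens, the brand-new C9X standard adds pretty much exactly this
in <stdint.h>: int16_t, uint32_t, uint8_t and friends, with the rule
that the exact-width names are only defined when the machine really has
such a type. a rough sketch of the CRC case, assuming a compiler with
that header (the function name is mine; the polynomial is the usual
reflected CRC-32 one):

        #include <stddef.h>
        #include <stdint.h>

        /* bitwise CRC-32 (reflected, polynomial 0xEDB88320) -- needs an
         * exact 32-bit unsigned type, which uint32_t guarantees; on a
         * 36-bit pdp-10 <stdint.h> simply wouldn't define uint32_t and
         * this would fail to compile instead of silently misbehaving */
        uint32_t crc32(const uint8_t *p, size_t n)
        {
                uint32_t crc = 0xFFFFFFFFu;

                while (n--) {
                        crc ^= *p++;
                        for (int i = 0; i < 8; i++)
                                crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u
                                                : (crc >> 1);
                }
                return crc ^ 0xFFFFFFFFu;
        }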

-- 
J o h a n  K u l l s t a m
[kullstam@ne.mediaone.net]
Don't Fear the Penguin!
