> -----Original Message-----
> From: tclug-list-bounces at mn-linux.org
> [mailto:tclug-list-bounces at mn-linux.org]On Behalf Of Mike Miller
> Sent: Thursday, December 06, 2007 12:24 PM
>
> Thanks to both Florin and Elvedin for looking this up.  Very interesting.
> I had no idea that this was how we would be dealing with the 2038 problem
> on some of these machines - by replacing them with 64-bit
> machines.

I think this is a seriously wrong solution.  Anyone concerned with the real
world of embedded machines finds the 32-bit architecture adequate for data
representation, qualitatively more reliable (fewer things to go wrong), lower
in cost, and much lower in power.  In the great majority of stored and
processed words, the integer and double-precision data leave 32 bits per
memory location unused.  That extra space is an opportunity for error and
power consumption that does nothing for the main, critical application of
such systems and networks.  For Linux folk to make a decision that limits the
use of Linux on 32-bit architectures for critical embedded applications seems
mighty dumb to me.  Not all Linux hosts are gaming machines, where it simply
does not matter and 64 bits makes for a better game.  To me, this indicates
profound ignorance and/or obliviousness on the part of those programmers.
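
To put a number on the footprint point: under the LP64 model used by 64-bit
Linux, longs and pointers double from 4 to 8 bytes while most of the values
stored in them still fit in 32 bits.  A minimal C sketch (assuming a typical
gcc on x86/x86-64; build the same file with -m32 and -m64 to compare):

    #include <stdio.h>

    int main(void)
    {
        /* Under ILP32, int, long, and void * are all 4 bytes and double
         * is 8.  Under LP64, long and void * grow to 8 bytes, so
         * long-heavy and pointer-heavy structures roughly double in size
         * even when the values they hold would still fit in 32 bits.   */
        printf("sizeof(int)    = %zu\n", sizeof(int));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(double) = %zu\n", sizeof(double));
        return 0;
    }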

> It is really a software problem, but I guess it's much more easily
> resolved in a 64-bit architecture, so programmers are crossing their
> fingers and hoping all the 32-bit machines will be gone before 2038 gets
> here!  I won't be surprised if air traffic controllers are, in 2038,
> still using machines that they bought in 1997.
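
Agreed that it is really a software problem: the 2038 limit is the width of
time_t, not the width of the CPU's registers.  A 32-bit machine can count
seconds in a 64-bit integer just fine if the OS and C library define time_t
that wide.  A minimal sketch of the failure (assuming the usual convention of
time_t as a signed count of seconds since 1970; the dates below follow from
that assumption):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void)
    {
        /* Largest second count a signed 32-bit time_t can represent.  */
        int64_t limit = INT32_MAX;                    /* 2147483647    */
        time_t  t     = (time_t)limit;

        /* With any time_t of at least 32 bits this prints
         * "Tue Jan 19 03:14:07 2038", the last second a 32-bit
         * counter can hold.                                            */
        printf("sizeof(time_t) here: %zu bytes\n", sizeof(time_t));
        printf("last 32-bit second (UTC): %s", asctime(gmtime(&t)));

        /* One second later, a 32-bit counter wraps (shown here via an
         * implementation-defined conversion) to the most negative
         * value, i.e. back to 13 December 1901.                        */
        int32_t wrapped = (int32_t)(limit + 1);
        printf("wrapped 32-bit value: %d\n", wrapped);
        return 0;
    }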

There's MUCH more reason to use 32-bit architectures for the majority of
embedded applications, and air traffic controllers really need thoroughly
established compatibility with their data acquisition hardware's software
(and networking), as well as with their own legacy software tools.

Buggy upgrades and "neat new technology" have no place in that world.

Would you bet your life on the "newer stuff" being entirely bug free?  Would
you scrap years of testing and performance history, and pay megabucks for new
"qualification tests", for no more benefit than "more bits and less
reliability"?

Seems like we should be far less trusting of the "expertise" of the "gurus"
making such decisions.  The 64-bit answer seems practical for now, but it
needs both a statement of its limitations and a workaround for critical or
longer-term applications.  Corner cutting like this should not be buried and
left unflagged.  Does SELinux permit such fragile algorithms?


Chuck