On Fri, 7 Dec 2007, John J. Trammell wrote:

> - Software should be developed with an eye on its boundary conditions, e.g.
> "I am using a 16-bit int for this field, so we had better not have more than
> about 65,000 different planes in the air...".

I have personally come to the opinion that the main use of having more 
than one type of integer is to allow the programmer to pick the wrong one. 
Using only a single word, or a partial-word, to represent an integer is an 
*optimization*, not a data type consideration.  Having the programmer pick 
what word size to hold the integer in is as fraught with potential bugs 
and portability problems as having the programmer pick what register to 
hold the integer in.

This isn't just about the Y2038 bug, although that's one example.  It's 
about the whole C99 long long fiasco.  It's about the 32-bit limit that 
Java is starting to hit hard: Java uses ints to index arrays, so you can't 
have an array with more than 2^31 - 1 elements.  Etc.

Brian