2024 / 07 / 14 - Time and Numbers

The Kevux projects are predominantly specification driven programming, also called standards driven programming. Most of the standards are focused on logic and a specific goal. Some of the standards, however, are focused on math and its application. Today, I have decided to discuss some of the mathematically based, or number based, standards. I planned on a more extensive article, but the events of yesterday, July 13, 2024, are historic and troubling for the United States of America. I have found that rather distracting and I am opting for a shorter article.

Time, a Unit

The UNIX Timestamp is the predominant time based standard on computers today. This time measurement system is based on the number of seconds that have passed since January 1, 1970 at 00:00:00 UTC. This date is also known as the UNIX Epoch. This works reasonably well because the year can simply be calculated. The existence of leap years, leap seconds, and other alterations complicates the use of this measurement of time. The calculations against this number for the past, present, and future are relatively easy.
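
For illustration, here is a small C program that reads the current UNIX Timestamp and converts it into a calendar date using only the standard C library.

  #include <stdio.h>
  #include <time.h>

  int main(void) {
    // Get the number of seconds since the UNIX Epoch (1970-01-01 00:00:00 UTC).
    time_t now = time(NULL);

    // Convert the raw seconds into a calendar date in UTC.
    struct tm *calendar = gmtime(&now);

    printf("Seconds since the Epoch: %lld\n", (long long) now);
    printf("Calendar date (UTC): %04d-%02d-%02d %02d:%02d:%02d\n",
      calendar->tm_year + 1900, calendar->tm_mon + 1, calendar->tm_mday,
      calendar->tm_hour, calendar->tm_min, calendar->tm_sec);

    return 0;
  }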

The original UNIX Timestamp is a signed 32-bit integer. Mathematically speaking, this can only represent about 68 years forward from 1970 (and about 68 years back). This limitation is known as the year 2038 bug. Increasing the integer from 32-bit to 64-bit avoids this particular problem by pushing the maximum representable year so far into the future that it is no longer a practical concern.
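
To make the limit concrete, here is a small C example that prints the last moment a signed 32-bit timestamp can represent and shows what happens one second later. It assumes a modern system where time_t is 64-bit.

  #include <stdint.h>
  #include <stdio.h>
  #include <time.h>

  int main(void) {
    // The largest number of seconds a signed 32-bit timestamp can hold.
    time_t limit = (time_t) INT32_MAX;

    // On a system with a 64-bit time_t, this is 2038-01-19 03:14:07 UTC,
    // the last moment the original 32-bit UNIX Timestamp can represent.
    struct tm *calendar = gmtime(&limit);

    printf("32-bit limit: %04d-%02d-%02d %02d:%02d:%02d UTC\n",
      calendar->tm_year + 1900, calendar->tm_mon + 1, calendar->tm_mday,
      calendar->tm_hour, calendar->tm_min, calendar->tm_sec);

    // One second past the limit does not fit in 32 bits. Converting it back
    // is implementation-defined in C and commonly wraps to a large negative
    // value, which is the heart of the year 2038 bug.
    int32_t wrapped = (int32_t) ((int64_t) INT32_MAX + 1);
    printf("One second later, stored in 32 bits: %d\n", (int) wrapped);

    return 0;
  }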

I see the UNIX Timestamp as a good idea surrounded by some minor, but critical, problems. The problems that concern me fall into two parts:

  1. A separation of concerns problem regarding the byte size.
  2. The direct association, or hard-coding, of a year into the number of seconds.

The UNIX Timestamp standard was written during a time when hardware constraints were a critical concern. Forcing the 32-bit signed behavior back then made sense. Today, one should instead focus on user interaction and dynamic range support. If a particular machine or software program does not support, say, a 128-bit timestamp, then it can simply communicate that the number is too large. Any timestamp value that would fit in a 64-bit or 32-bit integer would still be supported in a 128-bit timestamp.

The biggest problem is not the bit length of the integer. The biggest problem is that the year, 1970, is fixed. Make the year a separate number. This number, like the seconds value, can also be of an unfixed size. Twelve bits can safely represent a complete number for the year 2024 and for a good number of years beyond that. With a system that is unbiased in the bit length, this number could also be 16-bit, 32-bit, or anything else. An unfixed year has the advantage of allowing a unit of time to be used for theoretically any point in time. A computer system could be programmed for the calendar year of, say, planet Pluto. A computer system could be programmed for a calendar year, say, thousands of years into the B.C. era. This system could then be used to compare against the current clock system without requiring a new standard.
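
A rough sketch of the idea in C, with hypothetical type and field names of my own choosing, might look like this. The field widths here are arbitrary; the point is that neither number needs to be a fixed size.

  #include <stdint.h>
  #include <stdio.h>

  // Hypothetical illustration only; the names and field widths are my own.
  // The year is stored on its own rather than being baked into the seconds.
  struct separate_time {
    uint64_t year;    // The calendar year itself, not an offset from 1970.
    uint64_t seconds; // Seconds elapsed since the start of that year.
  };

  int main(void) {
    // Any point in time can be described without changing the standard,
    // only the width of the integers used to store it.
    struct separate_time moment = { .year = 2024, .seconds = 12345 };

    printf("Year %llu, second %llu of that year.\n",
      (unsigned long long) moment.year, (unsigned long long) moment.seconds);

    return 0;
  }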

The UNIX Timestamp Alternatives: Time and EpochTime

I wrote specifications for two new units of time, called Time and EpochTime.

This standard is largely based on the UNIX Timestamp, but instead of using signed integers, it uses unsigned integers. A year can be specified, so the use of a negative is rather pointless. This standard does mention 64-bit to make it more directly compatible with the current UNIX Timestamp ranges. The standard also asserts that it allows for larger bit sizes as needed. It does not, however, specifically declare how hardware must implement this. For example, a Time of 2000:-1, one second before the start of the year 2000, could instead be represented as 1999:31535999. This could also be stored as a string literal.
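
To make that example concrete, here is a small C sketch of how the borrow could be performed. The function name is hypothetical, the year length is simplified to 365 × 86,400 = 31,536,000 seconds, and leap years and leap seconds are ignored; the specification itself does not dictate this implementation.

  #include <stdint.h>
  #include <stdio.h>

  #define SECONDS_PER_COMMON_YEAR 31536000ULL // 365 days, ignoring leap seconds.

  // Hypothetical sketch: normalize a (year, signed offset) pair into the
  // unsigned (year, seconds) form by borrowing from the previous year.
  static void normalize(uint64_t year, int64_t offset,
    uint64_t *year_out, uint64_t *seconds_out) {

    while (offset < 0) {
      --year;
      offset += SECONDS_PER_COMMON_YEAR; // A real implementation would use the actual length of that year.
    }

    *year_out = year;
    *seconds_out = (uint64_t) offset;
  }

  int main(void) {
    uint64_t year = 0;
    uint64_t seconds = 0;

    // One second before the start of the year 2000.
    normalize(2000, -1, &year, &seconds);

    // Prints 1999:31535999.
    printf("%llu:%llu\n", (unsigned long long) year, (unsigned long long) seconds);

    return 0;
  }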

A system hardware clock could utilize a rotating clock to support this unit of time called Time. After 32 days into the new year, the year part of the hardware clock could be incremented by 1 and 32 days' worth of seconds set in the seconds part. The 32 day wait is done to avoid any potential problems with a leap year, a leap second, or any such complications.
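
This is only one possible reading of that rotation. The sketch below assumes the seconds counter simply keeps running past the end of the year and is normalized once it is at least 32 days into the new year; the year length is simplified to 365 days and the names are my own.

  #include <stdint.h>
  #include <stdio.h>

  #define SECONDS_PER_COMMON_YEAR 31536000ULL        // 365 days; a real clock would use the actual year length.
  #define ROLLOVER_DELAY          (32ULL * 86400ULL) // Wait 32 days before rolling the year over.

  // Hypothetical sketch of the delayed rollover: the seconds counter runs past
  // the end of the year, and only once it is at least 32 days into the new
  // year is the year incremented and the counter reduced by one year's worth
  // of seconds, sidestepping the exact leap year or leap second boundary.
  static void tick(uint64_t *year, uint64_t *seconds) {
    ++*seconds;

    if (*seconds >= SECONDS_PER_COMMON_YEAR + ROLLOVER_DELAY) {
      *seconds -= SECONDS_PER_COMMON_YEAR;
      ++*year;
    }
  }

  int main(void) {
    uint64_t year = 1999;
    uint64_t seconds = 0;

    // Tick through 1999 and 32 days beyond; the year rolls over only after the delay.
    for (uint64_t i = 0; i < SECONDS_PER_COMMON_YEAR + ROLLOVER_DELAY; ++i) {
      tick(&year, &seconds);
    }

    // Prints 2000:2764800, that is, 32 days' worth of seconds into the year 2000.
    printf("%llu:%llu\n", (unsigned long long) year, (unsigned long long) seconds);

    return 0;
  }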

One downside of this approach is that two comparisons may be needed when comparing two dates. This may also complicate the database structure, such as requiring two columns or splitting the value. This can be solved through organization: filter first by the year and then compare the seconds. On the flipside, this downside could be turned into an advantage by using the year to optimize date lookups.
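
As a sketch, the two comparisons amount to checking the year first and only falling back to the seconds when the years match. The function name and representation here are my own, not part of the specification.

  #include <stdint.h>
  #include <stdio.h>

  // Hypothetical sketch: compare two (year, seconds) values, checking the
  // year first and only comparing the seconds when the years are equal.
  static int compare(uint64_t year_a, uint64_t seconds_a, uint64_t year_b, uint64_t seconds_b) {
    if (year_a != year_b) return (year_a < year_b) ? -1 : 1;
    if (seconds_a != seconds_b) return (seconds_a < seconds_b) ? -1 : 1;
    return 0;
  }

  int main(void) {
    // 1999:31535999 is one second before 2000:0, so this prints -1.
    printf("%d\n", compare(1999, 31535999, 2000, 0));
    return 0;
  }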

Kevin Day