No, not computer dating. That's down the bus, second port on the right.
Aside from the chants that the Internet is the root of all moral decay, the most common computer-related news story these days is the Year 2000 problem. Much of what is said is, of course, hype. But there are genuine problems with computers and dates, at least with the way we've done things so far.
There are two main categories that these stories fall into:
Year 2000 problems (frequently abbreviated as Y2K) mostly have nothing specific to do with the year 2000. These are century problems. They're coming up with the year 2000 because that is the first century rollover of the computer era. Actually, some of these problems have already been seen, but in smaller numbers.
The basic root of the Y2K issue is that many dates stored in computerized records (or used within software) are not stored in their complete form, but with the year truncated to two digits. As such, there is nothing that can say whether "13" refers to 1913, 2013 or 1713. Context allows some leeway, but not a perfect solution.
Computer date roll-over is a problem less understood by the general public. It is also more difficult to make an issue of because, unlike the Y2K problem, the date when it hits is different for each system.
Most computer systems provide some form of numeric representation for dates or date/time combinations. These are useful because manipulating or comparing single numbers is faster than comparing text strings. The problem is that numbers within computers (as opposed to the mathematical context) have limited bounds. If a date is represented by a 32-bit number, then the dates which can be expressed are limited by the number of distinct values that number can hold.
For example, the standard Unix date/time value is the number of seconds elapsed since 1 January 1970. This is expressed in a type named time_t, which was initially a 32-bit signed value. (time_t is now defined to be at least 32 bits.)
For that scenario, you fill up the 31 available bits on 19 January 2038.
Other date standards have other roll-over points, some sooner, some later. (The date format in Java is only good for another few million years.)
The basic issue in all of these is that we need to stuff a ten pound date into a five pound bag. There are two scales on which this can be viewed -- one is good news and the other bad news.
The good news view is looking at the software we are currently using. For most systems, you can redefine the date to be larger, rebuild the software, and it will work fine. All of the users will have to buy upgrades to the new version, but that is a budgeting issue.
The bad news view is looking at the archived data. Sure, your new spreadsheet program will run fine, and record new dates with four-digit years, but what about the records you have kept from the old version?
This is an even bigger problem for large systems with huge accumulated databases which are actively used. "Fixing" these requires (at the least) these types of activities:
Each of these has problems, and the software development is probably the easiest. Some of the other issues:
So, yes, this is a real problem for many systems, particularly systems with long-term data collection. This is important for industries such as insurance, retirement planning (pensions, Social Security, etc.) and property ownership tracking.
As an example, the Macintosh OS does not have a Y2K problem -- the operating system expresses years as four-digit entities. Some people take that and claim that Mac users have nothing to fear. Well, it doesn't ensure that Excel or Quicken doesn't put the dates in a two-digit form before storing them. It just means that on 1 January 2000 when you look at the files, the file dates on that day's work will say 2000 instead of 1900. What the applications say the dates are is a separate matter.
Back to the shiny objects
Copyright 1997, Drew Lawson.
[Last updated: 29 September 1997]