Talk:Year 2038 problem/Archive 2

zip?!

I thought zip used DOS dates/times, and PKWARE's spec seems to say the same: "The date and time are encoded in standard MS-DOS format." I don't remember the details of this format offhand, but I don't think its rollover comes in 2038 —Preceding unsigned comment added by Plugwash (talkcontribs)

MS-DOS format is a bit encoding that simply packs the bits holding the year-since-1980, month, day, hour, minute, and second (in two-second resolution) into 2×16 bits. AFAIK, its rollover date is 2108-01-01.
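A hedged sketch of that packing (field layout as commonly described for the PKWARE APPNOTE; the helper name is my own):

 #include <stdint.h>
 #include <stdio.h>
 
 /* Pack a calendar date/time into the two 16-bit MS-DOS words.
    The year field is 7 bits (0-127, offset from 1980), so the format
    tops out at the end of 2107 and rolls over in 2108. */
 static void dos_pack(int year, int mon, int day, int hh, int mm, int ss,
                      uint16_t *dos_date, uint16_t *dos_time)
 {
     *dos_date = (uint16_t)(((year - 1980) << 9) | (mon << 5) | day);
     *dos_time = (uint16_t)((hh << 11) | (mm << 5) | (ss / 2));
 }
 
 int main(void)
 {
     uint16_t d, t;
     dos_pack(2038, 1, 19, 3, 14, 7, &d, &t);
     printf("date=0x%04x time=0x%04x\n", d, t);   /* no trouble in 2038 */
     return 0;
 }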
I've removed the bit about Zip. In fact: The use of 32-bit time_t has also been encoded into file formats, which means it can live on for a long time beyond the life of the machines involved. Are there many examples of this? It appears to me that file formats change much more rapidly than operating systems; isn't this sentence just stating the obvious? squell 20:36, 9 February 2007 (UTC)

Not true by stephen hawkin

No, in fact you've got it backwards. Data and data formats tend to live much longer than OSes. As to examples, how about Tar (file format) - it's been around for over 30 years, and was standardized almost 20 years ago (in POSIX). The file's last-modification time is stored as a 12-digit octal number in ASCII, so it's a 36-bit number, not 32, but it's still a 1970-01-01 00:00:00 epoch. RossPatterson (talk) 04:15, 18 February 2008 (UTC)
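A minimal sketch of reading such a field, assuming a ustar-style header (the example value here is made up):

 #include <stdio.h>
 #include <stdlib.h>
 
 int main(void)
 {
     /* The ustar mtime field: 12 bytes of octal ASCII giving seconds
        since 1970-01-01 00:00:00 UTC. */
     const char mtime_field[] = "05676751766 ";
     long long mtime = strtoll(mtime_field, NULL, 8);
     printf("mtime = %lld seconds since the epoch\n", mtime);
     return 0;
 }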

Some file formats (and filesystem formats, which can suffer from similar problems, since at the end of the day both are just ways of structuring information within a big block of storage) stick around for ages. Some are in wide general-purpose use (zip, PNG, GIF, JPEG, MP3, FAT16, ...), others only in more specialist uses or even only within one company. It looks like ext2, at least in its original form, stores a fixed-size 32-bit timestamp (http://www.nongnu.org/ext2-doc/ext2.html#I-ATIME). Plugwash (talk) 20:44, 25 January 2008 (UTC)

OK, strictly speaking, in ext2 the fields are defined as u32, but I don't know if 64-bit Linux versions consistently treat them as such. Plugwash (talk) 20:51, 25 January 2008 (UTC)

I think the question remains: while it's entirely plausible, can somebody provide an example of a file format which encodes a 32-bit time_t?

  • Many filesystems do, but filesystems are almost completely transparent to the user, and that doesn't sound like what the claim was.
  • GZIP (mentioned above) does, but "MTIME = 0 means no time stamp is available", so there's no harm in omitting it
  • PNG uses 7 fields for the timestamp, including 2 bytes for the year, so it's fine
  • GIF and JPEG do not appear to store timestamps
  • MP3 doesn't seem to store timestamps, ID3v1 tags use 4-byte years

I'm sure such a file format exists, but what? —Preceding unsigned comment added by 64.81.170.62 (talk) 19:49, 8 May 2008 (UTC)

It looks like cpio, when generating binary-format headers, uses a 32-bit Unix-style timestamp; I'm having trouble finding out whether it's supposed to be signed or unsigned, though. Plugwash (talk) 23:37, 15 July 2009 (UTC)

Example Changed?

I could have sworn that the example used to be animated, stepping up the seconds to the fateful moment in question. Has it changed? Have I lost my mind? Jouster  (whisper) 10:07, 11 March 2007 (UTC)

I found the diff, and reverted the change; I've left a message on User:Ksd5's talk page. Jouster  (whisper) 18:38, 11 March 2007 (UTC)

It seems like the example doesn't work. The binary representation continuously goes from 1000 ... 0000 to 1000 ... 0111, then switches from 1111 ... 000 to 0111 ... 1111, and then back!? The decimal readout is showing positives and negatives, one after the other. —Preceding unsigned comment added by Ywarnier (talkcontribs) 10:13, 29 December 2008 (UTC)

Windows Vista

I found this oddly amusing. I changed Vista's clock to 6/28/2038 and restarted my computer. Immediately, my computer alerted me that Windows Media Player wouldn't work, and later the Java SE binary shut off. Most of my programs seemed to run slower, too. The funny thing is, Vista's date can be set up to 2099, which I did not try. Could any of this be tested by someone else, and does it seem to be a good addition to the article (near where an example of Windows XP is given)? Thanks! ---Signed By KoЯnfan71 (User PageMy Talk) 20:25, 28 June 2007 (UTC)

Alas, it's not appropriate for the article; see WP:OR. That said, if you want to report it to a WP:RS, and they write an article on it, we can then reference that article. Jouster  (whisper) 20:32, 28 June 2007 (UTC)
Alright. Thanks for saying so. I don't want to go through the trouble of doing anything for this, it's just something I did in spare time that took 10 minutes, so I don't care. ---Signed By KoЯnfan71 (User PageMy Talk) 23:49, 1 July 2007 (UTC)

Year 2038 Date

Websites I have checked, including this one, give the crucial 2038 date as Tuesday, January 19, 2038, 03:14:07 UTC. The number of seconds from 1 Jan 1970, 12:00:00 AM GMT, is 2,147,483,647, the limit of a 31-bit counter. Assuming Julian years, this gives the date above. However, our calendar is Gregorian. Using the average length of the Gregorian year, 365.2425 days, I get the same date, but the time is 15:28:31 UTC. Furthermore, as one website points out, this count ignores the possibility of leap seconds being added.

Don Etz, (e-mail removed) 67.72.98.84 15:40, 12 July 2007 (UTC)

Unpredictable leap seconds notwithstanding: since neither the Gregorian nor the Julian calendar makes any mention of the number of seconds in a day - let alone specifies it differently - and since both specify exactly the same days in every year between 1970 and 2038 inclusive, I fail to see how the same timestamp interpreted under the two calendars would give times that differ by twelve hours, fourteen minutes, and twenty-four seconds. 202.27.216.35 05:02, 4 November 2007 (UTC) —Preceding unsigned comment added by JackSchmidt (talkcontribs)
The origin should be given unambiguously, as 1970-01-01 00:00:00 UTC. There can be doubt as to when 12:00:00 AM is. There should be no need to check web sites; the exact calculation is easy enough. There's no problem with 03:14:07; the problem occurs after that second ends. Leap Seconds are ignored in the count, which is better described as GMT or UT.
In JavaScript, new Date(Math.pow(2, 31)*1000).toUTCString() gives the rollover time: Tue, 19 Jan 2038 03:14:08 UTC in IE6, and Tue, 19 Jan 2038 03:14:08 GMT in Opera and Firefox.
It may be worth emphasising that those events are on the previous local date in North America.
At the start of the page, "since January 1, 1970" should be "from 1970-01-01 00:00:00 GMT". Eight hours since Jan 1 is breakfast-time on Jan 2.
82.163.24.100 20:14, 1 September 2007 (UTC)
No original research; if you can find a reliable source that says that that's the time (or if you can convince a reliable source that lists the one we're currently using that they're wrong!), go for it! Jouster  (whisper) 20:38, 12 July 2007 (UTC)
By the way, that last 0025 as in 365.2425 is not gradually applied, but only occurs as a chunk of 24 hrs once every 400 years, which already occurred in 2000, so it will not be used again between now and 2038. The same is true of the 24 in 2425; it only affects centuries, which do not have a leap year (other than once every 400 years). So in computing the time for rollover you need to simply count the number of leap years, 18 (the number of centuries is 1 and the number of years divisible by 400 is 1), and multiply by 24 hours in your computation. I was saying that using one char (8 bits) multiplies 256 times 30 years. I see now that is 7680 years, not 750 years. The leap seconds do not affect what time our clocks register when rollover occurs, only how soon that will occur. 199.125.109.118 16:57, 19 July 2007 (UTC)
I've added back the time on the epoch date and cited it to a definition page from The Open Group; you don't get much more authoritative than that. Plugwash (talk) —Preceding comment was added at 23:15, 22 January 2008 (UTC)

On an eMac ...

"On an eMac, January 2038 displays 38 days. You cannot go forward in months anymore. You can go forward to January 2039, though, and go a lot further. In January 2039, the faded numbers belonging to the previous month instead say "Na". This presumably stands for "not applicable", but this is normally abbreviated to "N/A"." Two things. First, does this violate no original research? There's no citation, and it 'presumes' things. Also, "Na" is most likely NaN, cut off at two characters (as one likely would to get '01' instead of just '1'). Trevor Bekolay 18:53, 24 July 2007 (UTC)

I agree. I've reverted the addition. ~a (usertalkcontribs) 19:19, 24 July 2007 (UTC)


The wrap-around to negative can crash the app.

A function like ctime() that has tables and such internally can crash if passed a time_t that is either too far in the future or negative. I only built with gcc 4.1.2 and MS Visual Studio Express 2005. The GNU Compiler Collection's 32-bit compiler still uses a 32-bit time_t by default. Microsoft's ctime() crashes if the time_t is negative or about a thousand years in the future. Most time-formatting functions have tables built in for all the goofiness in the calendar, and when you index outside a table, you generally have a crash.
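For illustration, a minimal probe along those lines (a sketch only, not the test program mentioned below; what happens out of range is implementation-defined):

 #include <stdio.h>
 #include <time.h>
 
 static void probe(time_t t)
 {
     char *s = ctime(&t);   /* may return NULL (or worse) out of range */
     printf("%lld -> %s", (long long)t, s ? s : "(ctime returned NULL)\n");
 }
 
 int main(void)
 {
     probe((time_t)0);            /* the epoch */
     probe((time_t)2147483647);   /* 2038-01-19 03:14:07 UTC */
     probe((time_t)-1);           /* one second before the epoch */
     return 0;
 }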

So, even though your signed 64-bit scalar value can represent times in the distant future and distant past, there is no guarantee that your application will display such times. Maybe it's an issue for an astronomical application that predicts stellar positions far into the past and future, and wants to display a date/time that people can (sort of) relate to, along with 'Tuesday'.

Anyway, heads up: Even though you can reserve and store that 64 bit value, your application won't necessarily correctly display calendar times billions of years from now, when the sun is just a cold, dark cinder and the moon and Earth are part of it, or billions of years in the past, before the Earth even formed. Not that a Gregorian calendar would mean anything in either context.

I added code to the article to illustrate the problem and test the runtime library. Hope it helps. —Preceding unsigned comment added by 70.7.17.202 (talk) 20:08, 15 December 2007 (UTC)

Somebody (not me) removed your addition. I agree with the removal because your program is considered original thought. For inclusion on Wikipedia, you're supposed to include references to reliable sources. ~a (usertalkcontribs) 05:25, 19 December 2007 (UTC)

hmmmm..1999

Apparently, many sources relate 1999 to the year of the world's end. Now all of us realize that 1999 already happened and we're all still here. But this discovery makes me think: since things will be resetting to 1901, the end of the world could in fact still be 1999, which means 98 years after 2038 (year 2136 = 1999) could potentially be the end? —Preceding unsigned comment added by Tofu-miso (talkcontribs) 23:10, 13 January 2008 (UTC)

Example in practice

The Perl CGI::Cookie module, when using 32-bit dates, is readily susceptible to this issue today, in immediate practical application.

When setting cookies meant to last far into the future, it is quite common to specify some arbitrary distant time. This is not, contrary to popular opinion, just a spammer or advertiser thing. For instance, you might want to be able to display to a user how many unique visitors have viewed their profile. Raw hits will count refreshes, and IP addresses are misleading with dynamic IPs and shared IPs. The most practical definition of 'unique visitor' is therefore to store a random string cookie; this will at least give you a closer approximation and a more trackable number.

Coders would like to rely on the internal mechanisms that let you set an expiry in relative formats such as '+100y', which the Perl CGI::Cookie and CGI.pm modules claim to allow.

However, if you specify such a date too far in the future, it will immediately fall prey to the Y2038 problem, and because HTTP cookies with an expiry set in the past *immediately* expire and are deleted from the user's cookies, said cookie will effectively never be set.

Example in code:

   # Assumes mod_perl: $r is the Apache request object and $dom holds the
   # cookie domain; both are set elsewhere in the real handler.
   my %cookies = CGI::Cookie->fetch;
   if (defined $cookies{visitor}) {
       $ENV{UNIQUE_VISITOR} = $cookies{visitor};
   }
   else {
       # Derive a (hopefully) unique ID from the PID, time, and client info.
       my $md5 = Digest::MD5->new;
       $md5->add($$, time, $ENV{REMOTE_ADDR}, $ENV{REMOTE_HOST}, $ENV{REMOTE_PORT});
       my $unique_id = $md5->hexdigest;
       my $vcookie = CGI::Cookie->new(-name    => 'visitor',
                                      -value   => $unique_id,
                                      -expires => '+30y',   # lands past the 2038 rollover
                                      -path    => '/',
                                      -domain  => ".$dom");
       $r->headers_out->add('Set-Cookie' => $vcookie);
       $ENV{UNIQUE_VISITOR} = $unique_id;
   }
   $r->subprocess_env('UNIQUE_VISITOR', $ENV{UNIQUE_VISITOR});

The results:

200 OK
Connection: close
Date: Sun, 17 Feb 2008 23:41:42 GMT
Server: Apache/2.0.55 (Unix) PHP/5.0.5 mod_jk/1.2.23 mod_perl/2.0.2 Perl/v5.8.7
Content-Type: text/html; charset=ISO-8859-1
Client-Date: Sun, 17 Feb 2008 23:41:42 GMT
Client-Peer: 64.151.93.244:80
Client-Response-Num: 1
Set-Cookie: visitor=aaceff88e393bf4agrehf13e810a7a12a; \
  domain=.gothic-classifieds.com; path=/; expires=Sat, 04-Jan-1902 17:13:26 GMT
Set-Cookie: session=c8701b98426greg9a5d31117e5f01ae75; \
  domain=.gothic-classifieds.com; path=/; expires=Mon, 16-Feb-2009 23:41:42 GMT


Dropping it to '+29y', slightly before the Y2038 problem, yields instead:

Set-Cookie: visitor=40c0b19ab5greh873155f9af5c29df3bc; \
  domain=.gothic-classifieds.com; path=/; expires=Mon, 09-Feb-2037 23:42:45 GMT

This shows that this is not just a 'thing to think about in the future' but something that directly affects code right now. —Preceding unsigned comment added by 208.54.15.180 (talk) 00:20, 18 February 2008 (UTC)

This is a minor variation of the problem AOL already had that is mentioned in the article. Jon (talk) 18:17, 16 July 2008 (UTC)

Reworked intro

I significantly updated the intro paragraph to make a few things more clear:

  • It's the combination of storing time as a signed 32-bit integer AND interpreting this number as the number of seconds since 00:00:00 Jan 1, 1970
  • Every system that deals with time this way is affected, regardless of whether the system is "unix-like"
  • Removed some of the more technical details, like C and time_t, from the intro. These are discussed in more detail later.
  • Removed the more obscure "also known as" Y2K+38 and Y2.038K

I think these changes will make the introduction more understandable to first-time readers. netjeff (talk) 20:32, 4 October 2008 (UTC)

Don't see how this is a problem

Why don't they just add another few binary digits to the date code? It should hold up for another few years that way, as the integer would last longer without "reverting". —Preceding unsigned comment added by Kazturkey (talkcontribs) 10:51, 8 October 2008 (UTC)

More bits is, indeed, the solution, and is what is being done. (Some systems (e.g. Mac OS X) have supported 64 bit time for years.) The rub is that all of the relevant software needs to know to use the new system calls that support this. You can't just add bits to existing system calls, because the structures that are used to pass the data are of fixed geometries; and you can't magically make existing software use a wider structure. Each piece of software must be modified.

(Something that does puzzle me is that just about everyone seems to think that it can't be done on a 32-bit CPU, or in a 32-bit OS—as if that ever stopped, for example, 8-bit computers from routinely manipulating 16-bit integers and 40-bit floats.)
überRegenbogen (talk) 12:48, 8 July 2009 (UTC)
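To illustrate that parenthetical point, a quick sketch: any C compiler targeting a 32-bit (or even 8-bit) CPU will happily emit the multi-word arithmetic for a 64-bit counter.

 #include <inttypes.h>
 #include <stdio.h>
 
 int main(void)
 {
     int64_t t = INT64_C(2147483647);   /* last second of signed 32-bit time */
     t += 1;                            /* no wraparound in 64 bits */
     printf("%" PRId64 "\n", t);        /* 2147483648 */
     return 0;
 }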

I'm confused about the type of time_t....

On i686 linux-2.6.xx, glibc 2.9 time_t is a (signed) long:

/* <bits/typesizes.h> */
#define __TIME_T_TYPE		__SLONGWORD_TYPE

/* <bits/types.h> */
#define __SLONGWORD_TYPE	long int.
/* ... */
__STD_TYPE __TIME_T_TYPE __time_t;	/* Seconds since the Epoch.  */

/* <time.h> */
typedef __time_t time_t;

But this program:

#include <stdio.h>
#include <time.h>

int
main(int argc, char** argv)
{
	time_t ticks = 2147483647;
	ticks += 2147483647;
	ticks += 1;
	printf ("SIZEOF time_t: %d bits\nMAX    time_t: %u\n", sizeof(ticks)*8, ticks);
	return 0;
}

...compiled with:

gcc --std=c99 -o foo foo.c

...or:

gcc --std=c89 -o foo foo.c

...outputs the following:

SIZEOF time_t: 32 bits
MAX    time_t: 4294967295

Shouldn't ticks be overflowing? Why is it getting promoted to an unsigned long? (I specifically didn't initialize it to 4294967295U to make sure gcc didn't do anything sneaky like that.)

I'm not sure why, but it appears that it's the "2106 bug" for current 32-bit linux implementations using the GNU toolchain? 24.243.3.27 (talk) 05:51, 24 November 2008 (UTC)

printf("%u\n", -1) will produce the same result. ticks has the value −1 at the time of the printf, but because you used %u instead of %d it was printed as an unsigned integer. -- BenRG (talk) 14:27, 24 November 2008 (UTC)
Doh! Thanks for spotting that. For some reason my brain put the %u specifier (DWIM!). Sorry for the noise. 24.243.3.27 (talk) 17:37, 25 November 2008 (UTC)

openssl example

Here is an example demonstrating something or other with openssl. It used to be part of the main article. It was moved here for posterity, as it is unclear to anybody unfamiliar with openssl exactly what this example is trying to demonstrate:

$ date
Su 6. Jul 00:32:27 CEST 2008 
$ openssl req -x509 -in server.csr -key server.key -out server.crt -days 10789 && openssl x509 -in server.crt -text | grep After
Not After : Jan 18 22:32:32 2038 GMT
$ openssl req -x509 -in server.csr -key server.key -out server.crt -days 10790 && openssl x509 -in server.crt -text | grep After
Not After : Dec 14 16:04:21 1901 GMT (32-Bit System)
$ openssl req -x509 -in server.csr -key server.key -out server.crt -days 2918831 && openssl x509 -in server.crt -text | grep After
Not After : Dec 31 22:41:18 9999 GMT (64-Bit System)

--24.190.217.35 (talk) 23:03, 16 December 2008 (UTC)

clock 'reset' in the example

I think a more appropriate term for what happens to Unix time in 2038 is 'roll over'. 'Reset' is generally taken to mean 'set to all zeros', which is clearly not what is happening. My crude calculation indicates that the current system, if left unchanged, would read zero again 68 years after 2038, i.e. in 2106. But this would happen only if the world suffered with a dysfunctional Unix time for 68 years without fixing it.

72.19.177.68 (talk) 17:40, 14 January 2009 (UTC)
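A rough check of that figure (a sketch; leap-day handling glossed over):

 #include <stdio.h>
 
 int main(void)
 {
     /* After the 2038 wraparound the counter sits at -2^31 and needs
        another 2^31 seconds to climb back up to zero. */
     double years = 2147483648.0 / (365.2425 * 86400.0);
     printf("%.1f years after 2038 -> roughly %d\n", years, 2038 + (int)years);
     return 0;
 }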

Software engineering disasters???

Why is this in the 'Software engineering disasters' category? It's not a disaster, it's a known limitation. In the same way, a 32-bit int can only hold a certain range of numbers; hardly a disaster. I would like to see this removed from this category. 203.134.124.36 (talk) 03:09, 11 February 2009 (UTC)

Most disasters are precipitated by bad assumptions. Of course we haven't got that close to Y2K38, but it looks to me like it's shaping up to be a repeat of Y2K, with a meh attitude taken until the last minute and then a panic to fix things in the last few years. Plugwash (talk) 23:23, 15 July 2009 (UTC)

TCP timestamp issue?

Will TCP also have a rollover problem in 2038 since it uses 4-byte times? Since we might not have migrated to IPv6 by then, won't this be another affected software component? -- Autopilot (talk) 23:18, 21 February 2009 (UTC)

No. TCP Timestamps do not use a specified baseline (or "epoch") clock value, and the spec (RFC 1123) has implementation requirements that directly address wraparound. RossPatterson (talk) 17:56, 22 February 2009 (UTC)
Ok. I don't see the discussion of wraparound in RFC 1123, however, other than the general robustness principle. Did you mean PAWS in RFC 1323? -- Autopilot (talk) 18:30, 22 February 2009 (UTC)
Sorry, typo on my part - yes, I meant 1323. The timestamps are indeed 32 bits, but as the RFC says, they are "obtained from a (virtual) clock that we call the "timestamp clock". Its values must be at least approximately proportional to real time,", rather than being a count of seconds since 1970-01-01T00:00:00. By way of illustration, as has been observed elsewhere, systems based on BSD Unix use a count of 1/2-second intervals since boot time, which meets the proportionality requirement and yet is unlikely to wrap - few systems run for 68 years between boots. RossPatterson (talk) 19:51, 22 February 2009 (UTC)

XKCD Reference

I've seen what happens to an article anytime the subject is mentioned in XKCD, and just wanted to let anyone working on this know that today's comic did just that for the Year 2038 problem. I'm not saying one way or the other whether or not it should be mentioned, just wanted to give a heads up in hopes that it will curtail the inevitable discussion that has taken place in many other Wikipedia articles. Scyclical (talk) 04:41, 8 July 2009 (UTC)

Removed xkcd reference, in accordance with other articles that have been vandalized by xkcd fans. See the discussion there instead, and remember, kids, there are other ways to display your fanboyism. Get a tattoo of a heart with the text "Randall, get out of my heart" or something. —Preceding unsigned comment added by 81.216.131.26 (talk) 17:31, 14 July 2009 (UTC)

IBM vague report

Yes, the article linked to at IBM says "Some sources also indicate" that ext4 fixes things until 2514, but they do not say who those sources are, and my reading of the source code gives me no indication of such a feature. Ext4 adds an extra 32 bits to inode timestamp fields to record the number of nanoseconds between seconds of the normal time stamps, but doesn't change the interpretation of the second count. Could someone else also read the source code and confirm my reading? Then please remove the reference to the misleading IBM article. —Preceding unsigned comment added by Thyrsus (talkcontribs) 02:05, 9 July 2009 (UTC)

mortgages

In 2008 Slashdot ran a story, "Y2K38 Watch Starts Saturday" (http://it.slashdot.org/article.pl?sid=08/01/15/1928213), which speculated that some mortgage-related software would have problems once it got to 30 years before the 2038 bug hits. Did anything come of this? (A quick google doesn't seem to be turning anything up.) Plugwash (talk) 23:06, 15 July 2009 (UTC)

40 year mortgages are not rare. — Arthur Rubin (talk) 00:05, 16 July 2009 (UTC)

Other software problems in 2038?

Is it possible that the hypothetical impact of 99942 Apophis may cause some computer software to fail before or in the year 2038? - Zelaron (talk) 05:59, 21 January 2010 (UTC)

lol

"Using a (signed) 64-bit value introduces a new wraparound date in about 290 billion years, on Sunday, December 4, 292,277,026,596. However, this problem is not widely regarded as a pressing issue." - I think that sentence is the funniest thing I've read on wikipedia. Kudos. (Granted, that says something about the type of prose most articles have, but still.) 165.123.166.240 03:01, 18 February 2007 (UTC)

that remark really should be removed, as it is wholly unverifiable. Niffweed17, Destroyer of Chickens 02:21, 24 February 2007 (UTC)
It's one of the very few phrases I've seen on wikipedia that's a harmless attempt at humor. On the other hand, it's been a horrible point of disagreement. Maybe it's not worth it. ~a (usertalkcontribs) 03:39, 24 February 2007 (UTC)
What exactly is unverifiable about it? —Pengo 07:57, 24 February 2007 (UTC)
It's impossible to prove a negative. You could say anything isn't a pressing issue, but it doesn't mean it's worth saying. I came here to ask about taking it out, but you convinced me. Superm401 - Talk 09:42, 26 February 2007 (UTC)
I think we should take it out - even if it is humorous, it's not in an encyclopedic tone. -- stillnotelf is invisible 23:13, 26 February 2007 (UTC)
I strongly disagree. This is a factual statement that can be verified (at least the date), and a date several times further in the future than the time since the Big Bang is plainly not a pressing problem. I suppose this is also analogous to the Y10K problem. If you insist on a compromise here, I would suggest wording somewhat similar to how the Y10K problem is presented. --Robert Horning 21:14, 8 March 2007 (UTC)
I have introduced an appropriate source to verify the statement. BeL1EveR 21:57, 28 May 2007 (UTC)
There were disputes as to the veracity of the claim that the solar system will eventually be destroyed by the sun? – Mipadi 22:11, 28 May 2007 (UTC)
The link is more a means of putting the timescale into perspective, thus lending credibility to the rest of the sentence.BeL1EveR 22:46, 28 May 2007 (UTC)
"Credibility to the rest of the sentence?" It's a piece of humor. I don't think there's such a need to treat it so seriously. – Mipadi 01:09, 29 May 2007 (UTC)
Perhaps the remnants of humanity in Andromeda will still have earth wall clocks?? Anyone have a timeframe on the Big Crunch? And assuming time is linear, of course ;) --Nonnb (talk) 13:57, 10 September 2010 (UTC)

Solutions

I know this is original research, but isn't a simpler solution to change the epoch date every 30 years? After all, when you fill out a checkbook you only use the last two digits of the year because the first two are already filled in for you. So just change the epoch to 2000, and in thirty years to 2030, and so on. Add a one-byte count of which epoch you are using and that gets you 750 years down the road. 199.125.109.11 22:30, 13 June 2007 (UTC)

I also know this is original research. But, where do you get 750 years from? You seem to be proposing a much more complicated version of a 5-byte number to store dates. A 5-byte number to store dates would come to 2^(8×5−1) seconds. That seems like it could be much more than 750 years. But, if you're using a 5-byte number to store dates, you probably would just end up using an 8-byte number to store dates (which is what they are doing). ~a (usertalkcontribs) 22:40, 13 June 2007 (UTC)
No, that doesn't work. Historical data is the problem -- if you have a file that you edited in 1999, and you move the epoch from 1970 to 2000, then your file's timestamp now says 2029, which is brokenly wrong. 64.81.73.35 (talk) 20:37, 9 January 2010 (UTC)
I don't see how your proposal of adding an "epoch counter" is any simpler than simply expanding the field to 64-bit. Plugwash (talk) 14:54, 13 February 2010 (UTC)
Agree with above; the problem is historical data. By changing the epoch, previously saved data would be misrepresented. This is worse than Y2K, as this is an OS problem instead of a coding problem. --Gamerk2 (talk) 16:06, 2 March 2010 (UTC)
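A small sketch of that failure mode (hypothetical timestamps; the crude 30-year offset stands in for the proposed epoch shift):

 #include <stdio.h>
 #include <time.h>
 
 int main(void)
 {
     /* A file stamped 1999-01-01 00:00:00 UTC under the 1970 epoch. */
     time_t stamp = (time_t)915148800;
 
     /* Re-basing the epoch to 2000 shifts the meaning of every stored
        value by about 30 years. */
     time_t misread = stamp + (time_t)(30LL * 365 * 86400);
 
     printf("stored:  %s", ctime(&stamp));     /* around 1999-01-01 */
     printf("misread: %s", ctime(&misread));   /* around 2028-12-24 */
     return 0;
 }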

1901?

Guest edit here: should it really say 1901 at the end of the first paragraph, or does that mean the date it will go to afterwards?

Yeah, 1901 refers to the date it will rollover to. I agree that the wording can be a bit confusing, so I added a bit to clarify. Trevor Bekolay 16:13, 8 August 2007 (UTC)
shouldn't it be 1970 - as it rolls over and the first year is 1970... ?
no, because it rolls over to a number that is minus 68 years from the epoch --UltraMagnus (talk) 10:44, 23 September 2009 (UTC)

In Fairness

Does everyone believe that this will happen? It is the same with the "Y2K problem" and the "10000 problem"...it will be fine!!! —Preceding unsigned comment added by 91.142.229.87 (talk) 19:52, 11 November 2007 (UTC)

The article states "Most operating systems for 64-bit architectures already use 64-bit integers in their time_t. The move to these architectures is already underway and many expect it to be complete before 2038." So no, I don't think anyone thinks this will cause widespread problems in 2038. For 32-bit architectures though, it's not really an issue of 'will it happen' but 'what problems will occur as a result of it.' Trevor Talk 19:14, 12 November 2007 (UTC)
Also, how many apps are out there that don't use time_t but instead use int (which I believe is still 32-bit on some 64-bit systems) or, worse, int32_t? How many file formats store timestamps as a 32-bit integer (admittedly, apps using those file formats will be able to buy a bit of time by redefining the timestamps as unsigned)?
I suspect that like with Y2K there will be a mad rush to fix the things that are still broken in the few years before 2038 and like with Y2K there will be very few problems left by the rollover date. Plugwash (talk) 14:14, 17 December 2007 (UTC)

I am no scientist, but I find it humorous that, from what I have read, computer programmers were being paid to "fix" the computers before the year 2000, and they are the ones saying it would happen, right? "The clouds are going to fall out of the sky soon and crush the earth" "omg noooo!" "but for a small fee I can fix the sky above your house so that it won't happen to you" - everybody gets "cloud fixed", and when the day he said it would happen on arrives, everybody thanks him for fixing the issue. 69.220.1.137 (talk) 20:14, 13 July 2008 (UTC)

Different issue; Y2K was, at worst, a coding problem that software engineers knew about for decades. The 2038 problem is an OS problem, which affects a great many programs. Moving to 64 bits natively solves the OS problem, but any programs in use that use the 32-bit time value would still need to be patched. If the problem were 2012 instead of 2038, this would be far bigger than Y2K, simply because of how much of our backend uses Unix/Linux in particular, but we'll be well into the 64-bit [and possibly beyond] realm by 2038, and most critical software will hopefully have been updated by that time. —Preceding unsigned comment added by Gamerk2 (talkcontribs) 16:12, 2 March 2010 (UTC)

But seriously, why don't people use unsigned time? I think that would confine this problem to computers that may be too obsolete for use by that time. —Preceding unsigned comment added by RSXLV (talkcontribs) 14:10, 30 March 2010 (UTC)

Actually, I think that originally they did! The only problem was that, at the time, there was no unsigned type. There certainly was no time_t. The addition of unsigned to the C language did not result in correcting the include files. The C standard defined time_t, but did not require a specific type. (I wish I had my ANSI C spec handy, to see if it was even required to be arithmetic. I don't think it was.) In any case, I am pretty sure time_t being signed is historical accident and convention, not design. David Garfield (talk) 21:27, 23 August 2010 (UTC)

Cleanup

I improved the grammar/style of this page. Lovek323 (talk) 02:57, 13 February 2010 (UTC)

Just wondering…

Thanks, Lovek323, for your cleanup of the tone of the article.

It seems to me that an advanced OS like Mac OS X is providing a layer of abstraction to dates. I just did a =NOW() in Excel and added 50 × 365.25 days to it. The resulting value returned March 26, 2060. Note that Excel doesn't have a date format showing the day of the week (like "Friday, March 26, 2060"), so it only had to handle leap years; indeed, such a simple calendar function could be built into the application. However, if I simply plug the date "3‑26‑2060" into FileMaker Pro, it properly knows that it is a Friday. However, I also note that I cannot go to System Preferences > Date & Time and set a date greater than 12/31/2037.

The above leads me to strongly suspect that Apple’s OS X might rely only upon Unix’s time services for Finder activities such as dating files and behind-the-scenes housekeeping (such as periodically rebuilding the ‘whatis’ database). It clearly appears that Apple is using its own OS X‑based calendaring system for all properly written applications (those written in Carbon or Cocoa and which are compiled using compilers that call on Apple APIs) to call upon. This makes perfect sense. The Mac has long been used in astronomical mapping, where star and planet positions can be calculated and plotted for dates that are many centuries in the future. I’m confident that FileMaker and all these obscure astronomy applications intended for amateurs don’t have to make their own calendar work-arounds. It’s clear that Apple likely has a calendar API that works for dates many centuries from now.

This article could clearly benefit from the contributions from a programming expert. Greg L (talk) 00:41, 30 March 2010 (UTC)

Why didn't they use a Struct

[code]{ long long year; short int month; short int day; short int hour; short int sec; short int millisec; short int microsec; short int picosec; short int ridiculoussec; }[/code] nabs. --79.130.4.132 (talk) 11:25, 3 August 2010 (UTC)

To save space and calculation time, I believe. That would make time_t a non-arithmetic type (at least in C; it could be overloaded for C++, at the expense of considerable time cost). This doesn't have much to do with improving the article, but it is a reasonable question.
Also, there is (or was, in the initial standard) no guarantee that "short int" be longer than 1 byte, making "millisec" inoperative. — Arthur Rubin (talk) 15:17, 3 August 2010 (UTC)
As Arthur Rubin says. For example, to calculate the "distance" between two dates you can do a simple subtraction: date1 - date2 = distance. --Enric Naval (talk) 15:25, 3 August 2010 (UTC)
They did use a struct: specifically, struct tm. It is the decoded form. It is too painful to get the OS to use a non-linear time for its internal time. If you look at what is defined in ANSI C, difftime exists to allow the subtraction to be performed on an otherwise unspecified type. POSIX stipulated an arithmetic type because they standardize current practice. Personally, before POSIX screwed it up and C99 adopted that, I think the best definition for time_t would be something like: typedef struct { char _time_t_data[SOME_VALUE]; } time_t; Finally, short int IS guaranteed (in ANSI C) to be at least 16 bits, and pretty much always was at least that size. David Garfield (talk) 21:53, 23 August 2010 (UTC)
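To illustrate why an arithmetic time_t is convenient (a minimal sketch):

 #include <stdio.h>
 #include <time.h>
 
 int main(void)
 {
     /* Linear seconds make interval math trivial: difftime() (or plain
        subtraction on POSIX) needs no calendar tables. */
     time_t a = time(NULL);
     time_t b = a + 86400;   /* one day later */
     printf("%.0f seconds apart\n", difftime(b, a));   /* 86400 */
     return 0;
 }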

False

I tried setting my computer's date to that date and time. But nothing happened.--125.25.238.245 (talk) 22:31, 21 September 2010 (UTC)

Just because your computer didn't do a thing doesn't mean it's false. Windows (XP at least) can still work fine. If an application needs to use the system date, it'll crash or glitch. It won't have immediate effects. 193.136.62.42 (talk) 15:36, 12 October 2010 (UTC)

The picture is slightly ambiguous

It's not too much of a problem, but the article's picture has two lines labeled "Date:". One can't immediately recognize that the first line refers to the system time, (whereas the second is the real time), until the rollback happens. I was a bit confused myself, at first. So many letters and numbers, and animated; I was trying to pay attention to every detail and couldn't quickly distinguish both lines without getting confused. I eventually understood everything, of course. The last two lines should be labeled distinctly. Also, it'd look better if it said "Time:" rather than "Date:", seeing as those lines specify both the time and date. I'd change the gif myself, but I'm not sure if the software I have will allow me to do it easily (it's been a long while since I last edited a gif). Can anyone fix those aspects? 193.136.62.42 (talk) 15:36, 12 October 2010 (UTC)

What is Futureproof?

How long should be considered futureproof? Obviously the guys writing code in the 70s thought that 30 years was long enough. Then 30 years later, the code was still in use and Y2K came. So should a system be designed to work for at least the next 100 years, 1,000, or even billions, as is true with the 64-bit timestamp? It's hard to say. I saw one replacement for 32-bit Unix time that had a resolution of microseconds and a limit of 300 years. That seems too soon. I think a good minimum is at least a billion years. Today's computers are fast enough that adding a few extra bits to the timestamp now seems like a good idea if it can avoid catastrophe when all of the original system designers are dead and no one remembers how to code anymore.--RaptorHunter (talk) 06:35, 22 November 2010 (UTC)

Will Mac be affected too?

Since OS X is based on Unix, will that be affected? —Preceding unsigned comment added by 88.104.96.218 (talk) 18:24, 16 April 2011 (UTC)

Hopefully, by 2038 all OSX systems will be on 64-bit. This is going to be more a problem for embedded systems.--RaptorHunter (talk) 21:45, 9 May 2011 (UTC)

S2G ???

Just curious if there is any meaning such as "Swear to God" or perhaps a reference to a nuclear reactor for the acronym S2G? reply via lookmomnohands (talk) --Lookmomnohands (talk) 06:38, 6 May 2011 (UTC)

I searched and found nothing so I removed it.--RaptorHunter (talk) 04:23, 14 June 2011 (UTC)

please improve your accuracy

when discussing what will happen in 2038 ... the term "undefined behavior" comes to my mind, as you do not know what measures the systems designers took to compensate for this limitation. It is possible you will end up with a simple rollover or an overflow condition ... it is also possible that the vendor has already accounted for this, maybe by adding another 64-bit value as an offset.

additionally, most unix systems have been using unsigned integers for this for some time. —Preceding unsigned comment added by 71.197.193.162 (talk) 20:35, 20 May 2011 (UTC)

If you are claiming that most embedded systems (the unix systems that are likely to fail in 2038) use unsigned integers, then that is great news. Please cite a source.--RaptorHunter (talk) 04:16, 14 June 2011 (UTC)

daft problems in article

Just the two significant ones, the first of which I may try to fix, the second might require a bit more work and restructuring...

1. Talking about the mechanism of the rollover... it's caused by "integer overflow" (in a binary system), but also when the system "runs out of decimal digits"?? An integer doesn't HAVE any decimals in it, and it's not using base 10 in the first place anyway... It just flips the MSB of the binary integer (the same way it flipped the second-most-significant one half a rollover period earlier), instantly turning the stored figure from maximum positive to maximum negative. (An issue rather simply dealt with in the short term by treating it as an unsigned integer, after a minor bit of subtraction to convert the part that's counting "down" into a count "up"?)

2. The old chestnut about embedded systems in cars, planes, etc. going haywire. This crap was trotted out for Y2K as well (conspicuously failing to bear fruit) and is just as false. When was the last time you set the clock on an engine management system, even after the battery had been disconnected? I know for one thing that, so long as the moving parts are in working order, the electrics haven't corroded, and you can still get hydrocarbon-based fuel and lubricant, my own 2001-vintage car will still happily run in 2040. It doesn't HAVE a clock on the ECU (the dashboard time display comes off the *trip* computer, but that merely receives signals from the ECU (fuel use, distance, etc.), and the clock is set by human hand, without even a date... daylight saving adjustment is also manual). Time-of-day is IRRELEVANT to most of the mentioned systems, and as it requires extra cost and hardware/coding complexity, it is not kept track of. Those systems where it is relevant - radio beacons, GPS, more complex UI-driven operating systems, etc. - have their own issues, usually separate from the Unix one.

E.g. GPS, at least at first, rolled over about once every 20 years (1024 weeks), as its calendar was based on a 10-bit week counter (and, one presumes, a higher-precision timer; at least 20-bit for 1-second accuracy... probably 32-bit or more for the requisite timing accuracy (1/4000th sec or better) to actually calculate distance from radio propagation delay). As it was mainly in order to keep sync with satellites that would use the same timing scheme, with a variation of 1-2 seconds being quite extreme, it didn't/doesn't actually matter that much, but it's since been extended to be good well into the 22nd century. And I bet the time readout on your domestic GPS only uses that as a time-of-day reference, with the actual day and year being kept track of by a different subsystem not related to the core radiolocation circuits.

Other non-Unix computer systems tend NOT to use the same internal clock (2^31 seconds since 1/1/70 being a quirk of its programmers), and it would only be an issue while syncing time or checking update timestamps against internet resources. Macs use a different calendar, as do PCs, iPhones, Palm Pilots, cellphones... etc. It'd be reasonable to assume that in-car sat-navs, entertainment systems, aeronautical navigation rigs, etc. would have different calendars onboard. Additionally, it'd be a scandal of epic proportions if something were allowed up into shared airspace without being checked and cleared as safe beforehand, and certainly without a suitable failsafe (i.e. a regular compass and a pilot who has half an idea what they're doing, with the autopilot disengaging and sounding an alarm on encountering an error condition, such as "OMG ITS SUDDENLY 1970 AND I WAS ONLY MADE IN 2004"). Rest assured the manufacturers are all over this stuff like a rash. I doubt it's escaped Boeing's attention that some of their pioneering 707 and 727 jets are still in service - and bearing their name - almost 40 years after manufacture (and way beyond the intended service life).

Thing is, what can I put in place there? Or should the whole section just be nuked as generally false and unsalvagable? 193.63.174.11 (talk) 13:21, 12 January 2011 (UTC)

I think that anything you don't like should "just be nuked as generally false and unsalvagable[sic]". —Preceding unsigned comment added by 69.171.176.211 (talk) 19:10, 25 January 2011 (UTC)
The Y2K bug wasn't a myth. Per Y2k#Cost, "Worldwide $308 billion was estimated to have been spent on Y2K remediation," and this was for a minor date formatting error. The 2038 problem deals with how computers represent time on an internal level. Furthermore, unlike Y2K, the 2038 problem can't be solved with a simple software fix. Because all of the non-64-bit systems in 2038 will be embedded systems, they will have to be completely replaced (rip out the old electronics) at extreme cost.--RaptorHunter (talk) 21:44, 9 May 2011 (UTC)
Sure, Y2K wasn't a myth, but the reporting of what could happen was FULL of myths: airplanes falling from the sky, banking systems resetting your accounts to $0, etc. My favorite was that all the traffic lights would stop working because the systems would divide by zero and crash. The 'Vulnerable Systems' section reads the same way as the propaganda did in '99. I kept my Red Cross Y2K preparedness pamphlet for the lulz, but it's important that Wikipedia doesn't become lulz-worthy with bogus information. There are issues which will be caused by the 2038 problem, but most embedded systems are not among them: if something never has its date and time set, then it really doesn't care about the epoch and it will continue working. The safety systems listed in the article will not be influenced by the bug since they don't care about time, and they should just be removed from the article. Along the same lines, the 8- and 16-bit embedded system part should be dropped as well, since those systems don't care about time anyway. 129.79.35.119 (talk) 18:29, 18 July 2011 (UTC)

Dubious References.

There is currently a link

* [http://www.2038.com/ 2038 Computer Bug Information Site]

that is listed in the external links section. I hope that nobody is using this link for references within the article. I suggest removal of the link. The information it provides is dubious because the article is so poorly written. It is like that Nigerian bank scam: it is so poorly written that you know it must be false. Whether the information at 2038.com is correct or not, I do not know. What I do know is that it does not inspire confidence. Cliff (talk) 16:56, 9 February 2011 (UTC)

Another one:
* [http://www.2038bug.com/ The Year-2038 Bug Website]
It says nothing that the article doesn't mention already. If you look at the other posts in that blog, they are all spam-filled badly-written copy-pasted filler. --Enric Naval (talk) 19:26, 18 July 2011 (UTC)

Further explaination required

Embedded systems are most likely to be affected by the 2038 bug. […] As of 2010, most embedded systems use 8-bit or 16-bit microprocessors, even as desktop systems are transitioning to 64-bit systems.

This statement seems odd. Any 8-bit processor can handle 64-bit words the same way it handles 32-bit words: in small chunks. So the type of processor should never be a problem. Maybe the embedded systems do not have enough RAM to store all timestamps as 64-bit words???

There is no universal solution for the Year 2038 problem. Any change to the definition of the time_t data type would result in code compatibility problems in any application in which date and time representations are dependent on the nature of the signed 32-bit time_t integer. […] Most operating systems designed to run on 64-bit hardware already use signed 64-bit time_t integers[…]

1. The easiest way to solve this problem would simply be to redefine time_t to be int64_t.
2. Applications should not depend on a specific implementation of time_t. That's unclean programming.
3. It might also affect software distributed binary-only. However, software should not be distributed binary-only.
4. In turn, the change should not affect cleanly programmed, recompiled applications.
5. On a little-endian system like the x86, the change would not cause any trouble at all. All programs that insist on time_t being 32-bit only will just ignore the extra bits. That's all.
6. The definition change of time_t on 64-bit hardware will cause exactly the same ‘problems’ it would cause on 32-bit hardware.
Am I missing something??? -- Sloyment (talk) 05:20, 9 August 2011 (UTC)
As for 1: unless you took special precautions in the process, you would break binary compatibility of any library (including libc) that exported a function with time_t in its interface. Even with special precautions taken (that is, introducing two separate versions of the functions and modifying the headers to use the right one based on the current definition of time_t, as was done for large file support), you would have to recompile everything that required working date/time functionality.
2, 3 and 4 are an expression of your opinion that the only software that matters is cleanly written open source software.
5 may be true for function return values, but time_t is often passed around in other ways, for example by passing a pointer to a struct timeval to a library function. If the library function uses the new definition of time_t and the program uses the old one, then this will result in memory corruption. As mentioned in 1, this can be worked around with special precautions, but applying those precautions would require modifying the library.
6 is true, but the key factor is that the pain is in the transition. Using 64-bit time_t from the start is nowhere near as painful (though there may still be problems with badly written software and with compatibility mode for 32-bit applications on 64-bit systems).
-- Plugwash (talk) 15:53, 9 August 2011 (UTC)

Hmmm

This doesn't really count as a doomsday thing, just the time resetting and data being lost. So no deaths or anything. --Sonicfan0329 (talk) 16:00, 7 September 2011 (UTC)

Formula

Why aren't we showing the formula? I think it's

1970 + ((2^32 / 2) / (60^2*24*365))

--Ysangkok (talk) 22:53, 25 October 2011 (UTC)
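For what it's worth, a quick numeric check of that formula (a sketch using the average Gregorian year):

 #include <stdio.h>
 
 int main(void)
 {
     /* 2^31 seconds past 1970; close enough to land in January 2038. */
     double rollover = 1970.0 + 2147483648.0 / (365.2425 * 86400.0);
     printf("%.2f\n", rollover);   /* ~2038.05 */
     return 0;
 }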

I'm not a computer god, but...

The image doesn't make much sense. The problem is caused by running out of bits to store the time, e.g. we run out of room for the "1"s that replace the "0"s. Now if I understand correctly, this means that the binary digit isn't being represented properly. In the animation it is a zero when the year changes to 1901; however, I don't think that should be right, because the zero could still be filled by a one. Only AFTER this should the time reset occur. I might be wrong, but I think that the image has this fault. — Preceding unsigned comment added by 24.101.100.233 (talk) 04:08, 4 November 2011 (UTC)

The first bit is the sign bit. --Ysangkok (talk) 01:49, 15 November 2011 (UTC)

Paragraph removed

I removed this sentence from the intro:

However, any other non-Unix operating systems and software that store and manipulate time this way will be just as vulnerable.

This sounds like a Unix apologist trying to make sure nobody thinks bad things about Unix. Unless some other OS can be identified that has the same bug, this line is about a hypothetical OS and it doesn't belong in the article. Comet Tuttle (talk) 15:59, 17 October 2011 (UTC)

Quite the conundrum. On the one hand, I might have removed that sentence myself, due to the fact that the article repeatedly makes it clear that the issue is about how software expects to receive its time info, and has nothing to do with Unix vs Microsoft, or other OS's. And yet, your comment makes it clear that computer illiterates, like yourself, can repeatedly miss that point, so maybe it should be put back. And finally, if I put it back I should include some sort of reference to justify it, but a link to Microsoft's support forums, where many people have asked why this or that piece of software malfunctions in Windows 7 if you push the date to 2038, would be original research. And the subject is so boring you can hardly find any news or magazine articles on it that are written in this century. What to do, what to do? Haha... while thinking about it, I read your talk page, where you try to convince your friend that IE is just as fast as Chrome. What kind of a friend are you? Why don't you just go over to his house and kill his dog while you're at it? Oh, well, I'll let it pass for now. IE is as fast as Chrome... 71.189.63.114 (talk) 10:26, 11 February 2012 (UTC)
Even if the base operating system can support dates beyond 2038, its interaction with software can still be affected. A problem I recently ran into involved the manipulation of dates 30+ years into the future while using the built-in date and time functions of the PHP language. Various functions failed in differing unexpected ways while using a 32-bit build of PHP on an otherwise 64-bit UNIX operating system. While the problems were easily resolved by upgrading to a 64-bit build of PHP, I found the same errors exist when running PHP on Windows, even though both were running in 64-bit mode. psoplayer (talk) 12:04, 16 June 2012 (UTC)

Error in Description:

Times beyond this moment will "wrap around" and be stored internally as a negative number, which these systems will interpret as a date in 1901 rather than 2038

I'm pretty sure that if the moment 'wraps around', it's going to wrap around to Jan 1 1970 and start counting backwards, not jump immediately to 1901. I've never edited a Wikipedia post before, so I'll just start with a comment here.

Mixologic (talk) 05:33, 13 November 2011 (UTC)

Why do you think that? Look at the image. Why wouldn't the sign bit get set? Or alternatively, what do you think would clear the sign bit? What do you mean by "count backwards"? --Ysangkok (talk) 01:55, 15 November 2011 (UTC)

Sorry - I was forgetting some basic computer science: once the sign bit is set, it's the *two's* complement of the number that determines its value, not just the value with a sign added. I was thinking that 10000000 00000000 00000000 00000001 = -1, when in fact it's -2147483647. Indeed, the image is correct and the wrap-around moment does jump back to 1901.

Mixologic (talk) 00:10, 30 November 2011 (UTC)
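For later readers, a one-liner sketch of that jump (two's-complement representation assumed, as on essentially all current hardware):

 #include <stdint.h>
 #include <stdio.h>
 
 int main(void)
 {
     int32_t t = INT32_MAX;             /* 2038-01-19 03:14:07 UTC */
     t = (int32_t)((uint32_t)t + 1U);   /* wrap without signed-overflow UB */
     printf("%d\n", t);                 /* -2147483648 -> 1901-12-13 20:45:52 UTC */
     return 0;
 }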

This might be worth mentioning in the article actually; I had the same thought and had to consult the talk page. — Preceding unsigned comment added by 206.222.164.187 (talk) 08:47, 31 January 2012 (UTC)

Embedded Systems like.... Brakes?

Why do brakes need to know the year? — Preceding unsigned comment added by 206.180.38.20 (talk) 15:20, 18 May 2012 (UTC)

In some cases, in order to save costs, one processor is used for multiple functions. If the same processor controls your anti-lock brakes and your dashboard clock, a date bug in the clock code could have an effect on the brakes. If, on the other hand, there is a dedicated processor (or four of them) for the anti-lock brakes, it wouldn't normally have any code in it that handles dates. Then again, it also saves money to re-use the same PC board for multiple functions, so an anti-lock brake PC board might have an unneeded real-time clock, and it might have some re-used real-time operating system code that calculates dates but does nothing with the result. --Guy Macon (talk) 16:48, 18 May 2012 (UTC)

Transport Layer Security

For interest, TLS (1.0 to 1.2) contains "uint32 gmt_unix_time" (reference: http://tools.ietf.org/html/rfc2246). Rombust (talk) 12:50, 4 October 2011 (UTC)

  • This is an issue with a number of wire formats, and was given as the reason Tru64 didn't go to a 64-bit time_t in the '90s. What is the general solution to this problem? Why shouldn't 32-bit UNIXes switch to a 64-bit time_t for compatibility? -- Resuna (talk) 11:28, 19 March 2012 (UTC)
    • For wire protocols and data files, each case needs to be looked at on its merits. If datetimes before 1970 don't need to be represented and negative numbers aren't used for error conditions, then using unsigned 32-bit is an option (and apparently what SSL does). If only datetimes close to the present need to be represented, then a sliding window is an option (IIRC, NTP uses this technique). If neither of those is an acceptable solution, then a transition to a new version with a bigger time format is required.
    • Regarding operating systems (whether 32-bit or 64-bit), changing the default definition of time_t would change the ABI of any functions that pass values of that type (or of a structure containing that type). One option would be to use the same approach as with large file support: introducing a parallel set of APIs and then using #define to allow programs to select which version to use. Still, each piece of software would need to be checked to make sure any interface breakage was handled correctly before recompiling with the new settings.
    • -- Plugwash (talk) 18:28, 8 February 2013 (UTC)

My CORRECT edit was removed.

If I have this correct, it is able to store up to 2^31 years on construct_t without overflow. Last I checked, 2^31 was 2,147,483,647. You had it as 2,147,485,547. I corrected this. Then you uncorrected it. 74.176.77.203 (talk) 07:10, 19 March 2012 (UTC)

For anyone who has a question about the correct number, try this: http://www.google.com/search?q=2^31= (or just put 2^31= into the search box).
Answer: 2^31 = 2,147,483,648.
31 binary bits can hold 2,147,483,648 values (from 0 to 2,147,483,647)
74.176.77.203, I have changed the page to your version.
I also think you deserve an explanation. (An explanation is not an excuse.) Wikipedia gets a lot of vandalism, and someone making his first edit an edit to a page that changes a number without any explanation is usually (but not always) a vandal, and the existing number is usually correct.
In this case, the existing number was not correct, and the first-time editor was not a vandal. In this case the problem is that I failed to check the number. For that I apologize. Please don't let my screw-up sour you on editing Wikipedia; we need people who find errors, write new content, etc. Again, I am sorry that I made that bad assumption. The page has been fixed. Please look it over and correct any other errors you find. --Guy Macon (talk) 12:40, 19 March 2012 (UTC)
Please stop removing the correct value... first try to inform yourself - the value 2147485547 is definitely correct --Schily (talk) 12:48, 19 March 2012 (UTC)
No it isn't. See http://www.epochconverter.com/ which says:
2147485547 GMT: Tue, 19 Jan 2038 03:45:47 GMT
2147483647 GMT: Tue, 19 Jan 2038 03:14:07 GMT
Please explain your math before reverting again --Guy Macon (talk) 13:07, 19 March 2012 (UTC)
Please follow the WP rules. You just entered definite vandalism in the article. Is it so hard to read: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/time.h.html --Schily (talk) 13:14, 19 March 2012 (UTC)
I searched the document you linked to for the strings "547" and "647". Neither is in the document. Please cite the exact paragraph you believe contains the 2,147,485,547 number. Then read WP:NOT VANDALISM. --Guy Macon (talk) 13:28, 19 March 2012 (UTC)
  • This is clearly an epoch issue, where one value (2,147,483,647) is based upon 0 AD and the other (2,147,485,547) upon 1900 AD. So who has a real, substantial ref for where the epoch begins with 32-bit timestamps?
Then a great big trout to the pair of you. This is too obvious to be arguing over it. Andy Dingley (talk) 13:36, 19 March 2012 (UTC)
Now it would be a nice gesture if User:Guy Macon reverted his last change to the article himself.... --Schily (talk) 13:42, 19 March 2012 (UTC)
Maybe, but neither of you are demonstrably correct until someone posts a credible ref for where the epoch begins. At this point I have no idea if he's correct or you are. Andy Dingley (talk) 13:50, 19 March 2012 (UTC)
See what happens when you explain your math instead of calling good faith edits vandalism, Schily? I just copyedited the page to remove the ambiguity. Please double check my numbers.
Here is a ref:
http://stablecross.com/files/End_Of_Time.html
"When converting from time_t to calendar form, Unix represents the year as an int (cf. tm_year in the tm struct). This imposes an upper bound on the value that can represent a year. A signed 32-bit integer has a maximum positive value of 2,147,483,647. Since mktime() and timegm() take year - 1900 as input, the largest year that can be represented is 2,147,483,647 + 1900 = 2,147,485,547." --Guy Macon (talk) 14:13, 19 March 2012 (UTC)
Added above ref to page. --Guy Macon (talk) 14:29, 19 March 2012 (UTC)
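A minimal C sketch of that limit, assuming a POSIX-style struct tm whose tm_year is an int counting years since 1900:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* tm_year holds years since 1900, so the largest calendar year
     * mktime()/timegm() can accept is INT_MAX + 1900. */
    printf("%lld\n", (long long)INT_MAX + 1900);  /* prints 2147485547 */
    return 0;
}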
Anyone with access to a Linux machine can rerun the following and verify the output. It seems the INT_MAX + 1900 limit is a genuine problem.
$ date --date='@67768036191673199'
Wed Dec 31 23:59:59 CET 2147485547
$ date --date='@67768036191673200'
date: time 67768036191673200 is out of range
$ date -d 'Wed Dec 31 23:59:59 CET 2147485547' +%s
67768036191673199
$ date -d 'Thu Jan  1 00:00:00 CET 2147485548' +%s
-67768040609741972
$ grep INT_MAX /usr/include/limits.h 
#  define INT_MIN       (-INT_MAX - 1)
#  define INT_MAX       2147483647
#  define UINT_MAX      4294967295U
$ echo 2147485547-2147483647 | bc
1900

Sami Kerola (talk) 18:33, 6 June 2012 (UTC)

#  define INT_MIN       (-INT_MAX - 1)
#  define INT_MAX       2147483647

These 2 defines seem to be defined in the wrong order o.0 unless you want INT_MAX re-declared... (which sounds unlikely to me) Divinity76 (talk) 20:14, 17 November 2012 (UTC)

Epochs and Ticks and Bits, oh my!

The 2038 problem is primarily a UNIX problem, but it also occurs in any program that uses the same epoch, the same tick, and the same number of bits as Unix. The program does not have to be an operating system; although many programs simply use the same format as the underlying OS, others calculate dates internally.

Here are some figures for various systems:

  Windows:    Jan 01, 1601 to Sep 14, 30828 @ 100 nanoseconds
  OpenVMS:    Nov 17, 1858 to Jul 31, 31086 @ 100 nanoseconds
  Mac OS:     Jan 01, 1904 to Feb 06,  2040 @ 1 second
  Unix/POSIX: Jan 01, 1970 to Jan 19,  2038 @ 1 second
  IBM S/390:  Jan 01, 1900 to Sep 17,  2042 @ 244.14 picoseconds
  MS-DOS:     Jan 01, 1980 to Jan 01,  2108 @ 2 seconds

A Google search on those dates gets you to some good sources. For example,

  Nov 17 1858 Jul 31 31086

Returns a lot of VMS documentation. --Guy Macon (talk) 15:12, 16 June 2012 (UTC)

any idea why I

— Preceding unsigned comment added by Divinity76 (talkcontribs) 20:14, 17 November 2012 (UTC)

UTC is outdated

The referenced overflow time in UTC is from a 2006 source. Since then, 2 leap seconds have been added. Does anyone have suggestions for keeping the UTC representation of the overflow up to date on this page... especially considering the fancy animation? Some magic script-foo in MediaWiki would be nice. Krushia (talk) 03:02, 24 February 2013 (UTC)

Erm, Unix time is an (imperfect) representation of UTC, so the overflow date/time in UTC is fixed. Plugwash (talk) 21:28, 7 April 2013 (UTC)

Automotive and other platforms

As a real-time embedded software engineer with a lot of experience in safety-related software, and automotive software in particular, I have to register an objection relating to this. Of course the article is correct that this would be a problem *IF* it happened. However, this simply is not the case. Some automotive control electronics may contain an RTC to track time with the ignition off, but this purely checks the *difference* between times and never the absolute time/date. Subtracting one time from another will always succeed, even if one has wrapped around, because the subtraction will also wrap around identically (see the sketch after this comment). As far as diagnostics goes, legislative on-board diagnostics does not consider absolute dates or times, and in fact any OBD system which did so for emissions-related faults would be illegal in most countries around the world.

I've therefore corrected this factual error. — Preceding unsigned comment added by 87.236.254.10 (talk) 17:29, 12 September 2013 (UTC)
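A small C illustration of the wraparound-safe difference described above (illustrative only, not any particular vendor's code); unsigned subtraction is defined modulo 2^32 in C, so the elapsed value stays correct even if the counter has wrapped:

#include <stdint.h>

/* Elapsed seconds between two readings of a free-running 32-bit counter. */
uint32_t elapsed_seconds(uint32_t start, uint32_t now)
{
    return now - start;  /* modular arithmetic absorbs any wraparound */
}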

Space aircraft and objects

What about these? I found none of them mentioned in the article, but let's see what happens in '38... In particular I'm thinking of satellites, telescopes, and... of course... space probes! While decades-old technology should have no problem with its ancient CPUs (think of the two Voyagers, which have now reached interstellar space!!), I wonder what might happen with technology that was sent into space only a few years ago. Maybe a controlling CPU will go totally wonky in early '38... -andy 77.190.0.72 (talk) 02:04, 14 November 2013 (UTC)

confusion under Linux

"The x32 ABI for Linux which defines an environment for programs with 32-bit addresses but running the processor in 64-bit mode also uses a 64-bit time_t. Since it was a new environment, there was no need for special compatibility precautions."

The first sentence is either missing some words, or the first "which" doesn't belong there. In either case, it doesn't explain how a 32-bit Linux system is or can be made resistant to the Y2038 problem. And there is no clear referent for the "it" in the second sentence. — Preceding unsigned comment added by Jcomeau ictx (talkcontribs) 03:24, 25 December 2013 (UTC)

THE animation

It might be my misunderstanding, but I just want to make sure it is not the contrary.

There is no problem with the signed binary "01111111 11111111 11111111 11111111" being equal to 2147483647 in decimal, but at the next count it becomes "10000000 00000000 00000000 00000000" in binary, which is, depending on the definition, -0 or -1 in decimal, not -2147483648 as it appears now in the animation. After overflowing into the 32nd digit, the number then counts -1, -2, ... .

While 0 refers to midnight Jan 1, 1970, and I am unsure whether -0 is defined, I believe negative time is counted backwards from the epoch. So the time interpreted by the system would not be the year 1901 anyhow. — Preceding unsigned comment added by 77.102.31.202 (talk) 18:56, 2 May 2013 (UTC)

read up on Two's_complement Plugwash (talk) 21:38, 2 May 2013 (UTC)
I agree with Plugwash. Two's complement is not the only way to represent numbers, but it is a very common method, and using the first bit as a sign bit is common. Further, the animated graphic's behaviour - advancing to a very negative number and then counting up towards zero - is exactly how it would work (see the sketch below). (I've adjusted the hyperlink in Plugwash's comment to be a more direct hyperlink.) 63.231.28.161 (talk) 22:33, 2 May 2014 (UTC)
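A small C demonstration of the two's-complement behaviour in question, showing that the bit pattern 1000...0000 is -2147483648 (not -0 or -1) and that counting continues upward towards zero, exactly as the animation shows:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t bits = 0x80000000u;  /* the pattern right after the rollover */
    int32_t value;
    memcpy(&value, &bits, sizeof value);  /* reinterpret as signed */
    printf("%ld\n", (long)value);         /* -2147483648 */
    printf("%ld\n", (long)value + 1);     /* -2147483647 */
    return 0;
}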

Article Name

How about we have this article be named something other than "Year 2038 problem"? Granted, at the time this comment was first made, that is nearly 25 years into the future. However, the name is going to be inappropriate: surely the year 2038 will have many other problems than just this one. Even "Year 2038 time problem" would be far narrower in scope. After all, the article is supposed to be *about* a specific implementation that experiences problems over time. The article is not supposed to *be* a specific implementation that experiences problems after a certain time.  :) I would actually prefer a much more descriptive title, like "Year 2038 time problem from 32-bit counters based on 1970 epoch", but that is unpleasantly (and arguably unusably) verbose, and I suspect that somebody can come up with a catchier name. 63.231.28.161 (talk) 22:50, 2 May 2014 (UTC)

The name parallels Year 2000 problem. Unless there is a more common way of referring to it, this is as good a name for the article as any.

potential solution

Couldn't the time just be set to 38 years in the past? And before you start a flame war or something, of course you would need to display the time as if it had not been reset, so the user gets the right time displayed and the applications can still run their timers and such. 89.244.78.212 (talk) 19:04, 10 June 2014 (UTC)

The only useful solution (that does not switch to a 64-bit time_t) I am aware of is in recent SCCS implementations. It folds the time between 2038 and 2069 to 1902..1932. Schily (talk) 12:33, 11 June 2014 (UTC)

Predicting the future?

Statements like this: "MySQL database's inbuilt functions like UNIX_TIMESTAMP() will return 0 after 03:14:07 UTC on 19 January 2038."

How do we know MySQL will even exist in 2038, let alone how a particular function will behave? Maybe the article needs to be rewritten with weasel words like "might" or "could, if they don't fix anything before then". 124.170.85.21 (talk) 21:05, 28 May 2014 (UTC)

How do we know MySQL will even exist? For the same reason 30+ year old systems still exist now. Do you think there will be some radical change in database structures that could ever viably replace systems like MySQL entirely? I don't. Not many places will want to go through the process of converting their bloated databases over to a new system just because it is newer. 24.181.252.197 (talk) 02:33, 14 July 2014 (UTC)

Deleted ref to Candy Crush

An editor came by and removed my addition of the reference to Candy Crush, with references (yet it looks like they went ahead and left the incorrect article title, which, by the way, was correctly referenced in the Candy Crush edit). I put it back. This article is bound to grow and be used by readers in the coming years (and maybe even by people who will fix the problem). I think that the reference to Candy Crush should remain in the article, since it is a very real example of what the problem does, and it is occurring now. 24.0.133.234 (talk) 12:50, 4 January 2014 (UTC)

Let's try to find some more generic examples. I suggest we remove the Nexus left-over bit because honestly, workarounds are present in most OSes. 87.64.73.248 (talk) 17:32, 3 February 2014 (UTC)

I tend to agree with 24.0.133.234's position. I, for one, am no proponent of Candy Crush. However, I think it is a popular program, and so it may be a relevant example of a fairly early instance where this impacted many people who may not have experienced the problem in other ways. Though, I'm not familiar with the specific Candy Crush issue, and Candy Crush does not appear in the current version of this document, so I may not know what I'm talking about. From a comment at [blog entry], the issue appears to be that people are artificially advancing their clocks to cheat at Candy Crush, and this has led to them advancing their clocks so much that their clocks are now 25 years into the future. Actually, that's rather humorous, and I think it even (rightly or wrongly) says something about humanity... 63.231.28.161 (talk) 22:50, 2 May 2014 (UTC)

I agree with this as well - although I haven't looked at what the issue is or how advancing the clock allows one to cheat.
But it's an example of how such a time bug could affect something that seems like it shouldn't need references to ANY amount of time longer than about an hour (how long is the typical game?). But then, I don't even know how Candy Crush is played at all, so... (I'm starting to sound like my grandmother :) Jimw338 (talk) 18:04, 20 February 2015 (UTC)

I don't really see the point in having the Nexus 7 reference specifically, given that you can change the time on any device you can play said games on. Joho506 (talk) 19:12, 21 November 2014 (UTC)

Synthesis

The part about games/apps that impose waiting periods (in the "Early problems" section) contains what seems like synthesis. It provides a citation that such waiting periods exist, and a citation that mobile devices are affected by this issue when the date is set ahead, but no citation that anyone is actually running into this problem because of the former.

flarn2006 [u t c] time: 08:33, 9 May 2015 (UTC)

Vulnerable Systems

(Request for citation of vulnerable file types) The currently popular rrdtool file format stores timestamps internally as a 32-bit time_t. This bug is listed as a problem to be solved in RRDTool 2.x (the current version is 1.5): https://github.com/oetiker/rrdtool-2.x/issues/22 130.216.66.3 (talk) 23:03, 8 June 2015 (UTC)

More Widespread Implications

Shortly after we cleared all of our Y2K hurdles, I told my partner that we needed to watch for the "ticks since 1970" counters, which can be found to this day in modern high-level languages like C# and C++. In 2004 I ran the numbers and came up with the year 2038, which of course meant that we had a lot of breathing room. For some reason I thought of it again today and see that the issue has gained better acceptance, but I see references to "miscellaneous, embedded, military, etc.", which may be true, but I am cautioning that it is more widespread than that.

An easy solution would be to move from the signed integer (limit 2,147,483,647) to unsigned (4,294,967,295)... that would get us to the year 2106. — Preceding unsigned comment added by Reskin (talkcontribs) 14:40, 31 December 2014 (UTC)

Such proposals for solving the problem may be typical of people who write code in languages whose users typically do not implement handling for errors or corner cases.
For normal UNIX commands, the time_t range needs to cover the time 1970-01-01 00:00:00 in all time zones of the world, so time_t needs to be able to represent the value -46800. For other commands - such as the SCCS suite - an even larger time range is documented: SCCS is documented to cover the range of 1969 to 2068. This will not work if time_t is unsigned.
There is, however, a feasible solution available already. As POSIX defines a time range from 1969 to 2068, it would be nice if SCCS were able to cover this time range even for 32-bit programs. A useful solution was implemented in SCCS by mapping the range 2038/01/19 03:14:08 ... 2068/12/31 23:59:59 onto 1901/12/13 - 1932/11/25. In other words, the range 0x80000000 ... 0xba393f8f is defined to be positive, and this method allows supporting a range from 1933 to 2068 with 32 bits, in a way that does not require completely rewriting functions like localtime(). A sketch follows below. Schily (talk) 17:21, 2 January 2015 (UTC)
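A small C sketch of that window technique (illustrative, based on the figures above, not the actual SCCS source): raw 32-bit values up to 0xBA393F8F are taken as positive seconds since 1970 (reaching the end of 2068), and the remaining values wrap to negative, reaching back to about 1933; the price is that the years 1901-1932 can no longer be represented directly.

#include <stdint.h>

int64_t widen_time32(uint32_t raw)
{
    if (raw <= 0xBA393F8FUL)
        return (int64_t)raw;              /* 1970 .. end of 2068 */
    return (int64_t)raw - 0x100000000LL;  /* wraps negative: ~1933 .. 1969 */
}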

The interesting thing is that the C language (if strictly observed by applications) doesn't have this problem. The ISO standard for C (ISO 9899:1999, section 7.23) states that time_t's range and precision are implementation-defined. Applications should not assume any particular bit width, units, or base epoch. It must be an arithmetic value (so larger numbers represent points further in the future than smaller numbers), but that's pretty much where the requirement stops. All of the remaining requirements are placed on the library functions that manipulate time_t, which you must use if you want to get real-time information about a time_t value. This means using difftime to compare time_t values and get a difference in seconds (simple subtraction may not work, because the units are arbitrary), mktime to construct a time_t from a struct of real-world values (which uses int for the year, relative to 1900, so even a 16-bit system would have a 64K-year range), gmtime and localtime to convert time_t to that struct, etc. Applications that need to store time values persistently or transmit them over a network must either use the struct tm structure or an application-specific representation - they should not assume that the definition of time_t is universally identical, or even consistent between two runs of the application.

Unfortunately, POSIX decided to impose a specific definition, so we now have decades of software that (in full conformance to POSIX) will break in 2038 if they can't be upgraded in a way that uses a different non-portable representation.

I'm not sure if there's an appropriate place in the article to express this, but I think it should be mentioned somewhere. Shamino (talk) 14:15, 30 July 2015 (UTC)
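A minimal sketch of the strictly portable style described above, which assumes nothing about time_t's width, units, or epoch and lets the library functions do all the interpretation:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct tm when = {0};
    when.tm_year = 2038 - 1900;  /* struct tm years are relative to 1900 */
    when.tm_mon  = 0;            /* January */
    when.tm_mday = 19;
    when.tm_isdst = -1;          /* let the library decide DST */

    time_t target = mktime(&when);  /* interpreted in local time */
    if (target != (time_t)-1) {
        time_t now = time(NULL);
        /* difftime, not subtraction: the units of time_t are unspecified */
        printf("seconds remaining: %.0f\n", difftime(target, now));
    }
    return 0;
}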

From the perspective of the standard, time_t may in theory be a float, but all currently known implementations use an integer type - specifically a signed long, with the exception of Tru64, where time_t is an int. See my comments above for why time_t needs to be able to become negative. Schily (talk) 14:37, 30 July 2015 (UTC)

From the perspective of the C-language standard, there is no problem. time_t can be a float. It can be a count of something other than seconds. It can have a different epoch. That epoch might even change with each run (e.g. time since system-start or time since application-start.) As long as the standard library functions do the right thing (and applications don't make assumptions, of course), then it's all good.

But from the perspective of the POSIX standard, there's no such luxury. It does mandate an integer count of seconds since the 1/1/1970 epoch. So any fix other than migrating to 64-bit will, by definition, violate the standard. Shamino (talk) 16:07, 30 July 2015 (UTC)

I don't get how changing time_t from unsigned to signed would magically double its number of possible values… 32 bits has 4294967296 possible values, you can't just double that by signing it, it doesn't work like that… Baptistepouget (talk) 18:06, 13 November 2016 (UTC)

Paul Ryan

Amusing though the paragraph on Paul Ryan is, the reference provided does not actually say that the 2038 problem is what caused the computer to crash. This feels like an assumption. — Preceding unsigned comment added by Steely panther (talkcontribs) 01:02, 16 June 2017 (UTC)

Tests on Windows XP 32-Bit version

My computer currently has the 32-bit version of Windows XP on it and I have just conducted a couple of experiments on it.

In my first experiment, I set the time and date to a couple of minutes before 03:14:07 UTC on 19 January 2038 and waited to see what would happen. The time just ticked on by and went past the dreaded limit without any problems whatsoever. (At least nothing I noticed.)

In my second experiment, I set the year to 2050 and I created a new text file on my desktop. The time stamp of the text file correctly showed as being in 2050. I then composed a new test email in Outlook Express and saved it into the Drafts folder. The time stamp of this new email also correctly showed as being in 2050.

From the above results, it does not appear that the 32-bit version of Windows XP will be affected all that much by the 2038 problem. I would be very grateful for any comments on what I have written here. Are there any other relevant experiments I should try?

81.149.43.187 (talk) 16:17, 28 October 2016 (UTC)

This isn't surprising. The Win32 API doesn't use a 32-bit count of seconds. The GetSystemTime function fills in a caller-supplied SYSTEMTIME structure, in which the year is a 16-bit integer ranging between 1601 and 30827.
File-system timestamps are accessed with the GetFileTime function, which fills in caller-supplied FILETIME structures. A FILETIME is a 64-bit integer (represented as a structure of two 32-bit integers) tracking File Time, a count of 100 ns intervals since an epoch of 12:00 A.M. January 1, 1601. As such, it has a range of approximately 58,453 years. (Of note, half of this (the maximum signed offset) is 29,226 years; added to the 1601 epoch, that gives 30,827 - the limit of the SYSTEMTIME structure. So the two representations are in alignment with each other.)
It is my understanding that Windows uses the File Time representation internally throughout the kernel.
Applications that work with Win32 APIs and data types should therefore not have a Y2038 bug. It is probably not necessary for applications to be concerned with these representations reaching their limits (in the year 30,827).
Of course, applications that translate Windows representations into a 32-bit count of seconds (e.g. for a 32-bit C compiler that uses a 32-bit time_t) will suffer from a Y2038 problem, but that's a problem with the compiler or application, not Windows itself.
Shamino (talk) 15:31, 28 June 2017 (UTC)
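An illustrative conversion between the two worlds, showing where a Y2038 bug can creep in; the 11,644,473,600-second constant is the gap between the 1601 and 1970 epochs:

#include <stdint.h>

/* Convert a Windows FILETIME value (100 ns ticks since 1601-01-01)
 * to Unix-style seconds since 1970-01-01. If the 64-bit result is
 * then stored in a 32-bit time_t, the Y2038 truncation described
 * above appears - in the conversion layer, not in Windows itself. */
int64_t filetime_to_unix_seconds(uint64_t filetime_100ns)
{
    return (int64_t)(filetime_100ns / 10000000ULL) - 11644473600LL;
}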

death second?

pretty much the death second for all electronics :P — Preceding unsigned comment added by 100.6.110.197 (talk) 13:55, 4 July 2017 (UTC)

I feel you, bro. BTW, Y2K sounded way cooler; it's a shame that they couldn't come up with a cool name, perhaps U32 or (as you mentioned) Death Second. 24.46.181.234 (talk) 13:04, 16 July 2017 (UTC)

8-bit/16-bit Processors

8-bit or 16-bit embedded processors can work just fine with, e.g., 64-bit or 128-bit timestamps. An 8-bit processor is not more vulnerable than any other system; it is a software problem. — Preceding unsigned comment added by 92.218.105.200 (talk) 22:26, 19 July 2017 (UTC)

Nuclear paragraph

I removed the following paragraph which is a dubious WP:OR synthesis:

"Most worrisome are the command and control systems for nuclear weapon and power systems. These systems were largely designed and built starting in the 1970s and are still in use today. In The United States the Government Accountability Office (GAO) released a May 2016 report GAO Report on Legacy Systems noting that the Strategic Automated Command and Control System, (SACCS) which is responsible for coordinating "the operational functions of the United States’ nuclear forces, such as intercontinental ballistic missiles, nuclear bombers, and tanker support craft" "runs on an IBM Series/1 Computer". Which is a computing system built in the 1970s and uses 8-inch floppy disks. Due to state secrecy it is not publicly known what a Unix epoch date reset might do to the control logic of the defense systems of nuclear states like the United States and the former Soviet Union but the maximally negative outcome is an automated Dead Hand strike due to an epoch vulnerable component of the control system going offline and that status being perceived by the system as indicating a successful attack that must be retaliated against.[1][2]"

Neither of the references mentions Unix, much less dates. IBM Series/1 minicomputers did not use Unix and the GAO report referenced says they will be replaced by 2020.--agr (talk) 17:41, 9 February 2017 (UTC)

Agree this was definitely out of scope. — JFG talk 10:52, 10 February 2017 (UTC)
It *IS* a problem because the IBM Series/1 and the 8-inch floppy disks are still using 12-bit FAT12. Hence, the Year 2038 problem. Unlike FAT32 systems, which will suffer from the Year 2100 problem. --79.242.203.134 (talk) 09:45, 16 July 2017 (UTC)
For those who might want to investigate this further, most nuclear plants would be using QNX as their RTOS, as it was available long before Linux became a viable installation choice. You would think (but a citation is needed) that anything with an ongoing QNX support contract would have this problem dealt with. A cursory search appears to indicate that this is the case from QNX4 onward. And Red Hat would certainly be aware of this as well for anything they support. Nodekeeper (talk) 14:33, 1 January 2018 (UTC)

need explanation of time_t

or link to an article with relevant information 47.140.183.36 (talk) 16:41, 18 April 2018 (UTC)

time_t is simply an abstract type used for the time. It can be defined as either a 32-bit, 64-bit, or other type of integer via a typedef. This method allows you to have a single keyword in the code for a type which can become different types at compile time, such as giving 64-bit machines a 64-bit integer and 32-bit ones a 32-bit integer (see the sketch below). It's common to denote abstract types using _t at the end. For example, I've worked with an abstract real_t type, which is an abstract real-number type (it can be defined via typedef as float or double depending on the required precision). Aaronfranke (talk) 03:41, 17 May 2018 (UTC)
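A hypothetical illustration of that typedef mechanism (the names here are invented; real systems define time_t in <time.h>):

#include <stdint.h>

#if defined(BUILD_64BIT_TIME)
typedef int64_t my_time_t;  /* wide counter: safe far beyond 2038 */
#else
typedef int32_t my_time_t;  /* narrow counter: overflows on 2038-01-19 */
#endif

/* The same source code works at either width, chosen at compile time. */
my_time_t add_seconds(my_time_t t, my_time_t delta)
{
    return t + delta;
}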
time_t, as defined by the C language spec (ISO 9899.2011, section 7.27.1) is extremely abstract. It only says that it is a "real type capable of representing times" and "The range and precision of times representable in clock_t and time_t are implementation-defined.". So as far as C is concerned, there is no "year 2038" problem, because the epoch, range and precision can be literally anything. It doesn't need to be an integer, and the epoch doesn't have to be fixed. The only requirement is that it is an arithmetic type and that the library functions that manipulate time_t (difftime, mktime, time, ctime, gmtime and localtime) all behave correctly within a single run of the program. And there have been variations over the years (e.g. some MS-DOS C compilers used the FAT filesystem's timestamp representation of 2-second intervals since a January 1, 1980 epoch in the local time zone).
The year-2038 problem exists because POSIX (probably in an effort to maintain backward compatibility with legacy software making unwarranted assumptions about the definition of time_t) defined all those attributes that C considered "implementation-defined". It defines time_t as a signed integer count of seconds since the UNIX epoch of midnight, January 1, 1970, UTC. The only thing POSIX does not specify is the width of the integer, recognizing the fact that different processors have different-sized machine words. So it would be 32-bit on 32-bit systems and 64-bit on 64-bit systems. And on various (historic) architectures with other word sizes, it would be other sizes. The year-2038 problem manifested because of the tremendous amount of legacy 32-bit code and networking protocols that exist and may not be upgraded or replaced before 2038. Shamino (talk) 15:52, 17 May 2018 (UTC)

Year 2262 problem?

Some programs work with the time in nanoseconds using a 64-bit integer. In the year 2262, the time in nanoseconds since 1970 will exceed the limitations of a signed 64-bit integer. Is this worth mentioning, or perhaps worth creating another article for? Could this have a significant impact unless we store the time in nanoseconds using 128-bit integers? Aaronfranke (talk) 03:41, 17 May 2018 (UTC)
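A back-of-envelope check of the 2262 figure, as a small C program (assuming a signed 64-bit nanosecond counter with a 1970 epoch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* How many years of nanoseconds fit in a signed 64-bit integer? */
    double years = (double)INT64_MAX / 1e9 / 86400.0 / 365.25;
    printf("~%.1f years -> roughly the year %d\n", years, 1970 + (int)years);
    /* prints about 292.3 years -> roughly the year 2262 */
    return 0;
}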

@Aaronfranke: There is currently a very brief section on this issue at Time formatting and storage bugs#Year_2262 (linked from See also). It is a long way from a full article but that would be a good place to add any additional information on this issue. -LiberatorG (talk) 04:31, 17 May 2018 (UTC)
@Aaronfranke: But note that this is not being used for the OS's internal time (e.g. time_t) so it's not likely to be as damaging as year-2038 and 32-bit integers. But you're absolutely correct that applications using this representation will need to migrate to something else within the next 244 years. One would like to hope that everything will be upgraded by then, but we've been bitten by similar assumptions in the past. Shamino (talk) 15:58, 17 May 2018 (UTC)

"Start time"

It's not made clear, or explained, why the 'start time' of "1 January 1970" was picked or came about.

Why can't other "start times" be programmed/written/coded in new programs?

Obviously, I'm not a programmer/coder and much of this article makes no sense to me.

First, it was "Y2K" (which was an easily fixed programming oversight), and now it's this. 2600:8800:784:8F00:C23F:D5FF:FEC4:D51D (talk) 05:50, 14 March 2019 (UTC)

It is arbitrary, but changing the epoch date would break a lot of software and would not really solve the problem.
The C language actually doesn't specify anything about the time_t data type other than the fact that it is "arithmetic". It could have any epoch and count any interval. The only criterion is that the library functions working on it (e.g. to decompose it into structures containing real-world values like year, day and date) do the right thing.
That having been said, UNIX systems have always used 1/1/1970 (with an increment of 1s for time_t) - probably because it is close to the date when the system was invented. And this was standardized into POSIX. As such, most software developed for UNIX assumes that this epoch will never be different. Changing it will break all that software. It would be an extreme cost to fix the code for all those programs - especially ones where the original developers haven't worked on it for a very long time, or where the code is abandoned or no longer understood.
And even if you change the epoch and fix all the software, all you do is kick the can a little further down the road, forcing another fix later on. Fixing it by using a larger integer (e.g. a 64-bit value) also requires fixing legacy software, but fixing it in that way kicks the can so far down the road (thousands of years) that it is probably reasonable to assume another fix won't be necessary within the software's lifetime. Shamino (talk) 12:35, 15 March 2019 (UTC)
Not only would changing the epoch require updating the operating system and all applications simultaneously, but times stored on disk are also often stored as a number of seconds since the epoch. So after changing the epoch from 1970 to 2000 and plugging in your external disk you might find that all of the files have timestamps 30 years off, or your accounting system may suddenly decide to send statements to your customers with all of their March 1989 account activity, because it appears to be current activity due to having the same timestamp as this month. LiberatorG (talk) 23:25, 15 March 2019 (UTC)

Undefined Behaviour

As time_t is a signed type on POSIX, and overflow of signed integers is undefined behaviour in C, time doesn't necessarily wrap around. The usual interpretation instead is that demons will fly out of your nose and the world will end. Martin.uecker (talk) 15:20, 14 October 2018 (UTC)
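A minimal C sketch of the undefined behaviour in question; on common two's-complement systems this usually prints INT_MIN (i.e. 13 December 1901 as a timestamp), but the standard permits literally anything, hence the nasal demons:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int t = INT_MAX;  /* stand-in for a 32-bit time_t one tick before rollover */
    t = t + 1;        /* signed overflow: undefined behaviour in C */
    printf("%d\n", t);
    return 0;
}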

Addressed. --151.70.25.1 (talk) 08:10, 13 December 2019 (UTC)

1901

At Year 2038 problem/Archive 2#Network Time Protocol timestamps, However, after a wraparound the clients can still face two problems: ... 2. When a client tries to adopt this time and store it in UNIX time format, as many embedded systems do, it will fail because UNIX time starts at 13 December 1901 (signed 32 bit integer) or 1 January 1970 (unsigned 32 bit integer), which has carried a citation-needed tag since December 2019.

The 2038 problem occurs for UNIX systems that have a signed (not unsigned) 32-bit time_t with an epoch of 1970-01-01.

The 4th 'graph at Year 2038 problem/Archive 2#Cause mentions (uncited) one possible implementation scenario that can cause the first second after the rollover to represent 1901-12-13 (2^31 seconds before 1970-01-01), but I don't see how the quote above can be interpreted to mean that.

The language was originally added over three years ago by an IP editor, whose first language may not have been English, here —[AlanM1(talk)]— 00:20, 16 January 2020 (UTC)

Unix-Timestamp: Signed long integer?!

Is there any source which confirms the statement: "Unix stores timestamps as signed long integers". Why should anyone use a signed integer for an unsigned timestamp?! --94.134.251.62 (talk) 00:40, 21 January 2021 (UTC)

Unix isn't my strong suit, so I don't know where to look for the source you would like. But it strikes me that once a programmer takes the trouble to learn the C library functions related to working with the system timestamps, the programmer might want to use those same functions on general-purpose date calculations, including events that occurred before 1970. To some degree, those events could be handled with negative numbers. Jc3s5h (talk) 01:00, 21 January 2021 (UTC)
Jc3s5h is on the right track. Suppose a unix system needs to store "July 20, 1969 at 02:56:15" (when Neil Armstrong set foot on the moon). This is 14,245,425 seconds before Jan 1, 1970. If time was stored as unsigned, there would be no way to store this in a unix system. But by using signed you can store this as a negative number, -14,245,425 seconds relative to Jan 1, 1970. I'm a professional software developer with lots of experience in this area. netjeff (talk) 04:29, 21 January 2021 (UTC)
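A small C illustration of that negative-timestamp case; whether pre-epoch values work is implementation-defined, but common implementations such as glibc handle them, and the figure below is the one from the comment above:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t t = -14245425;  /* seconds before the 1970 epoch */
    struct tm *utc = gmtime(&t);
    if (utc != NULL)
        printf("%04d-%02d-%02d %02d:%02d:%02d UTC\n",
               utc->tm_year + 1900, utc->tm_mon + 1, utc->tm_mday,
               utc->tm_hour, utc->tm_min, utc->tm_sec);
    return 0;
}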

time_t convert to int

I believe that changing time_t to 64-bit does not solve the problem. I have already encountered a problem in my program where the time returned from a time service was stored in an int. In fact, a C compiler will not emit an error if time_t is cast to int; this is valid. The long will be silently converted to int, and it will work correctly only until the 32-bit range runs out in 2038. — Preceding unsigned comment added by Marekmosiewicz (talkcontribs) 07:58, 17 March 2021 (UTC)
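A small C demonstration of that silent narrowing (assuming a platform with a 64-bit time_t and a 32-bit int):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);  /* 64-bit on modern platforms */
    int t32 = (int)now;       /* narrowing cast: no error, often no warning */
    /* Both values agree until 03:14:07 UTC on 19 January 2038; after
     * that, the int silently holds a wrapped, meaningless value. */
    printf("time_t: %lld  int: %d\n", (long long)now, t32);
    return 0;
}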

In that section, Little rip is a link that leads to nowhere and that apparently was never created nor deleted? I ask that the link be revised or deleted, please. As always, --Su si eik wjywa6 (talk) 07:58, 14 June 2021 (UTC)

@Su si eik wjywa6: Wrong page - you're talking about Template:Global catastrophic risks, not about this page, so you should comment on Template talk:Global catastrophic risks, not here. Guy Harris (talk) 06:00, 15 June 2021 (UTC)
Oh, sorry, I'll go comment there. — Preceding unsigned comment added by Su si eik wjywa6 (talkcontribs) 10:26, 15 June 2021 (UTC)

Not yet

This is about something in the future. Therefore, this isn't suitable for Wikipedia by WP:CRYSTALBALL. Nononsense101 (talk) 15:11, 1 October 2021 (UTC)

This article is about expert predictions, not about the future. There's a difference between a prediction based on calculation and fortune-telling. It's just like this article. Ahendra (talk) 10:22, 21 October 2021 (UTC)

New solutions

MySQL has a fix for 64-bit platforms. While editing, this information felt out of place both inside the "Vulnerable systems" and the "Possible solutions" sections. The latter section seems to be focused on C and on how operating systems are handling the issue. Between the two, that's the more appropriate one to include the new info in.

However, it's still out of place, in my opinion.

What can we do about this?

Tirinaldi (talk) 16:55, 13 November 2021 (UTC)

What about introducing a time_t offset, accessible through another system call?
Legacy time_t would then express only a relative time, for example time since system reboot (like systems without an RTC, which always reboot at 1970).
API calls using time_t could internally take the offset into account or not (depending on system configuration).
Updated, non-legacy software would use 64-bit dates (or keep using time_t, but taking into account the offset value obtained from the other system call).
In this way, merely updating the system would probably mitigate most problems. Only legacy software that persistently stores 32-bit timestamps would remain somewhat prone to failures, and those would occur mostly at or shortly after reboot. A sketch of the idea follows below.
Another option (although more complicated and more error-prone) is making time_t express a relative time not since startup, but relative to another point that the system chooses. The system would update the offset periodically, say once a year or once a decade (for example, to express a date in 2039: a time_t for 1981 plus an offset set in 2028).
And related to this last one, introducing this change in networked systems that synchronize time between them should protect non-updated, fully legacy systems. I think this is also possible but more delicate/complicated: legacy datagrams would use a "safe" value of time_t, and a new datagram type could carry the offset. The problem is when to change the offset value; the network operator should reset time_t whenever the whole network is in maintenance, giving the operator a huge window of years between resets. In this way, legacy systems would live with a safe time_t while not knowing the true date. On the other hand, updated systems could take the offset into account but expose to the legacy software running on them a different pair of time_t + offset. Iagofg (talk) 10:27, 26 August 2022 (UTC)
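A hypothetical sketch of that offset scheme (all names invented for illustration): the system keeps a wide epoch offset, legacy code keeps seeing a small 32-bit "relative" time, and updated code recombines the two.

#include <stdint.h>

static int64_t epoch_offset;  /* e.g. true seconds from 1970 to the last reset */

int64_t real_from_legacy(uint32_t legacy_t)
{
    return epoch_offset + (int64_t)legacy_t;   /* true seconds since 1970 */
}

uint32_t legacy_from_real(int64_t real_t)
{
    return (uint32_t)(real_t - epoch_offset);  /* what legacy software sees */
}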