Y2K Bug

Twenty years ago today, at the stroke of midnight at the turn of the century, world leaders braced themselves for the start of potential chaos within their borders. Armies were on standby as nations prepared for the possibility of massive malfunctions within industries heavily dependent on computer technology. Sources of power (nuclear plants and generating stations), hospitals, airlines and banks garnered intense scrutiny as the midnight hour, the deadline for the Y2K bug, approached.

The Y2K Bug or Millennium Bug, as computer programmers referred to the phenomenon, was the projected fear that computer systems around the world would cease to function as the date changed to the new century. In the 1960s and 1970s, programmers developing code for complex systems used two digits to represent the year; for instance, 1960 was written as ‘60.’ This coding technique was adopted mainly to save costs, since storage space was scarce and expensive at the time, with one kilobyte costing as much as US$100. In addition, programmers, well aware of the compromise, had envisioned that their code would long since have been replaced by the turn of the century.

In 1993, an article in the influential weekly magazine Computerworld, entitled ‘Doomsday 2000’ and written by Peter de Jager, a Canadian IT consultant, highlighted the fact that when the date changed to 1st January, 2000, software could interpret the year as 1900, causing computers to start making errors or simply crash. “The crisis is very real,” were the words de Jager chose to describe his projection. At the time, mainstream awareness of a possible problem on the horizon was almost non-existent. Others, such as the New York Stock Exchange, had foreseen the snag as far back as 1987 and had pre-empted it with a team of 100 programmers at the seemingly exorbitant cost of US$29 million.

By the mid-1990s, the Y2K Bug had gone global. Governments around the world began to take note of how dependent society had become on the computer. Task forces were assembled, and there were massive outlays to tackle the impending bug; one estimate put the figure at US$600 billion worldwide. The solutions implemented included rewriting programmes to store the date with four digits and amending the algorithm used to calculate leap years so that 2000 would be recognized as a leap year. Contingency plans were also developed for what might happen if these solutions failed.
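The leap-year fix deserves a word of explanation. Under the Gregorian calendar a year is a leap year if it is divisible by 4, except for century years, which must also be divisible by 400; some systems had implemented only the first exception, wrongly making 2000 a common year. A minimal sketch of the faulty and corrected rules (both functions are illustrative, not drawn from any particular system):

```python
def is_leap_incomplete(year):
    """Incomplete rule some systems used: every fourth year is a leap
    year, except century years -- omitting the divisible-by-400 case."""
    return year % 4 == 0 and year % 100 != 0

def is_leap_correct(year):
    """Full Gregorian rule: divisible by 4, except century years,
    unless the year is also divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_incomplete(2000))  # False -- the bug: 2000 treated as common
print(is_leap_correct(2000))     # True  -- 2000 is divisible by 400
print(is_leap_correct(1900))     # False -- 1900 genuinely was not a leap year
```

Systems with the incomplete rule would have skipped 29th February 2000 entirely, throwing off every date calculation from that day onward.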

The 31st December, 1999 arrived. In North America, concerned consumers withdrew large sums of cash from their savings accounts and hoarded food and water while waiting in fear of the doomsday hour. As world leaders held their collective breath, midnight arrived in Australia, where the government had expended large sums in preparation. Computers there continued to function as normal.

In Italy and Japan, both Y2K skeptics that had taken very few precautions and expended very little money, party-goers at Millennium Balls toasted the dawn of a new century. In Italy, whose government had been accused in a BBC News report, ten and a half months earlier, of taking the ‘ostrich approach’ to the Y2K problem, computer systems were simply shut down and rebooted after midnight. In Japan, where many computers used an alternative imperial date system, there were no problems to report.

Y2K and the Millennium Bug have since become everyday catchwords for fraud and hysteria. The lingering question is, ‘Was Y2K a hoax, or was the problem resolved by the massive amount of skilled preparatory work done by the contingents of coders?’ Based on Italy’s approach, it is easy in hindsight to gravitate to the idea of a scam; on the other hand, if nothing had been done and there had been glitches in critical areas such as power generation or transportation, one can only imagine the chaos that would have ensued.

Twenty years later, a lasting legacy of the Y2K resolution (one way or another) is the total confidence we now place in computers, with code and mysterious algorithms influencing every facet of our daily decision-making.

Should we have interpreted the Y2K phenomenon as a warning that we were becoming too dependent on computers? We seem no longer capable of predicting when things will go wrong with our computers, or of fixing them when they do. Software glitches have been identified as partly responsible for the recent crashes of the Boeing 737 Max planes in Indonesia and Ethiopia.

Are we losing control of our lives to computers that have programmed themselves to run on apps developed by engineers who cannot fully explain their functioning?