Have we ventured too far?

Last week, Geoffrey Hinton, the expatriate British computer scientist, in an interview with the New York Times (published on Monday), revealed his growing fears about developments in the field of artificial intelligence (AI). Hinton, who is often referred to as the ‘Godfather of AI’, tendered his resignation from Google last week, and joined an ever-expanding list of worried expert voices on the future direction of AI.

Initially, Hinton studied experimental psychology at King’s College, Cambridge. As a graduate student at the University of Edinburgh in 1972, Hinton embraced the concept of a neural network, an idea which at the time failed to attract the attention of many researchers. The development of the neural network, a mathematical system which learns skills by analyzing data, became his life’s pursuit. After being awarded a PhD from Edinburgh in 1978, Hinton lectured in computer science at Carnegie Mellon University in Pittsburgh, USA. In the 1980s, most of the funding available for research in AI came from the US Department of Defense, and, thus, Hinton, who is fervently opposed to the application of artificial intelligence in war, moved to Canada.

In 2018, Hinton and two of his colleagues at the University of Toronto received the Turing Award – the ‘Nobel Prize’ of computing – for their research on neural networks. The trio created a neural network that taught itself to identify common objects such as dogs, cars, and flowers after analyzing thousands of photographs. Five years ago, tech companies such as Google, Microsoft and OpenAI started building neural networks, called large language models (LLMs), which learned from enormous quantities of digital text to generate text on their own, including computer programs. This development helps computer programmers and writers to generate and execute ideas more quickly. However, experts have warned that LLMs can learn unwanted and unexpected behaviours, which can spawn false, biased and harmful information. These experts also note that as systems become more powerful they will introduce new risks.

Hinton is clearly worried about what his life’s work will generate from here on. “Maybe what is going on in these systems is a lot better than what is going on in the brain. Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary,” Hinton opined.

So scary that after the San Francisco-based tech company OpenAI released a new version of ChatGPT in March, more than 1,000 (the number has since swollen to over 27,000, the New York Times reported on Monday) technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because AI presented “profound risks to society and humanity. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter read in part.

A few days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, added their voices of concern over the future of AI in another letter. Now, having departed from Google after ten years, Hinton is laying his own concerns on the table as well.

“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton observed. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Following up on the NY Times interview, BBC News on Monday evening asked Hinton to elaborate on what he meant when he said “bad players” would try to use AI for “bad things”. He responded, “This is just a worst-case scenario, kind of a nightmare scenario. You can imagine, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals.”

Worst-case scenario or not, the ‘Godfather of AI’ has painted a rather bleak picture of the weak grasp humanity has on the reins of control over the immediate direction and distance of AI. Hinton conceded, “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

Now saddled with the stark reality of his life’s work, Hinton’s words sound like the echoes of a fellow scientist, the American theoretical physicist J. Robert Oppenheimer, one of the fathers of the atomic bomb. Just 11 days after the bombing of Hiroshima, on August 17, 1945, Oppenheimer wrote to the US government expressing his wish for the banning of nuclear weapons. He is famously attributed with uttering a quote from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” Oppenheimer spent the latter decades of his life as a campaigner for nuclear disarmament. Hinton might not enjoy such an epilogue: the genie is out of the bottle, and it is too late to return it. Humanity now has another woe of its own creation on the horizon to accompany the ills of climate change.