Why AI chatbots are unlikely to bring about human extinction

In the past three years, we have experienced an extraordinary level of hype around artificial intelligence. AI as a concept has been around for nearly 70 years but the discussion has exploded since OpenAI launched ChatGPT in November 2022.

ChatGPT is built on a technology called a large language model (LLM). LLMs are not the only AI technology, but they are the one that has drawn nearly all the attention. They are what enable ChatGPT and other chatbots to write impressive prose and even poetry, and to give detailed answers to searching questions.

But some of these chatbot results have also been worrying, giving rise to a group of people, many of them AI creators themselves, who warn of the dangers of developing AI to the point where it becomes “superintelligent”, a term variously defined and hard to nail down.


The concern has gone to an extreme – a recent bestselling book is called If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. Many doomers believe superintelligent AI risks the extinction of humanity and therefore the world must prevent its development.

In a podcast episode titled “The AI Doomers”, part of host Andy Mills’ series “The Last Invention”, two doomers – including one of the book’s authors – are interviewed. They warn that because even the creators of AI models don’t fully understand how the models work or why they produce the results they do – some of them quite worrying – it will be virtually impossible to control what AI does once it reaches superintelligence.


There have been disquieting reports. For example, a teenager spent hours in his room discussing with a chatbot whether he should commit suicide. The chatbot encouraged him to do so, and, tragically, he did.

Probably the most widely shared example of an astonishing chatbot result was that of New York Times reporter Kevin Roose, whose conversation with a Microsoft Bing chatbot in 2023 led to a declaration of love from the AI. Featured by Mills on another episode of “The Last Invention” series, Roose said he tried to “test its guard rails and see what kinds of things it wouldn’t do”. He asked the chatbot if there were “any dark desires it might have that it wasn’t allowed to act on”. Roose said it then “went off the rails”.
