
AI Poses Human Extinction Risk, Sam Altman and Other Tech Leaders Say

OpenAI CEO Sam Altman

Sam Altman, CEO of ChatGPT maker OpenAI, and executives from Google's AI arm DeepMind and from Microsoft stated in an open letter that AI poses a risk of human extinction.

What is the Complete News?

Sam Altman, CEO of ChatGPT maker OpenAI, along with several other tech leaders, warned that artificial intelligence could lead to human extinction and that reducing the technology's risks should be a global priority.

The warning came in an open letter, which states that mitigating the risk of extinction from artificial intelligence should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Altman was not the only one to back the letter; executives from Google's AI arm DeepMind and from Microsoft also signed it.

ChatGPT

The technology has gathered pace in recent months, following the release of the chatbot ChatGPT in November. ChatGPT reached an estimated 100 million users within about two months of launch, and its ability to generate human-like responses has amazed researchers and the general public alike.

The chatbot has shown that artificial intelligence can imitate humans and could take over some jobs. At the same time, the letter's organizers say it can be difficult to voice concerns about some of advanced AI's most severe risks.

The letter aims to overcome that obstacle and open up the discussion. ChatGPT has also raised public awareness of artificial intelligence, as some of the world's biggest companies race to develop rival products.

In March, Altman said he was a little scared of AI, worrying that an authoritarian government could develop the technology. Other tech leaders, including former Google CEO Eric Schmidt and Twitter owner Elon Musk, have also cautioned about the risks of AI.

Earlier this year, Musk and Apple co-founder Steve Wozniak signed a separate open letter urging AI labs to pause the training of systems more powerful than GPT-4, OpenAI's latest language model. That letter stated that contemporary AI systems are becoming human-competitive.

That letter also asks whether we should develop nonhuman minds that might eventually replace us, automate away jobs, or risk losing control of our civilization.

