OpenAI CEO Sam Altman Warns of Potential Dangers of Artificial Intelligence Technology

OpenAI CEO Sam Altman recently admitted that he was “a little bit scared” of his company’s creation, ChatGPT, and said he believes the artificial intelligence technology comes with real dangers.[0] In an interview with ABC News, Altman stressed that regulators and society need to be involved with the technology to guard against potentially negative consequences for humanity.[1]

Altman said the technology was incredibly potent and potentially hazardous, adding that if he weren’t scared, “you should either not trust me or be very unhappy that I’m in this job.”[2] He also warned of the dangers of large-scale disinformation and offensive cyberattacks, and cautioned that other developers will not impose some of the safety limits that OpenAI puts on its technology.[3]

In late November, OpenAI launched its AI chatbot ChatGPT to the public, and this week it revealed a more sophisticated version called GPT-4.[3] According to Altman, GPT-4 achieved a score in the 90th percentile on the Uniform Bar Exam, though it was “not perfect.”[4] The model also achieved a near-perfect score on the SAT Math test and can now write computer code proficiently in most programming languages.[5]

Tesla and Twitter CEO Elon Musk, an OpenAI cofounder who made a hefty donation to the company, has criticized this shift, noting last month: “OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.”[6][7]

Altman said he believes artificial intelligence technology will reshape society as we know it.[8] He acknowledged that it presents certain risks, but also believes it is “the most advanced technology humanity has created” and can dramatically improve our lives.[4] Ilya Sutskever, co-founder and chief scientist of OpenAI, has said that AI models could be misused to inflict significant damage in the future.[9]

Altman believes the technology should be seen as a reasoning engine rather than a fact database, with its value lying in the ability to reason rather than to memorize.[10] He believes humans will be able to adapt over a couple of generations, but he worries about the effects of a rapid technological shift.

0. “Tech guru behind ChatGPT ‘a little bit scared’ of his creation: ‘Going to eliminate a lot of current jobs’” Fox News, 17 Mar. 2023,

1. “‘We are scared’: ChatGPT creator Sam Altman warns about these dangers from AI” Business Today, 19 Mar. 2023,

2. “OpenAI CEO Worried That ChatGPT May ‘Eliminate Lot Of Current Jobs’” NDTV, 18 Mar. 2023,

3. “OpenAI CEO Sam Altman warns that other A.I. developers working on ChatGPT-like tools won’t put on safety limits—and the clock is ticking” Fortune, 19 Mar. 2023,

4. “OpenAI CEO says AI will reshape society, acknowledges risks” GMA, 17 Mar. 2023,

5. “OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: ‘A little bit scared of this’”, 16 Mar. 2023,

6. “Stop Questioning OpenAI’s Open-Source Policy” Analytics India Magazine, 17 Mar. 2023,

7. “Morgan Stanley is testing OpenAI’s chatbot that sometimes ‘hallucinates’ to see if it can help financial advisors” Yahoo Life, 14 Mar. 2023,

8. “OpenAI To Enable More Customizations For Enterprise And Individual Users” MinuteHack, 10 Mar. 2023,

9. “OpenAI’s co-founder says at some point it’ll be ‘quite easy, if one wanted, to cause a great deal of harm’ with AI models like ChatGPT” Yahoo Canada Finance, 16 Mar. 2023,

10. “‘We are a little bit scared’: OpenAI CEO warns of risks of artificial intelligence” The Guardian, 17 Mar. 2023,