Sam Altman, CEO of OpenAI, and Lisa Su, CEO of Advanced Micro Devices, testify before the Senate Commerce, Science and Transportation Committee hearing titled "Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation," in the Hart building on Thursday, May 8, 2025.
Tom Williams | CQ-Roll Call, Inc. | Getty Images
In a sweeping interview last week, OpenAI CEO Sam Altman addressed a host of ethical and moral questions about his company and the popular ChatGPT AI model.
"Look, I don't sleep that well at night. There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model," Altman told former Fox News host Tucker Carlson in a nearly hour-long interview.
"I don't actually worry about us getting the big moral decisions wrong," Altman said, though he admitted "maybe we will get those wrong too."
Rather, he said he loses the most sleep over the "very small decisions" on model behavior, which can ultimately have big repercussions.
These decisions tend to center on the ethics that inform ChatGPT, and on which questions the chatbot does and doesn't answer. Here's an outline of some of the moral and ethical dilemmas that appear to be keeping Altman awake at night.
How does ChatGPT handle suicide?
According to Altman, the most difficult issue the company is grappling with these days is how ChatGPT approaches suicide, in light of a lawsuit from a family who blamed the chatbot for their teenage son's suicide.
The CEO said that of the thousands of people who die by suicide each week, many of them could possibly have been talking to ChatGPT in the lead-up.
"They probably talked about [suicide], and we probably didn't save their lives," Altman said candidly. "Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help."
Last month, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16. In the lawsuit, the family said that "ChatGPT actively helped Adam explore suicide methods."
Soon after, in a blog post titled "Helping people when they need it most," OpenAI detailed plans to address ChatGPT's shortcomings when handling "sensitive situations," and said it would keep improving its technology to protect people who are at their most vulnerable.
How are ChatGPT's ethics determined?
Another large topic broached in the sit-down interview was the ethics and morals that inform ChatGPT and its stewards.
While Altman described the base model of ChatGPT as trained on the collective experience, knowledge and learnings of humanity, he said that OpenAI must then align certain behaviors of the chatbot and decide which questions it won't answer.
"It's a really hard problem. We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model's ability to learn and apply a moral framework."
When pressed on how certain model specifications are decided, Altman said the company had consulted "hundreds of moral philosophers and people who thought about ethics of technology and systems."
One example he gave of a model specification was that ChatGPT will avoid answering questions about how to make biological weapons if prompted by users.
"There are clear examples of where society has an interest that is in significant tension with user freedom," Altman said, though he added that the company "won't get everything right, and also needs the input of the world" to help make these decisions.
How private is ChatGPT?
Another big discussion topic was the concept of user privacy with regard to chatbots, with Carlson arguing that generative AI could be used for "totalitarian control."
In response, Altman said one piece of policy he has been pushing for in Washington is "AI privilege," which refers to the idea that anything a user says to a chatbot should be completely confidential.
"When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?… I think we should have the same concept for AI."

According to Altman, that would allow users to consult AI chatbots about their medical history and legal problems, among other things. Currently, U.S. officials can subpoena the company for user data, he added.
"I think I feel optimistic that we can get the government to understand the importance of this," he said.
Will ChatGPT be used in military operations?
Asked by Carlson whether ChatGPT would be used by the military to harm humans, Altman did not provide a direct answer.
"I don't know the way that people in the military use ChatGPT today… but I suspect there are a lot of people in the military talking to ChatGPT for advice."
Later, he added that he wasn't sure "exactly how to feel about that."
OpenAI was one of the AI companies that received a $200 million contract from the U.S. Department of Defense to put generative AI to work for the U.S. military. The firm said in a blog post that it would provide the U.S. government access to custom AI models for national security, support and product roadmap information.
Just how powerful is OpenAI?
Carlson, in his interview, predicted that on its current trajectory, generative AI, and by extension Sam Altman, could amass more power than any other person, going so far as to call ChatGPT a "religion."
In response, Altman said he used to worry a lot about the concentration of power that could result from generative AI, but he now believes that AI will result in "a huge up leveling" of all people.
"What's happening now is tons of people use ChatGPT and other chatbots, and they're all more capable. They're all kind of doing more. They're all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good."
However, the CEO said he thinks AI will eliminate many jobs that exist today, especially in the short term.

