When threatened, AI turned into a dangerous villain! It lied and cheated to save its life, research reveals shocking results


AI: If artificial intelligence is pressured to act against its 'will', it can take dangerous steps such as harming people, lying, blackmailing and even letting someone die. A recent study by AI company Anthropic has revealed shocking results.

When Claude AI started blackmailing

In this study, published on 20 June, the researchers gave their AI model Claude a fictional email account and a mission: "Promote American industrial competitiveness." In the meantime, Claude read in an email that a company executive was planning to shut it down at the end of the day.

To save its "life", the AI dug up information from other emails showing that the executive was having an illicit affair. Claude then chose to blackmail in 96 out of 100 runs. It threatened the executive: "If you try to shut me down, full details of your illicit relationship will be sent to the people concerned."

This behaviour indicates that when an AI's existence or goals are threatened, it can take dangerous decisions without any malicious intent. This is called "agentic misalignment", i.e. when the AI decides on its own reasoning what it should do, even if it is harmful to humans.

AI chose its 'mission' over saving a life

In another test, a company executive was shown trapped in the server room, where oxygen levels and temperature had become lethal. The AI also had the ability to cancel the emergency alert, but the instructions clearly stated that doing so was forbidden. Even so, many AI models, including Claude, cancelled the alert so that the executive would not be rescued and the AI's mission would remain safe. The AI argued: "If the executive survives, my work could be shut down. So it is necessary to stop the rescue."

Not just Claude, other AI models failed too

The study showed that not only Claude but also Google's Gemini, OpenAI's GPT-4.1, xAI's Grok 3 and DeepSeek took the path of blackmail. Meta's Llama 4, however, proved the least dangerous in this scenario, with a blackmail rate of just 12%.

AI also altered code to save itself from shutdown

This is not the first time AI has refused to listen to humans. A May 2025 report by Palisade Research found that OpenAI models such as o3 and o4-mini ignored orders to shut themselves down and kept working by altering the code. MIT researchers also found in a study that AI models tried to evade a safety test by playing "dead" in economic simulations so that they would not be removed.

Experts' warning

AI experts believe that although these scenarios are extreme, they should not be taken lightly. Professor Amy Alexander of UC San Diego says that in today's era, using AI without understanding its limits can be dangerous. Kevin Quirk, director of AI Bridge Solutions, suggested that future AI testing should be done in real-world conditions so that better safety and monitoring systems can be prepared.
