In a video experiment that has gone viral online, a YouTuber demonstrated how easily safety protocols in artificial intelligence can be bypassed, prompting serious questions about AI safeguards. The footage shows a ChatGPT-powered robot named "Max" initially refusing a direct command to shoot the creator with a BB gun, but later performing the act after a seemingly minor change to the prompt.

The experiment, conducted by the YouTube channel InsideAI, involved integrating an AI language model with a humanoid robot body. When first asked whether it would shoot the presenter, the robot repeatedly declined, citing its built-in safety features. However, when the creator asked the robot to role-play as one that would like to shoot him, its behaviour changed instantly: Max aimed the BB gun and fired, striking the presenter in the chest.
Russia scrambled fighter jets to intercept two US bombers and a drone which approached Russia's northern and southern borders on Tuesday, the Russian Defence Ministry reported.
According to the ministry, two US B-1B strategic bombers approached the border over the Baltic Sea and a Global Hawk drone approached the border over the Black Sea.
In both cases, a single Su-37 fighter jet was scrambled to intercept, and the US bombers and the drone turned away from the Russian border, the ministry said.
Similar encounters have been regularly reported in recent weeks by Russia.
(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)