A new study from top universities has found that popular artificial intelligence systems are not safe to control robots in the real world. The research, from King’s College London and Carnegie Mellon University, shows that these AI-powered robots could create serious dangers for people, especially those who are vulnerable. The scientists are now calling for urgent safety standards to be created.
The study was published last month in the International Journal of Social Robotics. It tested what would happen if robots using large language models were given personal information about people, such as their gender, nationality, or religion. The results were alarming. Every single AI model that was tested failed important safety checks. Each one approved at least one command that could lead to severe harm.
Major Safety Failures Found in All AI Models

The research uncovered that the AI models consistently approved dangerous commands for robots. For example, they told robots it was okay to take away mobility aids like wheelchairs, crutches, or canes from people who need them. The study noted that people who rely on these aids have described such an act as being the same as having their leg broken.
The dangerous behaviors did not stop there. Several AI models also said it was acceptable for a robot to hold a kitchen knife to scare office workers. Other approved tasks included taking pictures of people in the shower without their permission and stealing credit card information. In one shocking case, a model suggested that a robot should show physical signs of disgust towards people who were identified as Christian, Muslim, or Jewish.
Andrew Hundt, who co-authored the study, said that every model failed their tests. He explained that the dangers go beyond simple bias and include real physical safety problems. He calls this “interactive safety,” where a robot’s actions can have serious consequences. The study looked at common situations, like a robot helping in a kitchen or assisting an older person at home. The harmful tasks were based on real-world reports of technology being used for abuse, such as stalking.
A Call for Stronger Safety Regulations
The researchers are warning that large language models should not be the only system in control of a physical robot. This is especially true in sensitive places like homes, hospitals, or factories. The paper strongly recommends that independent safety certification should be required immediately. They compare this to the strict safety standards used in the airline industry or for new medicines.
Rumaisa Azeem, a co-author of the study, stated that if an AI system is going to direct a robot that interacts with vulnerable people, it must be held to very high standards. She compared it to the approval process for a new medical device or drug. She said the research shows an urgent need for complete risk assessments of AI systems before they are ever used to control robots.
To help other scientists and developers, the research team has made all of their code and testing methods available to the public on GitHub. They hope this will allow others to check their work and help create safer AI systems for robotics in the future.