Robot with “morals” makes surprisingly deadly decisions

Posted by K R on

Anyone excited by the idea of stepping into a driverless car should read the results of a somewhat alarming experiment at the University of the West of England in Bristol, where a robot was programmed to rescue others from certain doom… but often didn’t.

The so-called ‘ethical robot’, also known as the Asimov robot after the science fiction writer whose work inspired the film ‘I, Robot’, saved robots acting the part of humans from falling into a hole, but often stood by and let them trundle into the danger zone.

The experiment used robots programmed to be ‘aware’ of their surroundings, together with a separate program instructing the robot to save lives where possible. Despite having time to save one of the two ‘humans’ from the hole, the robot failed to do so more than half of the time; in the final experiment it saved the ‘people’ only 16 times out of 33.

The robot’s programming mirrored Isaac Asimov’s First Law of Robotics: ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm.’ The robot was programmed to save humans wherever possible, and all was fine, says roboticist Alan Winfield, at least to begin with. “We introduced a third robot - acting as a second proxy human. So now our ethical robot would face a dilemma - which one should it rescue?” says Winfield.

The problem isn’t, thankfully, that robots are enemies of humankind, but that the robot tried too hard to save lives. Three times out of 33, the robot managed, through a cunning series of lunges, to save both. The rest of the time, it appears the robot simply couldn’t decide.

More via Yahoo News UK.
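
To get a feel for how a well-meaning rule can still lose everyone, here is a minimal toy sketch in Python of the dilemma described above. It assumes a simple ‘head for whichever proxy looks closest to danger’ rule; the positions, speeds and noise model are invented for illustration and this is not the Bristol lab’s actual controller.

```python
import random

def run_trial(noise=0.0, steps=200, dt=0.1):
    # One rescuer robot sits between two "proxy humans" that drift toward
    # holes on either side. All numbers here are invented for illustration.
    robot = 0.0
    proxies = {"A": {"pos": -5.0, "hole": -10.0, "dir": -1},
               "B": {"pos": +5.0, "hole": +10.0, "dir": +1}}
    saved, lost = set(), set()

    def urgency(name):
        # First-Law-style rule: how close is this proxy to falling in?
        # Sensor noise can flip the robot's choice from one cycle to the next.
        p = proxies[name]
        return abs(p["hole"] - p["pos"]) + random.gauss(0, noise)

    for _ in range(steps):
        active = [n for n in proxies if n not in saved | lost]
        if not active:
            break
        target = proxies[min(active, key=urgency)]["pos"]
        robot += 0.3 if target > robot else -0.3        # rescuer covers 0.3 per tick
        for n in active:
            p = proxies[n]
            if abs(robot - p["pos"]) < 0.3:             # reached the proxy in time
                saved.add(n)
            else:
                p["pos"] += p["dir"] * dt               # proxy covers 0.1 per tick
                if (p["pos"] - p["hole"]) * p["dir"] >= 0:
                    lost.add(n)                         # trundled into the hole
    return saved, lost

random.seed(1)
print(run_trial(noise=0.0))   # decisive rule: reaches one proxy, the other falls in
print(run_trial(noise=2.0))   # noisy, flip-flopping rule: it dithers and can lose both
```

With a clean estimate the robot commits to one proxy and saves it; with a noisy estimate of which proxy is most at risk, it keeps changing its mind, oscillates in the middle, and can lose both, which is roughly the ‘can’t decide’ behaviour the experiment reports.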
