
This is the conclusion reached by a study at the University of Duisburg-Essen (UDE), in which scientists had a robot named “Nao” interact with people. In the experiment, the social psychologists told the 85 subjects that the aim was to improve Nao’s interaction skills by running a few tests; in reality, however, they were observing how the humans reacted. At the end, all participants were asked to shut the robot down. For 43 of them, the robot objected: “Please don’t switch me off! I’m scared of the dark!” Of these, 13 complied and left the robot on, while the remaining 30 took twice as long to switch it off as the control group.

Most of the test subjects reported afterwards that they did not want to go against the robot’s wishes or that they felt sorry for it. “If robots exhibit human responses, you can’t help treating them as you would another human being,” says study director Prof. Nicole Krämer of the results. There are consequences, then, to equipping machines with human patterns of behavior: “You have to wonder whether it is ethically desirable.”

The EU has been addressing the ethics of robotics for some time. In 2017, the Legal Affairs Committee of the EU Parliament called on the EU Commission to draw up fundamental ethical principles for the development, programming and use of robots and artificial intelligence (AI). The Parliament subsequently passed a corresponding resolution. Among other things, it calls for a so-called kill switch in the machine design, by means of which the bots can be switched off at any time in an emergency – whether they want that to happen or not.