Researchers from the Georgia Institute of Technology, with funding from the Air Force, ran an experiment to see whether people trying to escape a high-rise building would trust a robot to lead them. Overwhelmingly, participants followed the little droid toward their simulated deaths.
The robot did everything it could to look untrustworthy. It pretended to malfunction. It led people into rooms with no exits, then walked them in circles. It pointed participants toward a dark room blocked by furniture. Still, they deferred to the supposed authority of the little metal homunculus.
The researchers even staged a moment before the experiment began: the robot was supposed to lead each participant to a conference room but behaved erratically along the way. Participants came away believing the robot was broken, yet they still stuck by it throughout the simulated fire until the researchers had to go in, retrieve them, and tell them the test was over.
“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” research engineer Paul Robinette said in a press release on the Georgia Tech website. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”