Knight's work focuses specifically on social robotics — how robots and humans interact today, and how they could interact in the future. She's already exploring these subjects practically with Marilyn Monrobot, a lab that creates robots with social sensibility, designed to entertain an audience (here she is speaking at a TED conference with a robot that does stand-up comedy). She's also the founder of the Robotics Film Festival, and she even ran the Cyborg Cabaret, a Pittsburgh-based variety show that prominently featured robots as "actors."
Knight obviously cares about getting people to care about robots. Her paper is written in perfectly accessible language, and we think it's deserving of your attention. In the interest of getting you to take a look, here are a few key quotes from it that resonated with us.
Robots do not require eyes, arms or legs for us to treat them like social agents. It turns out that we rapidly assess machine capabilities and personas instinctively, perhaps because machines have physical embodiments and frequently readable objectives. Sociability is our natural interface, to each other and to living creatures in general. As part of that innate behavior, we quickly seek to identify objects from agents. In fact, as social creatures, it is often our default behavior to anthropomorphize moving robots.
In other words, for us to bond easily with the robots of the future, all they need to do is move and socialize with us on some level.
Will people be comfortable getting in an airplane with no pilot, even if domestic passenger drones have a much better safety record than human piloted commercial aviation? Will a patient be disconcerted or pleasantly surprised by a medical device that makes small talk, terrified or reassured by one that makes highly accurate incisions?
It turns out that iRobot, the manufacturers of the Packbot bomb-disposal robots, have actually received boxes of shrapnel consisting of the robots’ remains after an explosion with a note saying, “Can you fix it?” Upon offering to send a new robot to the unit, the soldiers say, “No, we want that one.” That specific robot was the one they had shared experiences with, bonded with, and the one they did not want to “die.”
As Carnegie Mellon ethicist John Hooker once told our Robots Ethics class, while in theory there is not a moral negative to hurting a robot, if we regard that robot as a social entity, causing it damage reflects poorly on us. This is not dissimilar from discouraging young children from hurting ants, as we do not want such play behaviors to develop into biting other children at school.
The flipside of considering human bonding with machines is that robotic designers, and ultimately policymakers, may need to protect users from opportunities for social robots to replace or supplement healthy human contact or, more darkly, retard normal development. Think about a more extreme version of how video games are occasionally used by vulnerable populations (for example, the socially isolated or the depressed) as an escape that may or may not keep them from reengaging with other humans.
In other words, beware of talking to your toaster too much, no matter how engaging the conversations are.
On the other hand, it is also possible to seek out ways of using these technologies to encourage human connection. As some autism researchers are investigating, social robots might help socially impaired people relate to others, practicing empathetic behaviors as a stepping stone to normal human contact.