What can a wide-eyed, talking robot teach us about trust?

A lot, according to Northeastern psychology professor David DeSteno and his colleagues, who are conducting innovative research to determine how humans decide to trust strangers and whether those decisions are accurate.

The interdisciplinary research project, funded by the National Science Foundation (NSF), is being conducted in collaboration with Cynthia Breazeal, director of the MIT Media Lab’s Personal Robots Group, and with Cornell’s Robert Frank, an economist, and David Pizarro, a psychologist.

The researchers are examining whether nonverbal cues and gestures could affect our trustworthiness judgments. “People tend to mimic each other’s body language,” said DeSteno, “which might help them develop intuitions about what other people are feeling — intuitions about whether they’ll treat them fairly.”

This project tests their theories by having humans interact with the social robot Nexi and judge her trustworthiness. Unbeknownst to participants, Nexi has been programmed to make certain gestures while speaking with selected participants, gestures that the team hypothesizes could determine whether she’s deemed trustworthy.

“Using a humanoid robot whose every expression and gesture we can control will allow us to better identify the exact cues and psychological processes that underlie humans’ ability to accurately predict if a stranger is trustworthy,” said DeSteno.

During the first part of the experiment, Nexi makes small talk with her human counterpart for 10 minutes, asking and answering questions about topics such as travel, where they are from, and what they like most about living in Boston.

“The goal was to simulate a normal conversation with accompanying movements to see what the mind would intuitively glean about the trustworthiness of another,” said DeSteno.

The participants then play an economic game called “Give Some,” which asks them to determine how much money Nexi might give them at the expense of her individual profit. Simultaneously, they decide how much, if any, they’ll give to Nexi. The rules of the game allow for two distinct outcomes: a higher individual profit for one partner and a loss for the other, or relatively smaller but equal profits for both.
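The article doesn’t give the game’s exact rules, but its description matches the payoff structure of a standard token-exchange trust game, in which giving is costly to the giver and worth more to the receiver. Here is a minimal sketch in Python; the endowment of 4 tokens and the 1:2 keep/give values are illustrative assumptions, not the study’s actual parameters:

```python
def give_some_payoffs(given_by_a, given_by_b, endowment=4,
                      keep_value=1, give_value=2):
    """Payoffs for a hypothetical 'Give Some'-style exchange.

    Each player starts with `endowment` tokens; a kept token pays
    `keep_value` to its owner, and a given token pays `give_value`
    to the partner. These numbers are illustrative assumptions,
    not the study's actual parameters.
    """
    payoff_a = (endowment - given_by_a) * keep_value + given_by_b * give_value
    payoff_b = (endowment - given_by_b) * keep_value + given_by_a * give_value
    return payoff_a, payoff_b

# Mutual cooperation: relatively smaller but equal profits for both.
print(give_some_payoffs(4, 4))  # (8, 8)

# One-sided defection: a higher individual profit for one partner
# and a loss for the other.
print(give_some_payoffs(0, 4))  # (12, 0)
```

Under these assumed values, each player profits most individually by keeping everything while the partner gives, which is why correctly judging a stranger’s trustworthiness matters in the game.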

“Trust might not be determined by one isolated gesture, but rather a ‘dance’ that happens between the strangers, which leads them to trust or not trust the other,” said DeSteno, who, with his colleagues, will continue testing their theories by seeing if Nexi can be taught to predict the trustworthiness of human partners.