A friendlier Facebook


Online social networks have created an entirely new way for humans to interact, and the standard operating procedures don’t apply. We don’t get the immediate feedback we do with face-to-face interactions: cues about the other person’s reaction through their expressions and gestures, for example. But we subconsciously rely on those cues to proceed in a functional manner.

I don’t remember the first time I heard the term “cyberbullying,” but it wasn’t too long ago, and since then the phenomenon has become somewhat ubiquitous. It’s a lot easier to say something mean online than it is in person. And the most ginormous social network out there knows it. Facebook has become a hotbed not just for sharing cat videos and the contents of your breakfast plate, but also for interpersonal conflict.

Last year, 219 billion photographs were posted to Facebook, according to Arturo Bejar, who recently spoke at Northeastern for the Emotion and Technology conference. “And a lot of them got reported,” he said. When given the standard drop-down list of reasons for the report, these photos turned up as representing nudity or hate speech or harassment. But when Bejar’s team took a closer look, they found some surprising stuff.

An adorable picture of a husky puppy might be reported for hate speech. A photo of a fully clothed couple posing at an amusement park might be reported for nudity. When the team looked even closer, they found that most of the time the person doing the reporting was actually pictured in the photo in question, and that most of the time they were friends with the person who uploaded it.

Facebook wasn’t providing the appropriate resources for people to deal with their online interpersonal issues, so they resorted to the inadequate ones available instead. If, for example, an ex-boyfriend posted a photo of Sally and she didn’t want it online, she’d have to use the “report” button to get it erased. But the options for why she was reporting it didn’t account for her actual feelings. So she’d just say it was a picture of nudity and move on.

Likewise, if a fellow student posted something like, “Sally was crying in school today LOL,” she wouldn’t have a good way to deal with it other than the standard protocol for when people “break the rules of a site,” said Bejar. “Sally was crying in school today” isn’t exactly hate speech, but previously there was nothing better to describe it.

And that’s all assuming Sally even clicks “report” in the first place. What if she were worried about being “found out,” or about getting the other person in trouble? Maybe she’d never click at all. When it comes to cyberbullying, Bejar’s team found, that’s often the case.

They took a scientific approach to dealing with all these issues until they came up with a new “flow,” which he showed, with pretty impressive data, is much more effective than it was a year ago. Now, instead of having a “report” button, every piece of Facebook content simply allows you to say “I don’t want to see this.”

Depending on where you live, how old you are, and your gender, you’ll see a few different options appear next. That’s because Bejar and his team did some pretty exhaustive analysis of how different demographics responded. Girls were more likely to report things than boys, and when boys did report something, they usually didn’t explain why with an emotion word. They’d say “none of the above” instead.

People in different countries have different ideas of what’s polite, so the content they report will be a bit different, and the options for dealing with it have to be, too.
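To make that concrete, here is a minimal sketch of how a demographically tailored “I don’t want to see this” flow might branch. Everything in it (the option wording, the age cutoff, the Viewer fields) is a hypothetical illustration based on Bejar’s description, not Facebook’s actual logic or API:

```python
# Hypothetical sketch of a demographically tailored reporting flow.
# Option text, the age cutoff, and the Viewer fields are illustrative
# assumptions, not Facebook's real rules.

from dataclasses import dataclass


@dataclass
class Viewer:
    age: int
    gender: str   # e.g. "female", "male"
    country: str  # e.g. "US"


def report_options(viewer: Viewer) -> list[str]:
    """Return the follow-up choices shown after 'I don't want to see this.'"""
    options = [
        "It's a photo of me and I don't like it",
        "It's embarrassing",
        "It's inappropriate",
    ]

    # Younger viewers get wording matched to their situation
    # (hypothetical cutoff).
    if viewer.age < 18:
        options.insert(0, "Someone is making fun of me")

    # Bejar's data showed boys often skipped emotion words, so a neutral
    # escape hatch is always offered.
    options.append("None of the above")
    return options


if __name__ == "__main__":
    print(report_options(Viewer(age=14, gender="male", country="US")))
```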

“The number of things that go through the flow that are minor issues is huge,” said Bejar. “The number of things that go through that are very important because they’re very sensitive is small. And we need to build something that does a really good job in both situations.”

Ultimately, Facebook built an entire bullying prevention hub, complete with downloadable guides helping teens, parents, and educators deal with the problems online and off. After all, what happens on Facebook rarely stays on Facebook. If you’re made fun of by a classmate, you’re going to see that person in school tomorrow. So maybe blocking them isn’t the best solution.

“We have a responsibility to provide people with the tools to navigate these issues,” said Bejar. In most cases it will be as simple as providing someone with the language (matched to their age, gender, and culture) to talk to the person who posted whatever they’re concerned about. But in some cases it’ll be more serious than that. And Facebook isn’t shying away from it.

We can say all we want about the multibillion-user network being the big bad corporation encroaching on our privacy and manipulating our minds with targeted advertising, but the fact is we use it. A lot. Our social interactions have morphed because of it (I saw a friend the other day who thought she’d met my husband before because she’d seen him on Facebook; she only realized she hadn’t when she heard him speak). I, for one, found it really refreshing to learn that they’re actually putting a lot of time, money, and resources into figuring out how they can help facilitate healthy interactions between us all.