3Qs: When Web reviews aren’t what they seem

Crowdsourced review platforms play an outsized role in consumers' spending decisions on everything from schools to doctors, and even in which political candidates to support. According to one study, a single additional rating star on Yelp can increase a business's revenue by 9 percent. But subverting those systems can cause serious problems. That's exactly what happens in a phenomenon Christo Wilson, an assistant professor in the College of Computer and Information Science, has dubbed "crowdturfing": crowdsourcing gone awry. An enormous pool of cheap online labor, managed through less reputable versions of the website Mechanical Turk, allows companies to buy opinions and reviews wholesale. We asked Wilson to explain the phenomenon and what his team is doing in response.

Northeastern assistant professor of computer and information sciences Christo Wilson is working on novel ways to secure crowdsourced reviews, which are increasingly being purchased by companies looking to increase their ratings online. Photo by Mariah Tauger.

What strategies have your team used to secure crowdsourced review sites?

We have very good algorithms for identifying fake content and spam, and they're a great first line of defense, but they're not as good as a real person. When human beings look at a Facebook profile, they take everything in context: Does the photo match the linguistic style of the person's wall posts? They pick up on very subtle things.

So you can throw more advanced machine learning algorithms at the problem, or you can fight fire with fire. In other words, if attackers can pay people to generate bad stuff, why can't we pay people to find it? We actually built a system on Mechanical Turk: if you have a pile of Facebook accounts you think are suspicious, you hand them to a crowdsourced group of human moderators, who rate each one for legitimacy.

It turns out that people are actually quite good at this. So we priced it out and it would actually be reasonable, economically speaking, for a site such as Facebook to have an entirely crowdsourced system in which you take all the suspicious stuff, give it to human moderators, and they can tell you with very high accuracy whether something is real or fake.
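The pipeline Wilson describes can be reduced to a simple aggregation step. Below is a minimal sketch, with hypothetical names throughout (`crowd_verdict`, `moderate`, `get_votes`, and the vote labels are illustrative assumptions, not the team's actual system): suspicious accounts go out to a pool of human moderators, and a majority of "fake" votes flags the account.

```python
from collections import Counter

def crowd_verdict(votes, threshold=0.5):
    """Aggregate moderator votes ('real' or 'fake') into one verdict.

    The account is judged fake when the share of 'fake' votes
    strictly exceeds the threshold (a tie stays 'real').
    """
    fake_share = Counter(votes)["fake"] / len(votes)
    return "fake" if fake_share > threshold else "real"

def moderate(suspicious_accounts, get_votes, min_voters=3):
    """Send each suspicious account to crowd moderators and keep the fakes.

    get_votes stands in for a Mechanical Turk-style task that returns
    the list of labels the moderators assigned to one account.
    """
    flagged = []
    for account in suspicious_accounts:
        votes = get_votes(account)
        if len(votes) >= min_voters and crowd_verdict(votes) == "fake":
            flagged.append(account)
    return flagged
```

For example, an account rated `["fake", "fake", "real"]` by three moderators would be flagged, while one rated `["real", "real", "real"]` would pass. Requiring a minimum number of voters is one way to keep any single moderator's mistake (or malice) from deciding an account's fate.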

Will this be enough?

Even if you shut down the bad sites, it's whack-a-mole: someone will always open a new one. So the end goal is to raise the attackers' costs to the point where the attack is no longer economical. Right now a single account costs a fraction of a penny when you buy in bulk. But if you can identify and ban large numbers of accounts in one go, you can actually watch prices on the market change: fake accounts suddenly become very scarce, so prices go way up. If you can sustain that pressure and keep getting rid of these accounts, it's no longer economical for people to buy fake accounts at scale. Hopefully you kill the market that way.

But I'm not sure this is something that will ever be completely solved, simply because it very quickly enters a gray area between what is real and what is fake. For example, right now you could buy 1,000 low-quality fake accounts. They don't have any followers; they don't really do anything. Let's say I shut all of those down and force the attacker to the point where every fake account has to look perfectly real: it's run by a real person, that person tweets normally every day, they have actual followers, and so on. But every once in a while they do something that looks like an advertisement. Is that legitimate promotion, or is it paid?

It stops being a technical issue that can be solved with algorithms and very quickly becomes a moral and ethical debate. For example, there are plenty of businesses on Yelp that will give you a coffee if you agree to give them a five-star review. Is that review then considered spam? Should I penalize the user who gives them a five-star review in exchange for discounted coffee? From Yelp's standpoint you probably should, because it's lowering the quality of their data. But this is a real person, and they're not actually being paid. So where do you draw the line and start discounting people's opinions?

What are the privacy implications of crowdsourcing the moderation process?

If you're giving content to external moderators, paying a bunch of people to tell you whether something's real or fake, how do you know those people are trustworthy? How do you know they're not going to take private information and leak it? That's an especially serious concern on Facebook, where we're talking about evaluating someone's personal profile.

If an account is set to “friends only” and we ask the friends to evaluate it, that’s okay. But a professional force of moderators doesn’t have access to everything in the account. So what is appropriate to show them? Is it appropriate to show them anything at all?

There are some clever ways to get around this. If people are flagging things as inappropriate, you constrain the set of things you show the moderator to only things that the person who did the flagging could see. The moderators should be able to make the same evaluation based on the same information; we shouldn’t have to give them anything more.
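That constraint is straightforward to express in code. Here is a minimal sketch under assumed data structures (the `audience` and `owner_friends` fields and the function names are hypothetical, not any real Facebook API): the moderator's view is computed with exactly the same visibility rule that applied to the user who flagged the content.

```python
def visible_to(viewer, posts):
    """Return only the posts this viewer is allowed to see.

    Assumed model: each post carries an 'audience' setting of
    'public' or 'friends', plus the owner's friend set.
    """
    return [
        p for p in posts
        if p["audience"] == "public"
        or (p["audience"] == "friends" and viewer in p["owner_friends"])
    ]

def moderation_view(flagger, profile_posts):
    """Constrain the moderator's view to what the flagging user could see.

    The moderator evaluates the same information the flagger had,
    so no additional private data is ever exposed.
    """
    return visible_to(flagger, profile_posts)
```

So if a friend of the account owner flags the profile, the moderator sees public posts plus friends-only posts; if a stranger flags it, the moderator sees only the public posts. The privacy guarantee falls out of reusing one visibility function for both audiences.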

The fact of the matter is that Facebook and YouTube each have a billion users, and all of those people are essentially crowdsourced sensors. If they see bad stuff they can flag it; most of these sites already have mechanisms for doing so, but most people don't use them. We need to increase awareness and transparency; the more people who get involved, the better.

 
