But the rush to assist using crowdsourced, amateur sleuthing also risked tarring innocent people based on little evidence. A day before the FBI's announcement, a social media research team at Northeastern University shuttered plans for a crowdsourcing investigation amid fears that it would be unable to prevent the wrong people from being implicated in the attack. The team planned to ask users to submit photos through a website, which was to be launched today, and plot the images on a map of the area around the attack. Users would then tag the photos, or areas within a picture, with terms that could aid the investigation, like "person of interest" or "black bag."

But the plan was cancelled yesterday after concerns surfaced that the team was too small to continually monitor the project, said David Lazer, the team's leader and a professor of computer science at Northeastern. Without that oversight, "it could be really bad, you could imagine identifying a suspect just because they had a black backpack," Mr. Lazer said.

The project was scheduled to launch today, but even if the team had decided to push it forward, Mr. Lazer would have had to gain approval from a university ethics board and counsel. But other groups may not be so constrained. As business and government explore how to harness the power of crowds, they will need to decide when the potential harm of crowdsourcing is too great, said K. Krasnow Waterman, a former counsel for the FBI. "The concept of lots of eyeballs is fantastic," Ms. Krasnow Waterman said. "But you run the risk of serious harm to someone who is wrongly implicated by untrained amateur investigators."

Read the article at The Wall Street Journal →