Silicon Valley

Facebook is apparently now assigning users a trustworthiness score as part of its effort to crack down on misinformation—here’s what that means.

Don’t panic: Such a score might seem reminiscent of something an authoritarian government would do, but it’s really just a perfectly reasonable way to apply machine learning to the problem of fake news. Essentially, the algorithm that prioritizes reported posts for human moderators takes into account a person’s track record of accurately flagging posts as fake. That’s necessary because people sometimes report a story as fake simply because they disagree with it.
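Facebook hasn’t published how the score is computed, but the general idea can be sketched in a few lines. In the toy Python below, every name (ModerationQueue, report, record_verdict) and the trust-update rule are assumptions of the sketch, not Facebook’s actual system; it only illustrates how weighting reports by a reporter’s track record changes which posts moderators see first.

```python
from collections import defaultdict


class ModerationQueue:
    """Toy model: fake-news reports weighted by each reporter's track record.

    All names and the update rule are hypothetical illustrations, not
    Facebook's published method.
    """

    def __init__(self):
        # Every reporter starts at a neutral trust score of 0.5 (range 0-1).
        self.trust = defaultdict(lambda: 0.5)
        # post_id -> accumulated trust-weighted report mass.
        self.report_mass = defaultdict(float)

    def report(self, reporter, post_id):
        # A flag from a historically accurate reporter counts for more.
        self.report_mass[post_id] += self.trust[reporter]

    def next_for_review(self):
        # Surface the post with the most trust-weighted reports.
        return max(self.report_mass, key=self.report_mass.get, default=None)

    def record_verdict(self, reporter, was_fake, lr=0.1):
        # Nudge trust toward 1 if the fact-check agreed with the reporter,
        # toward 0 if it didn't (a simple exponential moving average).
        target = 1.0 if was_fake else 0.0
        self.trust[reporter] += lr * (target - self.trust[reporter])


q = ModerationQueue()
q.report("alice", post_id=1)             # alice flags post 1
q.report("bob", post_id=2)               # bob flags post 2
q.record_verdict("bob", was_fake=False)  # bob's flag didn't check out
q.report("bob", post_id=3)               # bob's future flags now count for less
print(q.next_for_review())
```

The point of the sketch is the prioritization step: nothing is removed automatically; posts are simply bumped up or down the human-review queue according to how reliable their reporters have been.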

Real dangers: Catching misinformation and preventing its spread through social media is an incredibly important problem. Misinformation spread through Facebook not only appears to have swayed the last US presidential election; it has led to outbreaks of mob violence in countries like Sri Lanka and Myanmar, and it has been linked to a higher incidence of racial violence against refugees in Germany.

More than code: It makes sense to automate the work of identifying fake news and evaluating those who report it, but automation alone is surely not enough. Facebook could probably be doing more to police potentially dangerous rumors in some places. Moreover, lessons from history suggest that new regulations and institutions may well be required to account for the power of social networks to amplify and distort information.