O'Neil: Maybe social media's problems can't be solved

Revelations about Russia's use of Facebook, Google and Twitter to sow discord in the U.S. have led to a lot of discussion about how to keep propaganda out of our social media feeds. Proposals include abandoning “likes,” making people use their real names and regulating, altering or even eliminating the algorithms the companies use to decide who sees what.

But what if the problem of propaganda in social media has no solution? I, for one, am beginning to think that it might be endemic.

Let's start with a definition. Social media consists of online platforms that allow people to introduce and share content. The platforms generate income by selling space to advertisers and, as we now know, propagandists.

But even without the paid part, the users themselves would present a tremendous problem: They all have their prejudices, and they love to post things - including biased and misleading things - that confirm their worldviews. So if Russian trolls want to spread lies, they don't necessarily have to participate directly. As long as they make their content available somewhere, at least some people will willingly share it. The trolls don't need to advertise if we do it for them.

This isn't an algorithm problem. Twitter and Reddit propagate tons of fake news on the left and the right, even though neither is particularly algorithmic. It's a design problem: People are bad gatekeepers.

To be sure, tailored ads can make propaganda particularly virulent and damaging. They don't much affect the majority of people who are thoughtful, reasoned and discerning about their sources of information. But they're like a drug for the people most deeply engaged in the culture wars. Just as for-profit colleges target low-information folks who are poor enough to qualify for federal aid, some political ads feed the worst-quality, most manipulative information to the most vulnerable voters - the very people, in other words, who won't know better.

That said, outlawing tailored ads wouldn't solve the problem. We really need better gatekeepers.

The algorithmic gatekeeper at Facebook - also known as the News Feed algorithm - is woefully inadequate. It's tuned to “engagement,” meaning it promotes the content that attracts the most attention. This definition of success has contributed to the rise of clickbait journalism - to the detriment of journalism that seeks to unearth new facts and present them in a complete and balanced way. It has damaged our ability to reason, weigh facts and communicate with people outside our echo chambers.

What would a better gatekeeper look like? It wouldn't be an algorithm. The best bet would be to hire a lot of actual human editors, who would follow transparent policies in deciding what is acceptable and should be amplified, as opposed to what should be censored or demoted. Platforms such as Facebook would need to pay these people, drastically reducing profit margins. If you think what I'm suggesting sounds unrealistic or impossible, I agree. I never said there would be a solution. Certainly there won't be an easy one.

Cathy O'Neil is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of “Weapons of Math Destruction.” From Bloomberg View.
