The companies’ efforts, while beyond what they’ve done in years past, are nowhere near shutting down the online coronavirus “infodemic.” Misinformation still thrives on these sites. In one example, Consumer Reports’ Ryan Felton reported on fraudulent virus-related products on Amazon, some of which remained available even after the site’s purge; others have reported that price-gouging remains rampant on the platform, too, as do pernicious lies on Facebook and Twitter.
“I don’t think we’ve ever seen the social media world come together on an issue like this—and yet still it’s falling short,” says UW’s West.
That’s in part because the platforms’ misinformation defenses have never been tested with a crisis this fast-moving and big. Election-related skullduggery orbits one country or region at a time; other health- or science-related misinformation operates at a constant hum rather than inundating the internet all at once in the span of a few months. “There’s always been health misinformation on Facebook,” says Renee DiResta, research manager at the Stanford Internet Observatory. “But now the entire world is posting about the same thing.”
Even in an all-hands moment like this one, some efforts are controversial. For instance, Barrett says he supports removing “provably false content”—especially when health and safety are at stake. But takedowns can also backfire, DiResta says. “That then creates the perception that the information is being censored, and there’s a little bit of concern that that creates or feeds a conspiracy that the platform is trying to prevent you from knowing the truth.”
In interviews with the press, Facebook CEO Mark Zuckerberg has promised that better artificial intelligence tools are under development that could address the sheer scale of the misinformation problem. But an automated solution that works across languages and at scale is unlikely to arrive anytime soon, experts say. For now, Facebook uses AI to surface claims that need a closer look and pass them to fact-checkers, who are often overwhelmed. “This is not something AI does well,” West says. “There’s too much context and too many ways to subvert and adapt to the system.”
A Google spokesman contacted by CR pointed to Google-owned YouTube’s work to stanch misinformation as a sign of the company’s progress in this area. “In 2019 alone, we launched over 30 different changes to reduce recommendations of borderline content and harmful misinformation, including climate change misinformation and other types of conspiracy videos,” said Farshad Shadloo. “Thanks to this change, watch time this type of content gets from nonsubscribed recommendations has dropped by over 70 percent in the U.S.”
Facebook and Twitter did not respond to CR’s requests for comment on the issue.
The companies haven’t exhausted all their options. But there’s likely a ceiling to their ability to keep bad information away from their users, especially during a sudden global crisis.
“They could do more—but they can’t do everything,” says Justin Brookman, CR’s advocacy director for consumer privacy and technology. “They can’t solve for human nature; they can’t police that racist or confusing or crazy email forward from Grandma.”
Individuals can also use the “SIFT technique” to investigate questionable content. The acronym stands for Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context. Developed by Mike Caulfield, a digital information literacy expert at Washington State University, the method can help readers separate reliable information from sketchy posts online.