
YouTube, a subsidiary of Google, was long unique in allowing people to upload and monetize their own content. While this openness may have made content creation accessible to a greater number of people (YouTube has approximately 2 billion users worldwide), the company has faced myriad challenges in content moderation, including recent problems with misinformation and disinformation related to the COVID-19 pandemic.


As part of a larger study of the role of internet platforms in combating pandemic-related mis/disinformation, policy analysts Spandana Singh and Koustubh “K.J.” Bagchi noted that YouTube’s website has become “a major provider” of health information. But is that information reliable?

In the opening months of the pandemic, one watchdog organization “found instances of YouTube profiting from videos pushing unproven treatments for the coronavirus,” note Singh and Bagchi. “The platform was running advertisements with videos pushing herbs, meditative music, and potentially unsafe over-the-counter supplements as cures for the coronavirus.”

At the same time, however, a parallel analysis showed that

among a sample of 320 videos related to the pandemic, four-fifths of the channels sharing coronavirus news and information are maintained by professional news outlets and that search results for popular coronavirus-related terms returned mostly factual and neutral video results.

By June 2020, YouTube had taken a number of steps “to educate users from verified sources and to dissuade misinformation attempts,” including establishing guidelines and restrictions for demonetizing pandemic-related content. The site also redirected users searching for COVID-19-related terms to the WHO and other health organizations. Singh and Bagchi also note that YouTube “committed to donating ad inventory to governments and NGOs” for education and the dissemination of reliable information. In addition, the company proposed expanding its use of “information panels,” digital placards that “provide users with contextual information from third-party fact-checked articles.”

As with other popular social media platforms, YouTube’s moderation system is largely automated, and its human review capacity diminished significantly during the first several months of the pandemic.

“While these efforts to combat misinformation should yield positive results, YouTube has warned that the service’s reliance on automated tools may lead to an increase in erroneous removals of videos that appear to be in violation of YouTube’s policies,” explain Singh and Bagchi. And indeed, subsequent complaints about aggressive censorship and mistakenly flagged videos led YouTube to increase its number of human moderators in September 2020.

Singh and Bagchi argue that it would be in the public’s best interest for YouTube to provide periodic updates on its moderation efforts during the pandemic. Moreover, after the pandemic, the company

should create a comprehensive COVID-19 report that highlights the scope and scale of content moderation efforts during this time, and that provides data showing the amount of content that was removed as a result of automated detection as well as human flags.

The authors conclude that such transparency “will help civil society organizations and researchers further understand the use of automated tools in moderating misleading content.”



Resources


Singh, Spandana, and Koustubh “K.J.” Bagchi. “How Internet Platforms Are Combating Disinformation and Misinformation in the Age of COVID-19,” pp. 24–25. New America.