As reported in the medical journal BMJ, “Facebook has removed 16 million pieces of its content and added warnings to around 167 million. YouTube has removed more than 850,000 videos related to ‘dangerous or misleading covid-19 medical information.’”
At some point last year, a range of interests, from media companies and Big Tech to government agencies, formed a consensus on what counts as misinformation and what counts as fact. But there’s a problem: not everything this information consortium labels misinformation turns out to be untrue. How can that be? How can fact checkers, in some cases, end up perpetuating misinformation themselves?
Sander van der Linden, professor of social psychology in society at Cambridge University in the UK, observes, “I think it’s quite dangerous for scientific content to be labeled as misinformation, just because of the way people might perceive that.” He continued, “Even though it might fit under a definition [of misinformation] in a very technical sense, I’m not sure if that’s the right way to describe it more generally because it...