How can we better distinguish good information from the bad? There’s a whole range of things that platforms can do to take a bit of the work off people’s shoulders.
Where do misinformation and disinformation on healthcare topics originate?
Disinformation comes from many different sources. Think back to the pandemic: the WHO declared an infodemic on top of the pandemic itself. That means that in addition to the coronavirus, we also had an information crisis.
And we remember, for example, that some disinformation came to us directly from the White House, for instance when Donald Trump suggested at the time that injecting disinfectant might combat COVID-19. The same thing happened in Brazil with Jair Bolsonaro. So to some extent, it is governments that have spread disinformation.
But that’s not the only source. It can come from social media, from people who are uninformed but still spread their opinions. It can come from family WhatsApp groups. And it can even come from journalistic sources, when newsrooms don’t have enough reporters with health expertise, or lack a science team that can work with clinical studies and present them in a comprehensible way.
In this respect we saw a lot of uncertainty during the pandemic, especially in Germany; remember AstraZeneca and the debate around that vaccine, for example. Populations can be uncertain about things and not well informed. You can easily compare this internationally: which countries were very well informed and had little disinformation, and where disinformation was particularly widespread. You can then look at what criteria allow communication spaces to fill up with disinformation, or conversely ensure that trustworthy information travels from A to B.
How can the healthcare system effectively combat disinformation?
Ugh, where should I start? The hard thing about disinformation is that it’s a very holistic issue, meaning you have to address many things at once. I’ll give you an example: Facebook’s timeline. When I’m looking at my timeline, two important factors determine whether I have a good information space or a bad one. The first is the platform’s algorithm: how it works, what content gets pushed to the top, what gets strongly promoted, and what gets ranked lower.
The second is me, the user, sitting in front of the screen and deciding which channels to follow. Both of these parameters are very important. We know that the platforms’ algorithms aren’t particularly good, and that users’ information literacy isn’t very strong either. The whole thing can only work if we have more regulation: sensible provisions that govern the conditions under which these algorithms are allowed to operate at all.
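To make the point about ranking concrete: here is a minimal, purely illustrative sketch of how a feed that optimizes only for engagement can surface low-credibility content, and how adding a credibility signal changes the ordering. The posts, scores, and the `credibility` field are invented for illustration; no real platform’s ranking works this simply.

```python
# Toy sketch (illustrative only): engagement-only ranking vs.
# credibility-weighted ranking. All data and scores are invented.

posts = [
    {"title": "Miracle cure claim", "engagement": 9.0, "credibility": 0.2},
    {"title": "Health agency guidance", "engagement": 3.0, "credibility": 0.9},
]

def rank_by_engagement(posts):
    # Pure engagement optimization: sensational content wins.
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def rank_with_credibility(posts):
    # Multiply engagement by a credibility score to downrank
    # low-credibility content.
    return sorted(posts, key=lambda p: p["engagement"] * p["credibility"],
                  reverse=True)

print([p["title"] for p in rank_by_engagement(posts)])
# engagement-only: the miracle cure claim ranks first

print([p["title"] for p in rank_with_credibility(posts)])
# credibility-weighted: the agency guidance ranks first (3.0*0.9 > 9.0*0.2)
```

The sketch mirrors the interview’s two levers: what signal the algorithm optimizes for, and what rules constrain it.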
This is ultimately a task for society as a whole, in which all elements of society really have to do their part to build a better and more resilient information ecosystem. That also applies to the healthcare sector and the actors communicating in it. It raises the question of whether they have enough training to disseminate health information, on social media for example. Which actors in the sector might be more likely to spread disinformation? Which groups, perhaps from alternative medicine communities, play a big role in spreading it there? In this regard, many things have to happen at the same time for the information environment to improve.
Alexander Sängerlaub is the Director and Co-founder of futur eins. He takes a holistic approach to digital public spheres and explores how the utopia of an informed society can be achieved. Previously, he helped establish the “Strengthening Digital Public Sphere” department at the Berlin think tank Stiftung Neue Verantwortung, where he led projects on disinformation (“fake news”), fact-checking, and digital news literacy. He studied journalism, psychology, and political communication at the Freie Universität Berlin.