An action plan for tackling disinformation
How can we do more to stem the tide of disinformation and promote a thriving and inclusive digital space? Stephanie Diepeveen from the ODI’s Digital Societies Initiative explains why banning content producers is not the answer.
The speed and global spread of disinformation related to the Russia-Ukraine war suggest a new and heightened global challenge for those seeking to preserve trust in factual information. A single conspiratorial post by an individual can become a key node in a viral campaign, amplified by multiple accumulating factors – including Russian state media and individual influencers, and even the act of banning the post’s creator, which itself draws attention to the conspiracy and its author.
Responses designed to quickly shut down the spread of disinformation have added to the problem. The EU’s decision to ban Russian broadcasters Sputnik and RT was followed by ‘reciprocal’ Russian bans on foreign media, including the BBC and Deutsche Welle. Bans can result in more unattributed sources of disinformation and move its proponents onto other platforms.
Even fact-checking can become a weapon for spreading false information, as demonstrated by the website ‘War on Fakes’, which has been found to disseminate Russian propaganda. Clearly, there is no silver bullet to stem the tide of disinformation.
In this blog, I reflect on the opportunities that exist to contribute in positive ways to creating an open, trustworthy and inclusive online information environment. Building on ODI’s research on Politics and Governance, I identify three steps towards navigating the global rise of disinformation:
- Accurately identify the problem
- Pay attention to global diversity
- Develop proportionate responses that preserve an open and democratic information environment
Step 1: Identify the problem
Disinformation differs from other forms of misinformation in that it is intended to deceive for a wider political end. Sometimes, this political end is to rally support for a particular actor, but not always. Around the Russia-Ukraine war, disinformation campaigns have generated scepticism about factual accounts of events, in line with longer-running patterns in Russian disinformation campaigns.
Rather than promote one perspective, disinformation campaigns disseminate a multitude of claims that cloud the information environment. This makes it difficult, if not impossible, for individuals to trust any account about events, and can result in greater mistrust and apathy about political events.
There has been a tendency among civil society actors and tech companies to respond to disinformation by correcting or removing content, or suspending its authors. However, this does little to address the confusion that has been generated by the presence of many competing claims.
Addressing this problem requires a different focus: one that not only removes content after it appears but also fights confusion by investing in more persuasive and accurate narratives.
Step 2: Recognize the problem is not the same everywhere
Online disinformation campaigns can have a global reach, but they are experienced differently across contexts. Uptake of disinformation around the Russia-Ukraine war varies according to geography, historical context and language.
Russian disinformation campaigns have found more receptive audiences in countries with greater enmity towards western states and sympathies to Russia – for example, those with strong anti-colonial and/or pro-BRICS ties like South Africa and India.
Key influencers within specific contexts can shift a disinformation narrative from marginal to more mainstream. Paying attention to specific political contexts and influential individuals can allow for a more tailored understanding of the uptake of disinformation, and a more targeted response.
Step 3: Devise proportionate interventions
There are clear risks in confronting disinformation campaigns. Arbitrary decisions about content – whether Meta allowing some instances of hate speech towards Russian soldiers, or the EU banning RT and Sputnik broadcasts – can generate a backlash and compromise the openness and inclusiveness of the online information environment. This can move us even further from the vision of a democratic online information environment, one already threatened by the presence of disinformation.
We therefore need to develop mitigation efforts that respond to the short-term harms of disinformation without having a disproportionate impact on an open, inclusive and trusted internet. Considering the longer-term effects of an intervention can inform these decisions and establish a reference point from which to assess its positive and negative impacts.
Disinformation campaigns around the Russia-Ukraine war have opened a Pandora’s box that defies any attempt to return the information environment to what it was before. Mistrust and uncertainty in information, once present, are hard to reverse.
There are possibilities for moving forward. Disinformation cannot be effectively addressed through initiatives that only react to content when it appears online. The focus must be on the wider information environment and strengthening its resilience to disinformation and its effects.
This includes a renewed emphasis on the key role that independent journalism can play in providing compelling accounts of events on the ground, as well as efforts to build local trust in accurate information and to improve media literacy so that individuals can better identify disinformation.
Efforts to rebuild public trust in the information environment, and to protect and strengthen information channels that are committed to factual accounts can help to preserve openness, inclusivity and trust online beyond the current ‘information war’.
Stephanie Diepeveen is a Research Fellow at ODI and leads on its Digital Societies Initiative