Social media platforms have become a common lever for spreading disinformation, notably in today's tense geopolitical context, and thereby jeopardize democracies. Big social media tech companies have a role to play in preventing this.
On March 29, an open call was made during the second Summit for Democracy as part of a broader effort to promote democratic values and counter disinformation.
The Prime Ministers of the Republic of Moldova, the Czech Republic, Estonia, Latvia, Lithuania, Poland, the Slovak Republic, and Ukraine presented a jointly signed letter entitled “An Open Letter to Big Social Media Tech”. The letter calls on big tech companies to act against foreign information manipulation and interference, notably disinformation campaigns that threaten democracy.
“Tech platforms like yours have become virtual battlegrounds and hostile foreign powers are using them to spread false narratives that contradict reporting from fact-based news outlets”
In other words, the letter aims to draw attention to how social media can be used to spread false narratives, destabilize countries, and weaken democracies. To illustrate this, the Prime Ministers condemn how paid ads and artificial amplification on Meta’s platforms are used to “call for violent social unrest, bring violence to the streets and destabilize governments”.
The letter also proposes concrete actions and measures that social media companies could take to prevent the harmful behaviors described above. The eight countries recommend that big social media tech companies:
- Take steps to ensure their platforms do not promote propaganda or disinformation related to war, war crimes, crimes against humanity, or any other type of violence. Their platforms should also not “accept payments from individuals who have been sanctioned for their actions against democracy and human rights”
- Increase cooperation and engagement with a wider range of stakeholders, including governments, civil society, experts, academia, independent media, and fact-checkers. These are essential partners in responding to potential threats. This also means dedicating adequate staff and financial resources to the challenges of content moderation – especially in the field of hate speech, where human review is essential
- Prioritize accuracy and truthfulness over engagement when promoting content. In parallel, the research community should have “free or affordable access to platforms’ data to understand tactics and techniques of manipulative campaigns and hostile actors”
- Address the growing threat that deepfakes and other AI-generated disinformation pose to democracies, especially when produced by hostile foreign actors. Platforms should therefore invest in identification tools to spot deepfakes and automatically generated text
The letter acknowledges that social media companies regularly update their content moderation policies, upgrade their moderation capabilities, apply content labels, or even introduce restrictions. It also highlights that “a consistent global approach to regulation – and self-regulation by big tech – is needed to respond to these issues”.
It is now up to big tech companies to decide whether they want to cooperate with the eight countries. More (European) countries will likely join forces with them to ensure the security of both democracies and citizens.
“We urge big tech companies to join forces with democratic governments and civil society and work together to protect the integrity of information and ensure the security of our societies”