Introduction

Elections, the cornerstone of democratic societies, have always been susceptible to influence. From the era of traditional newspapers and broadcast channels to the intense campaigning tactics of the modern age, the threat of misinformation has consistently posed a risk, potentially swaying public opinion and electoral outcomes.

The emergence of social media platforms like Facebook in 2004, followed by Twitter and YouTube, opened a new chapter in this ongoing narrative, introducing an era of unregulated digital spaces where misinformation could flourish unchecked. Twenty years later, we are still navigating this landscape, albeit with some regulatory measures now in place.

The Rise of Digital Influence 

Perhaps the most notorious example of this phenomenon was the role Cambridge Analytica played in influencing key political events such as the 2016 U.S. Presidential Election and the 2016 Brexit Referendum. The company’s exploitation of personal data to sway voter behaviour highlighted the ominous potential of unregulated technological influence on democracy.

As we stand on the cusp of 2024, a pivotal year that will witness crucial elections in both the USA and the UK, the spotlight turns to artificial intelligence (AI) and its potential impact on our democratic processes. The similarities between the early days of social media and the current state of AI development are striking. Both arenas have experienced rapid growth with minimal regulatory oversight, creating fertile ground for the spread of misinformation. A significant difference, however, is that AI platforms remove much of the human effort and manual process involved, enabling misinformation to spread faster and at greater scale.

Deepfakes and Democracy 

AI technologies, particularly generative AI, deepfakes, and synthetic media, can inundate our information ecosystem with fake audio, video, images, and text that are convincing to an unprecedented degree. The recent disruptions caused by deepfakes in the Slovakian and Bangladeshi elections, along with a deepfake audio clip of Keir Starmer in 2023 and deepfakes of US presidential candidates in 2023, offer a grim preview of the challenges we may face in the forthcoming elections.

The insights of Rory Stewart, a keen observer of political and social trends, are especially relevant in this context. He has often stressed the importance of truth and integrity in public discourse, highlighting the risks of allowing technology to compromise these foundational values.

As we navigate these turbulent waters, it is critical to acknowledge both the transformative potential of AI and the need for robust regulatory frameworks to ensure its ethical application. Our experiences with social media have shown that governmental and regulatory bodies often lag behind technological advancements. It is imperative that we learn from these precedents and proactively address the challenges posed by AI, ensuring we are not left playing catch-up when it is already too late.

In the battle against misinformation, organisations like Full Fact play an indispensable role, but as individuals, we also carry the responsibility to critically assess the information we consume and share. Preserving our democratic values in the age of AI will demand vigilance, education, and a steadfast commitment to truth.

Upholding Democratic Values in the Age of AI

In conclusion, as we stand at the threshold of AI’s integration into our daily lives, let us heed the lessons of the past and embrace a future where technology serves to enhance, rather than undermine, the foundational principles of democracy.

Join us at 4OC in shaping a future where AI strengthens, not threatens, our democracy. Give us a shout if you want to collaborate on innovative solutions.