
AI politicians: the future of democracy or a threat to freedom?

2024 is a big year for democracy, with over two billion of us voting in elections in the US, India, the EU, the UK and many other countries and territories.

But if you’re heading to the polls this year, would you consider voting for an AI candidate?

Or how about letting an AI choose the best candidate to lead you?

And have you considered how far AI might sway voters’ choices and, ultimately, the outcome of the vote?

These are all ways AI is playing an increasingly prominent role in elections, democracy and governance, just as it is in every other area of life. So let’s explore some of the potential implications for the events of 2024 – the year AI and elections collide in a big way!

Virtual politicians

Virtual politicians, as well as political parties powered by AI, are now appearing on ballots around the world.

In Brighton, UK, citizens will have the opportunity to vote for “AI Steve”, an avatar created by businessman Steve Endacott. Voters can chat and interact with Steve and ask him about his policies on issues ranging from local housing to LGBTQ rights.

Steve will then formulate policies based on these interactions as he tries to represent the views and values of his (potential) constituents.

Understandably, not everyone is convinced, with one local resident telling Reuters: “AI and politicians have one thing in common: they cannot be trusted.”

Steve is not the first virtual politician. In New Zealand, SAM, created by software developer Nick Gerritsen, was built to answer voters’ questions on social media. And Alice ran against Vladimir Putin in the 2018 Russian election with the mission of creating “the political system of the future, built exclusively on rational decisions made on the basis of clear algorithms.” Alice seems to have disappeared since then.

In Denmark, another experiment in combining AI and democracy was led by the Synthetic Party, founded by the philosopher Asker Bryld Staunæs. The party created policies via machine learning based on texts created by Danish fringe parties since the 1970s, with the aim of creating a party that would represent the views of the 20 percent of Danes who do not vote.

However, the companies whose technology makes all of this possible have proven to be a stumbling block for some AI politicians. ChatGPT creator OpenAI recently banned an AI candidate built on its technology from running in an American mayoral election, stating that it violated the company’s usage policies by participating in political campaigning.

The influence of AI algorithms

Even if we wouldn’t choose to vote for an AI politician, would we consider letting AI choose who to vote for by deciding which human candidates best represent our views?

Or, to take a more sinister view – could this already be happening without our knowledge?

This is the view of Yuval Noah Harari, who has claimed that AI is already instrumental in influencing our choices because of the pervasive algorithms that serve us content on social media. These algorithms, designed to keep us engaged on the platforms, can feed processes like confirmation bias, subtly shaping our thoughts and actions in ways that can have a significant impact on our decision-making at the ballot box.

In his book Homo Deus, Harari even suggests that perhaps AI should vote for us because of its ability to deeply understand our beliefs and preferences and then match us with the parties and candidates most likely to make us happy.

Deepfakes and disinformation

We have seen algorithms used to deliberately spread misinformation in previous elections. But in 2024, more people than ever have access to powerful tools and technologies that can be abused in this way.

Deepfakes in particular – synthetically generated video and audio that can mimic a real person – pose a real threat to democracy. Very convincing videos of politicians, including Joe Biden and Rishi Sunak, have already spread far and wide. Some are humorous, ridiculous and most likely harmless, such as one of Nigel Farage blowing up Rishi Sunak’s Minecraft house. But there is clearly potential for damage to politicians’ reputations, especially if the content is targeted at those with low levels of technical literacy.

Efforts to mitigate the threat involve technology, such as tools that can detect deepfakes, and legislation – China’s recently introduced AI laws, for example, make it a crime to impersonate someone. However, education will likely be the most critical measure: making sure the public is aware of what can be done with AI, and that not everything they see online, even in videos, is true.

AI and the future of democracy

As AI continues to become more sophisticated and pervasive, its potential to affect and perhaps compromise democracy will only increase.

While we may not yet be ready to vote for AI politicians, the concept serves as an interesting experiment in the power of technology and the direction society might be heading.

As in many other professions, AI will surely be used by politicians and candidates to make their jobs easier. It will help them make data-driven decisions that align with the interests of those they represent, analyze and draft proposals, manifestos and legislation, and create personalized messages that target individual voters more effectively.

And as voters, we will use it to gain insights into how well parties and politicians live up to the standards we expect from our elected officials.

But there are also important ethical concerns that will need to be addressed around accountability, transparency and the need for robust legislative frameworks to prevent the spread of misinformation.

By addressing these issues now, we can ensure that AI develops in a way that is beneficial to the concept of democracy as a whole, encourages politicians to take actions we agree with, and helps us all make better and more informed political choices.
