Emil writes*
Today everybody talks about AI, and many students use large language models (LLMs) such as ChatGPT. The wide-scale release of LLMs and generative AI has spurred public discourse about the impacts of AI on society, and especially about how AI might threaten democracy.
AI-generated synthetic content is flooding the public arena with false or misleading information. One infamous example is Trump’s posting of a fake image of Taylor Swift endorsing his election campaign. But generated content is only one example of how AI is threatening our democratic institutions. It is also expected to heavily disrupt the labour market, shift the balance between autocracies and democracies, and enable mass surveillance on unprecedented scales.
One of the core issues is that the speed and scale of new AI tools greatly outpaces governmental oversight and society’s ability to manage the consequences. To better assess the impacts of AI on democracy, let’s turn to Jungherr’s conceptual framework.
The definition of democracy itself might be contested, but it surely involves the idea of a government by the people and for the people. Democratic institutions are supposed to act on behalf of the people through different forms of electoral power delegation. Some people might be representatives, but the overall system is supposed to take everybody (or all eligible members of a state) equally into account.
Based on this, four different levels can be identified on which AI exerts different kinds of impact:
On the individual level, AI impacts both the ability of people to self-rule and the idea that self-rule is superior to other forms of decision-making. Self-rule here refers to the normative idea that all individuals govern themselves together without external interventions. A society without self-rule would, for example, be one ruled by a dictator or by Plato’s philosopher kings. The legitimacy of self-rule rests on the idea that individual citizens can make informed decisions for themselves and their respective communities. AI development calls this idea into question in both directions. On the one hand, AI systems can lower the threshold to information access. On the other hand, AI-generated content can interfere with informed decision-making. AI thus directly affects the informational foundations of self-rule. Beyond this, rapid AI development might lead people to question whether self-rule is the best way to govern society.
On the group level, AI impacts equality. While in reality no democracy achieves true equality, it is nevertheless a foundational ideal. AI systems can be (mis)trained to reproduce societal biases or create new ones. The availability and kind of training data play a central role in this creation of biases. After all, only what was measured in the past can be extrapolated into the future. Beyond this, the labour-market impacts of AI technology can greatly increase or decrease equality, depending on how the gains from AI advancement are distributed. Here, economic inequality and subsequent political disadvantage go hand in hand.
On the institutional level, AI-fuelled misinformation again plays a big role. AI systems can influence elections like never before. With such powerful systems, a small group can try to game the institutional system meant to represent the will of the people. Another threat is that AI could undermine the public perception that election results are uncertain. The idea here is that a degree of institutional uncertainty is needed so that parties compete genuinely and the public trusts the electoral process.
Lastly, on the system level, AI can reshuffle the relationship between democratic systems and other systems of governance. Here, the different ways in which extensive AI systems might be developed and deployed matter greatly. AI can be integrated well into democracies, for example by making democratic bureaucracies more efficient. However, autocratic systems can benefit immensely from the greater leeway they have concerning the privacy rights of their citizens. Autocratic leaders can extensively collect all the data they want and then use AI systems to leverage this data to tighten their grip on power and control.
Bottom Line: AI development will disrupt and impact democracy in many ways. Studying these impacts is therefore vital. Untangling the different dimensions of this impact is a first and essential step for future research towards a better understanding of AI’s impact on democracy.
* Please help my Real Donut Economics** students by commenting on unclear analysis, alternative perspectives, better data sources, or maybe just saying something nice 🙂
** Why “Real”? In short, because (a) Raworth’s claims to being a “21st century economist” denies that all of her ideas were presented by others in the 20th century and (b) she presents no viable mechanisms (besides “be nice”) for achieving equality and sustainability. My students are more realistic. In long? Read this.
I really enjoyed your analysis of AI’s effects on society, and your use of a hierarchical model emphasises the scales on which AI has become integrated into our lives. While I agree with all of your points about how AI has the power to destabilise democracy, I wonder how our understanding of the risks AI poses to democracy could be used for the converse purpose: to democratise AI. For example, there is an ongoing discourse about integrating AI into Global Citizen Assemblies, where AI could potentially facilitate said assemblies. The democratisation of AI in this sense could bring greater inclusivity, efficiency and accountability to citizen assemblies. Could these democratising effects of AI offset its destabilising effects? I am excited to read your paper and to see whether, in spite of the risks, there is room for greater incorporation of AI into governance institutions, for example through the aforementioned use in citizen assemblies.
Let’s just hope that those AIs are not programmed to give a biased summary! (Black Mirror episodes everywhere…)
Glad that you enjoyed my post, Sara. As you pointed out, AI systems could help significantly with facilitating broader and more inclusive (digital) citizen assemblies. Two of many cool projects leveraging AI for (e-)participation are citizens.is and make.org.
However, David has a point. In particular, providing good moderation and generating unbiased summaries is a hard problem, and progress on that front has been mixed so far. It is hard because a lot of good data and training is needed, making it costly and time-intensive.
To my knowledge, the most advanced public & transparent attempt to create a moderation bot is the Kosmo project. If you are interested:
https://democracy-technologies.org/ai-data/kosmo-ai-social-moderation/