
Can AI-generated content be a threat to democracy?

The wider use of AI in everyday life may lead to a crisis in public awareness and knowledge that could put democracy in danger, a Northeastern researcher warns.

Chatbots and AI agents taking over information fields such as journalism, social media moderation and polling could cause a public knowledge crisis. Photo by Matthew Modoono/Northeastern University

In the not-too-distant future, most of the information people consume on the internet will be influenced by artificial intelligence, a Northeastern expert says.

And while it is impossible to slow the use of AI, it is crucial to understand AI’s limits — both what it cannot and should not do — and to adopt ethical norms for its development and deployment, says John Wihbey, an associate professor of media innovation and technology.

If not, democracy is in jeopardy, says Wihbey, the author of “AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?”

Democracy today, he says, is a complex system of people collectively processing information to resolve problems. Knowledge and information that the public consumes play a key role in supporting democratic life.

Chatbots can simulate human conversation and perform routine tasks effectively, while AI agents are autonomous systems that resolve requests by completing tasks without human intervention. Both, Wihbey says, might soon replace humans in information fields such as journalism, social media moderation and polling.

“As AI systems begin to create public narratives and begin to moderate and control public knowledge,” Wihbey says, “there could be a kind of lock-in in terms of the understanding of the world.”

AI systems and large language models are trained on past data about people’s values and interests, and they generate content from that data. As a result, Wihbey says, they will continuously reinforce past ideas and preferences, creating feedback loops and echo chambers.

This feedback-loop risk, he says, will be a recurring one.
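Wihbey’s feedback-loop dynamic can be made concrete with a toy simulation. The sketch below is purely illustrative and not from the article or his paper: a hypothetical model trained on a snapshot of past preferences over-serves the majority view, a small fraction of users adopt whatever they are served, and the early snapshot gradually locks in.

```python
import random

random.seed(0)

# Each person holds one of two views; True means "view A".
# A hypothetical starting split: 55% initially prefer A.
population = [random.random() < 0.55 for _ in range(10_000)]

for generation in range(10):
    share_a = sum(population) / len(population)
    print(f"generation {generation}: share preferring A = {share_a:.3f}")

    # The "model" is trained on the current snapshot and simply
    # over-serves whichever view is in the majority, a crude stand-in
    # for ranking or moderation systems trained on past data.
    majority_view = share_a > 0.5

    # A small fraction of people adopt the view they are served, so the
    # majority snowballs: the early snapshot of opinion locks in.
    population = [
        majority_view if random.random() < 0.05 else view
        for view in population
    ]
```

Running it, the share preferring view A climbs toward 1.0 without any new information entering the system, which is the kind of lock-in Wihbey describes.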

Humans need to find ways to keep AI from shaping the choices they make and the preferences they express, says John Wihbey, associate professor of media innovation and technology at Northeastern. Photo by Matthew Modoono/Northeastern University

In journalism, Wihbey says, AI might be further incorporated into newsrooms to discover and verify information, categorize content, conduct large-scale analysis of social media and even generate automated coverage of events, including civic and government meetings. 

Entire municipalities or larger regions that lack local news outlets, so-called news deserts, might end up being covered by AI agents, he says.

On social media, Wihbey says, AI moderators whose judgment is conditioned by outdated data and misaligned with the latest human preferences might overmoderate, erasing users’ posts and commentary in what is a vital space for modern human deliberation.

If these AI moderators can’t keep up with the fast-changing environment of human contexts, they may also be subject to feedback loops. Their actions will affect what becomes public knowledge: what humans believe to be true and worthy of attention.

AI-driven simulations in polling could distort results, affecting the conclusions citizens draw. Such warped knowledge will repeatedly influence human preferences and decisions in democratic life, such as what people believe or whom they may vote for, creating recursive spirals.

AI models, Wihbey says, will never be intrinsically capable of accurately predicting the public’s reaction to events or the outcome of an election.

“Some of the research about how AI can serve to simulate human opinion polls shows that this is true where data is not well established in the model yet,” he says. “In political and social life, so much of what is important is fundamentally emergent.

“We don’t yet know what human beings will think or do until, as individuals and as groups, we come into areas of challenge, concern or anxiety, and then we start to make individual and collective decisions.”

Further research, Wihbey says, could extend this analysis to online search and discovery. For example, Google’s new AI Overview feature, which condenses a query into a single response, might lead users to bypass traditional processes of browsing, discovery, deliberation and reasoning.

Given these limitations and the incompleteness of AI models, humans should distinguish between the areas where AI can facilitate collective awareness and the areas they may want to preserve as human-centered zones for independent thinking.

“At this deep level, it’s about human freedom and agency,” Wihbey says. “But I also think it’s just about humans being able to legitimately express new kinds of ideas and preferences that don’t conform to the past.”

Humans need to find ways, he says, to keep AI from shaping the choices they make and the preferences they express.

“If we’re going to respect humans truly,” Wihbey says, “we have to make sure that these models are extremely modest.”

AI chatbots are already mimicking expert authority, he says, and giving answers with a significant degree of confidence, even though the answers are often not correct. 

“I just think that the models need to not pretend to be human experts in their voicing, phrasing, framing and the ways that they go about doing things,” Wihbey says. “AI should not look, feel and behave like human intelligence.”

These are just probabilistic models that pull together the data they have been trained on, Wihbey says.

Governments and large institutions have a role to play, Wihbey says, in preserving democratic values by helping to address the risks of AI. At the same time, there is a danger that governments will also use AI-driven systems for their own objectives.

“Any discussion of AI, public knowledge and democracy must grapple with the wide variation in information environments across the world,” Wihbey says.