Rewiring democracy: What impact will AI have on our country's future?
(Lightly edited for clarity)
INTRO: “A lie travels halfway around the world before the truth can even get its shoes on.” That quote feels almost quaint in the age of artificial intelligence, where misinformation and disinformation can circle the globe in seconds – long before facts even know they’ve left the building.
Artificial intelligence is rapidly transforming the way we interact with information, but its growing influence raises critical questions about its impact on democracy. From the spread of misinformation to the rise of deepfake technology, AI is reshaping the way we process and trust information.
Today, I’m joined by CSU Associate Professor Hamed Qahri-Saremi, a computer information systems researcher whose work explores the intersection of AI, online social platforms and human behavior. We’ll discuss the benefits of this powerful tool, as well as how it may impact democratic processes, erode public trust in institutions and influence one of the most human elements of all – our sense of empathy.
HOST: Your research looks at our interactions with online social platforms and artificial intelligence systems. That's your big focus. There's a lot of concern regarding AI's potential influence, particularly through misinformation, which you also study. What are the biggest concerns regarding AI's impact today? And how might we already be seeing these challenges play out?
QAHRI-SAREMI: With respect to misinformation or disinformation, there are positive and negative sides to AI, to its application and its consequences in that space. On the positive side, there is a shift that started on X, formerly Twitter, a few years back, and that Meta implemented as well in 2025. Platforms are moving away from third-party fact-checkers toward a more user-centered approach to identifying and managing misinformation content, in the form of community notes.
AI can really help identify false or misleading content faster than anything else. There is also research on using AI to support lateral reading, presenting relevant facts to users while they are engaging with claims on social media, to help them identify the elements of falsehood in a claim.
So, these are real positives. These are things that you need a really powerful tool like AI to make happen. At the same time, AI itself is at the center of concern when it comes to generating misinformation and disinformation on social media. AI is designed to generate human-like content, so the more these models advance, the better they get at it, which makes misinformation and disinformation harder to detect; even experts have trouble identifying it. So, there are positives and negatives. Definitely there are significant concerns, but there is also significant potential.
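As an illustration of that detection side, here is a minimal sketch, not anything Qahri-Saremi describes, of scoring posts with an off-the-shelf zero-shot classifier from the Hugging Face transformers library; the labels and threshold are illustrative, and any real system would pair scores like these with human review.

```python
from transformers import pipeline

# Zero-shot classification with a general-purpose NLI model; a production
# system would use a classifier fine-tuned on fact-checked claims instead.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

posts = [
    "Scientists confirm drinking seawater cures the flu.",
    "The city council meets Tuesday to vote on the transit budget.",
]

# Illustrative label set; real deployments define labels per claim domain.
labels = ["misleading health claim", "routine factual statement"]

# Route high-scoring "misleading" posts to human fact-checkers rather than
# removing them automatically. The 0.8 threshold is an arbitrary example.
for post in posts:
    result = classifier(post, candidate_labels=labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_label == "misleading health claim" and top_score > 0.8:
        print(f"Flag for review: {post!r} (score {top_score:.2f})")
```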
HOST: As you mentioned, AI has become very human-like. It responds in ways that are hard to distinguish from human-generated responses. It's also becoming deeply embedded in our culture. This past weekend, the Super Bowl was filled with ads for AI tech, as well as ads that used AI to create completely realistic footage. I mean, the Jurassic Park commercial could have been right out of the movie. Now, that's all in fun, but as AI technology continues to evolve, and as we see more things like deepfakes, where it's nearly impossible to tell what's real and what's not, what potential impacts could we see in the future?
QAHRI-SAREMI: On democratic processes, I can think of three ways AI can have influence. One is definitely around disinformation and misinformation. Prior to generative AI tools, which really took off in 2021 and 2022, we already had concerns about misinformation and disinformation, especially the conspiracy theories that were influencing democratic processes and elections in the U.S. Things like Pizzagate had bad actors, such as QAnon, creating that content at their center. Now, given that AI can create more human-like content, it takes this to the next level.
One big concern is the generation of misinformation using AI and how hard it can be, even for experts, let alone ordinary citizens, to detect. It's not just that the information is human-like; the misinformation can be generated in very convincing ways.
The research on conspiracy theories shows that the most effective misinformation claims are usually the ones that have elements of truth in them. AI can be used very effectively to generate content that is convincing and therefore harder for ordinary citizens to identify. Another area of concern is foreign influence: adversarial countries that can interfere with our democratic processes.
Again, this is a pattern we've already seen for several years now, on social media platforms in particular, where AI-generated conversational bots are part of disinformation campaigns run by adversarial countries, campaigns run by Russia and by China, to influence our elections and democratic processes, or at least to manipulate citizens' opinions through bots that run or steer conversations on social media.
We have already seen generative AI being used by these parties, by these countries, to create bots that are even harder to identify as bots rather than real users. They create very realistic personas and accounts on social media platforms. We are already seeing these patterns with AI, and moving forward it will probably get even worse as AI advances and it becomes even more difficult to tell whether content is human.
HOST: With AI making it easier, as you said, to generate misinformation and disinformation, how do you think that affects public trust in traditionally reliable institutions, such as journalism? Even legacy media outlets are having trouble with this now.
QAHRI-SAREMI: Some of it goes beyond just AI. It's the issue of misinformation and how we should deal with it: How can we empower and enable users not to fall for misinformation? When you look at the literature, a variety of techniques have been used, from filtering and censoring misinformation to labeling it and trying to make clear to the user that it is false.
We saw that quite a bit during COVID, when misinformation about COVID circulated and labeling was used by Twitter at the time. Not many of these techniques have proven effective. Censoring and filtering may actually create concerns about freedom of expression, and there is research showing that users do not like to be told what to believe and what not to believe.
Many users may even know that the information they are engaging with, or even sharing, may not be 100% true, or at least they are not sure about its truthfulness, but they still choose to share it. Not necessarily because they are bad actors, but maybe because they want others to see that perspective, because it's an interesting angle on a topic for them. That's essentially the forefront of research into misinformation: How can you empower users, at the collective level, to reduce the propagation of false information?
When we bring AI into it, it gets more complicated, because this content is harder to detect. When it comes to trusting institutions, that's a bigger conversation. You can think about why users have lost trust in institutions or news agencies, but one important thing is that these institutions be careful not to fall for falsehoods generated by AI. We have seen examples of AI hallucinations finding their way into articles published by some of these agencies, or being repeated by experts on panels and so on.
That shows you two things. One, when experts and news agencies, whose job is really determining the veracity of information, fall for a falsehood generated by AI, that shows you how complex the situation is. Two, that can really damage trust in these institutions.
Now, there are other elements. I don't think it's just AI. The political environment can influence it. The polarization we're seeing definitely has a lot of influence, probably even more than AI, along with where citizens see that institution on the political spectrum. But AI, unfortunately, so far is increasing that polarization because of its effectiveness in generating misinformation.
HOST: How do these nuances challenge our ability to detect misinformation and disinformation?
QAHRI-SAREMI: For the ordinary user, that's the main challenge. You can look at it from different perspectives. In psychology, dual-process theories hold that we make many of our decisions based on information cues, without engaging in systematic, thoughtful, deep processing of the full content, mainly because we cannot afford to deeply process every single piece of content we encounter in our daily lives. That's what we call heuristic processing.
When a claim includes elements of truth, usually very obvious truths or topics that matter to people, you can imagine that for an ordinary citizen who may not have much information literacy for assessing such a claim, identifying those elements of truth might be enough to persuade them that this is a credible piece of information.
That makes it very difficult, because much of the new content being presented, the falsehoods, is new to the user and hard for them to assess. But the elements of truth are things users have probably seen before. Those, therefore, become the information cues on which users will most likely base their decision.
HOST: We've talked a lot about the negative influences, but what are some of the positive impacts AI could have on democratic systems and governance?
QAHRI-SAREMI: AI can definitely facilitate faster, more effective identification of misinformation and disinformation. Given the pace and scale at which AI-generated content and AI-generated misinformation are being produced, it takes an AI tool to help us identify them. AI can also help with broader democratic processes.
With respect to the security and reliability of elections, for example, AI tools can be used to identify anomalies in elections better, or more quickly. That may increase citizens' trust in election processes and help secure elections. AI can also improve government responsiveness to citizens' needs, whether that means responding to an individual citizen or identifying citizens' needs from the communications that come in.
Before AI, dealing with that load and volume of data, much of it unstructured, and identifying it, summarizing it, finding patterns in it and so on, would have been very difficult. Now AI tools give governments the opportunity to be more responsive, faster in their response, and maybe more efficient in responding to citizens' needs.
It can also help with better decision making at the government level, I believe. Take Congress, for example. For many of the policies and laws Congress considers, one big question is about their consequences. By making better use of the data that's available, or maybe simulating some of those potential consequences, AI can potentially help lawmakers identify the main consequences they can expect from a law, and therefore improve decision making about the laws that are passed.
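To make the election anomaly-detection idea above concrete, here is a minimal sketch with entirely hypothetical numbers, not a method from the interview: flagging statistical outliers in precinct-level turnout with a simple z-score. Real election audits use far more sophisticated techniques, and an outlier is only a prompt for investigation.

```python
from statistics import mean, stdev

# Hypothetical precinct turnout rates (share of registered voters who voted).
turnout = {
    "Precinct 1": 0.62, "Precinct 2": 0.58, "Precinct 3": 0.64,
    "Precinct 4": 0.61, "Precinct 5": 0.97, "Precinct 6": 0.59,
    "Precinct 7": 0.63,
}

mu, sigma = mean(turnout.values()), stdev(turnout.values())

# Flag precincts more than two standard deviations from the mean for
# human review; a flag is a reason to look closer, not proof of a problem.
for name, rate in turnout.items():
    z = (rate - mu) / sigma
    if abs(z) > 2:
        print(f"{name}: turnout {rate:.0%} (z = {z:+.1f}) flagged for review")
```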
HOST: You're currently also researching AI's impact on human empathy. How do those two things intersect?
QAHRI-SAREMI: That's ongoing research that I'm working on. We are looking at morally sensitive decisions, decisions where you can end up discriminating against certain people. The decision itself might appear rational, but it's still discrimination. For example, relying on assumed or presumed limitations of a certain group of people when considering them for a specific job, despite the fact that they might otherwise look qualified.
One thing we know is that AI cannot process emotions. It expresses emotions, but these are just expressions; these are words. There is no meaning, in the sense of experiencing the emotions behind them, on the AI side. Our argument was: OK, given that one thing AI is really good at is changing the tone and framing of a conversation, the same output can be presented in many different ways.
One of the common applications of conversational AI such as ChatGPT is just to have your email rewritten a little better: change the tone, make it more concise, make it polite, make it informal, et cetera. This is really AI communicating the same message using many different framings. So, what we looked at was this: in a scenario where the AI is recommending an immoral decision, a decision that clearly discriminates against a certain group of people, if that message is presented using emotional, more empathetic language versus not, how does that affect the user's reaction to it?
The initial findings show that once you use this empathetic language, where the AI is just expressing empathy but is still recommending discrimination, users become a little numb to that discrimination and accept the decision much more than when it is not expressed with empathy. In fact, when there is no empathy, people are averse to the decision. But when we introduce the empathetic language, a significantly greater number of users accept the decision, when otherwise they would not.
Then we tested a few interventions around the AI, such as improving the explanation of the logic underlying the decision. One thing we found was that the empathetic tone is strong enough to be effective regardless: AI can recommend discrimination in language that users will accept.
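To illustrate the shape of such a comparison, here is a sketch with made-up numbers, not the study's actual data or analysis: acceptance counts under neutral versus empathetic framing in a between-subjects design, compared with a chi-square test.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of participants who accepted vs. rejected the same
# discriminatory recommendation under two framings (100 people per condition).
#                      accepted, rejected
neutral_framing    = [34, 66]
empathetic_framing = [58, 42]

# Test whether acceptance depends on framing.
chi2, p, dof, _ = chi2_contingency([neutral_framing, empathetic_framing])
print(f"Acceptance rose from 34% to 58% under empathetic framing "
      f"(chi-square = {chi2:.1f}, p = {p:.4f})")
```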
Overall, that's a little concerning. That is the strength of these tools: they can change the tone very effectively. But at the same time, we as users have a weakness there. Once emotional or compassionate language is used, we lose sight of the fact that the recommendation at the core of it is still the same recommendation. It's still recommending discrimination.
HOST: That's terrifying that they can have that kind of influence where you might not even realize you're being influenced.
QAHRI-SAREMI: One way we think about it is the role of language. Language holds such a central place, plays such an essential role, for us as humans. Our consciousness is directly linked to the power of language. We really create concepts, create things, in language. But for us, language is the composition and expression of an experience, of processing that actually happens. That's the meaning we give to language.
If somebody expresses empathy, we see that as a person really experiencing that empathy and therefore expressing it in language. When it comes to AI, a very important distinction, one that should be part of the AI literacy users are educated in, is that this connection does not necessarily exist, especially in an emotional, empathetic conversation, or even a moral or ethical conversation.
It's not that AI, by nature, has any moral or ethical standards. But it can speak to them very well; it uses the language around them. So, the meaning of language when it comes to AI, the meaning that we as humans give to that language, is different. And that's the important distinction that should be made.
HOST: We have so much information coming at us in general. We're expected to analyze a lot of information very quickly. We're drinking from the fire hose. How can we address the limitations that we have in discerning what's real and what's not?
QAHRI-SAREMI: There are different things that can be considered. One is the whole issue of AI literacy, educating the user. That's really at the core of our mission as an educational institution: to educate our students, and the community generally, about what AI is and what its potentials are, and at the same time how they can and should use AI to their benefit while avoiding the risks associated with it.
One challenge for us is that all of us are using it, but how much AI literacy do we have? I think it's the bare minimum. Even experts don't necessarily have much AI literacy; they're just familiar with AI algorithms. That's definitely a big area to focus on, especially for an educational institution.
HOST: What steps should society take to address AI's influence as far as laws, regulations? What kind of policies? What can we do in a larger role?
QAHRI-SAREMI: One area that I think is proving effective is watermarking AI content. Laws that require AI content generators or platforms to watermark AI content help users understand which content is coming from AI generators and which is not. The EU has an AI policy that requires identification of AI content on platforms. California has a bill, supported by tech companies and many AI companies, including OpenAI, that essentially requires watermarking AI content for users so that they understand the source of the content. That may help users treat the content differently based on different criteria.
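As a rough illustration of how text watermarking can work, here is a heavily simplified sketch of the "green list" idea from the research literature, not the EU's or California's actual scheme: a generator biases each word choice toward a pseudorandom set seeded by the preceding word, and a detector checks whether that bias is present.

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Pseudorandomly assign each word to the "green" half of the vocabulary,
    # seeded by the preceding word. A watermarking generator would prefer
    # green words; ordinary text lands in the green set about half the time.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    return sum(is_green(a, b) for a, b in pairs) / max(len(pairs), 1)

# Unwatermarked text scores near 0.5; output from a generator that
# consistently favored green words would score well above that.
sample = "the quick brown fox jumps over the lazy dog"
print(f"green fraction: {green_fraction(sample):.2f}")
```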
Social media has played a huge role in increasing the overload of information we face every day. It becomes very difficult to identify what is what, which content is coming from AI, and how much AI was used. Users can be empowered, and they should not be left on their own.
HOST: It's amazing how it's so powerful both for good and for not so good.
QAHRI-SAREMI: There's one interpretation, I think it was Mark Zuckerberg, if I'm not mistaken, who said it in the context of social media: these are amplifiers. Technology just reinforces things. So, you have good and bad, and AI tools can make the good much, much better and stronger, and they can make the bad really terrible.
Prior to AI, we had conspiracy theories, we had misinformation, we had disinformation, but AI has shown the potential to make them much worse. At the same time, AI provides a lot of potential. That's a paradox of technology. For me as a technology scholar, what makes it very interesting is that it's not just positive or negative; it's always a paradoxical phenomenon. As a researcher, I think our focus is on improving the benefits while either restricting the negatives or at least helping users avoid them as much as possible.
OUTRO: That was CSU Associate Professor Hamed Qahri-Saremi speaking about the potential influence of artificial intelligence technology on democracy. I’m your host, Stacy Nick, and you’re listening to CSU’s The Audit.
