
Yvonne Eadon

Assistant Professor

Misinformation can cause individuals to feel fear or anxiety and, in some cases, lead them to mistrust multiple sources. It even has the power to impact an individual’s personal life. Much of its day-to-day spread now happens through social media, on topics ranging from medical conspiracy theories, such as those about vaccines, to political manipulation and election interference. With the upcoming presidential elections, it is crucial to understand the role misinformation plays and how to avoid it.

UKNow interviewed Yvonne Eadon, Ph.D., an assistant professor in the School of Information Science at the University of Kentucky College of Communication and Information, whose research assesses the impact of online disinformation and conspiracy theories. To learn more about how to manage the effects that false information can bring, dive into the Q&A with Eadon below.

UKNOW: What made you interested in researching misinformation?

Eadon: I started my Ph.D. program in the fall of 2016, a tumultuous time in American and British politics that was rife with online misinformation. This was the first time that we recognized, as a society, the deleterious effects that the spread of misinformation could have on democratic functioning. I became really interested in misinformation broadly, and conspiracy theories specifically, because of this. Once you get into it, pinning down the exact truth when faced with the unknown and the unknowable is more difficult than it seems it should be.

UKNOW: How do emotions like fear and anxiety contribute to the spread of false information?

Eadon: When you encounter something online that makes you feel anxious or fearful and prompts you to change your behavior, be that how and what you eat or, in other cases, how or whether you vote, consider a few things. First, what about this is making you anxious? Is there something about it that taps into a larger anxiety that you have? Second, what or who is the source? Are they reputable and trustworthy? Third, do they stand to benefit or profit from your believing what they are saying? And fourth, is there a group, culture or identity that is being painted as the “villain” in the narrative you are encountering?

UKNOW: What can individuals do to manage the emotional distress that comes from disinformation?

Eadon: If you are feeling distressed because of disinformation you have encountered, I recommend taking a step back from whichever platform you may have seen it on. Disinformation is designed to heighten your emotions and platforms like TikTok, X and Instagram are designed to keep you using them for as long as possible. Taking a step back and trying to do something else — exercising, reading, watching a TV show or a movie or even doing chores — can get you out of that heightened emotional state.

It can also be highly distressing to encounter disinformation narratives from a family member who has fallen down a disinformation rabbit hole. Arguing, shaming or attempting to “debunk” these narratives is often unsuccessful and can cause your loved one to become even more attached to their beliefs. Validating the emotions they’re feeling, like fear, anxiety, anger, suspicion and overwhelm, is a good first step. Identifying with them — “I’ve believed things I’ve read on the internet too that turned out to be false; it’s nothing to be embarrassed about” — is also a good idea. Approach conversations with respect for their autonomy and intellect, first and foremost, and by trying to understand where they’re coming from. There is no guarantee that this will get them out of the rabbit hole, but having level-headed, respectful conversations is always a good idea. This can also be very emotionally taxing — so walk away when you need to.

UKNOW: How is misinformation tied to losing trust in something or someone? Do you believe there is a way to rebuild trust once it has been lost due to misinformation?

Eadon: The appeal of misinformation often derives from a loss of trust in our institutions — failures of the government, the health care system and the media (among others) to take care of and inform the U.S. citizenry have contributed to waning trust in powerful centralized entities. There can be an appeal to absorbing information from a random TikTok creator over The New York Times. In fact, misinformation can even come directly from your friends. For people exhausted by institutional failures, populism becomes increasingly appealing in all senses. Restoring trust in institutions is a complex undertaking, but broadly, these systems and institutions need to prove themselves to us again. We need to feel represented in our government, we need adequate and affordable health care, and entrenched systems of oppression need to be dismantled.

UKNOW: Which media platform do you think is the most common source of misinformation?

Eadon: Misinformation can be found on all platforms, but it is especially rampant on platforms with little to no content moderation in place (X and fringe, far-right platforms like Gab). At the same time, big companies like Meta also struggle with this. Facebook has been inundated with AI-generated images in the last couple of years. Be wary of misinformation even when using platforms that feel safe and reliable, including Google.

UKNOW: What is your insight regarding using Artificial Intelligence (AI) platforms to find information? Do you think these platforms are more susceptible to spreading misinformation?

Eadon: I wrote a paper with Dr. Francesca Tripodi at UNC Chapel Hill (currently in the process of getting published), which addresses this very question. Generative AI platforms like ChatGPT are so powerful because they use Large Language Models, or LLMs, to generate content. LLMs process extraordinarily large corpuses of data, generating text and images probabilistically — meaning, in the case of text generation, they predict which words and phrases are most likely to go together based on the data they are drawing from. This means that biases and falsehoods that are baked into the training data — which, for ChatGPT and other generative AI chatbots, is essentially all the text on the Internet — show up in the AI-generated content.
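The probabilistic word-prediction Eadon describes can be illustrated with a toy bigram model — a drastically simplified stand-in for a real LLM, using a made-up two-sentence corpus. The point it demonstrates is the one in the paragraph above: the model's predictions are just statistics of its training text, so whatever patterns (or biases) the text contains are reproduced in the output.

```python
from collections import Counter, defaultdict

# Toy "training data"; a real LLM trains on a vast share of the internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Estimated probability of each candidate next word, from raw counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the corpus contains "cat" twice and "mat"/"fish" once each,
# so the model rates "cat" the most likely continuation.
print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A real LLM predicts over subword tokens with a neural network rather than counting word pairs, but the principle — pick the statistically likely continuation, not the verified-true one — is the same, which is why falsehoods in the training data can resurface in generated text.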

Generative AI chatbots can also “hallucinate,” creating text divorced from context that results in false answers to questions. Hallucinations can be intrinsic, in which outputs contradict the source training data, or extrinsic, in which outputs are unverifiable because they do not exist in the training data. LLMs also tend to prioritize existing information over newly introduced corrective data, potentially worsening the problem. In May of this year, people who Googled “how to make a pizza” were told by Google’s AI Overview feature (which now appears at the top of most search engine results pages) to use glue to help the cheese stick to the dough. Though this is an extreme example, it’s a good idea to fact-check information you get from an AI chatbot, as well as from AI-generated summaries at the top of search engine results pages.

UKNOW: Are there resources to determine if a photo is AI-generated?

Eadon: There are a few techniques for determining whether or not an image is AI-generated. Some techniques that have been circulated, like looking at hands, feet, shadows and other details that can be “off,” are rapidly becoming less reliable as AI image generators get more and more advanced. At this particular point in time, however, zooming in on an image can still reveal strange shadows or overlapping elements that are not physically possible. However, I caution against using this as your only method. If you are looking at a given photo and things feel “off,” it does not necessarily mean that they are. The closer we look at a given piece of evidence, be that a video, a photograph or a document, the more anomalies seem to appear.

If you see an image circulating online that seems noteworthy or newsworthy, use Google’s reverse image search to trace its origins — if it has not been reported on by major news outlets, consider the possibility that it is fake. Tools like Hugging Face’s AI detector and “AI or Not” are very helpful. You can drag and drop an image, and the tool will do the work of determining whether it is real or fake.

UKNOW: Regarding the upcoming elections, what would you recommend to voters regarding disinformation that might come from social media?

Eadon: While election misinformation is not my area of expertise, I advocate for taking a step back when something you read online spikes fear or anger, especially if it is directed towards a specific community, group or culture. Look for other sources to confirm or disconfirm the information you have encountered — consulting fact-checking websites like PolitiFact can be very helpful. Challenge yourself to consider other perspectives and consider where we want to go as a society, as well as how a given candidate’s policies affect you and your loved ones.

UKNOW: What would you recommend as a reliable process for fact-checking?

Eadon: Check multiple sources — fact-checking websites like PolitiFact and Snopes are especially good for this, but in general make sure to check multiple websites. If you’re working with academic or scientific information, searching reputable research databases is a good idea as well. There are plenty of predatory journals out there that look legitimate but will publish almost anything. Librarians (at academic or public libraries) are also great resources for verifying information. A core activity of their profession is being able to search and find trustworthy information.
