Research Stories
A research team led by Professor Joo-Wha Hong in the School of Convergence analyzed readers’ trust in, and perceptions of, political news articles written by artificial intelligence. The study found that while AI journalists were perceived as more objective and less biased than human journalists, readers nonetheless placed greater trust in articles written by humans. Evaluations of news credibility also depended on how much readers trusted the objectivity of machines, and the relative hostile media effect, in which trust declines as readers’ political orientation diverges from that of the news outlet, was observed for AI-written articles as well.
Prof. Joo-Wha Hong (School of Convergence)
Professor Joo-Wha Hong, together with Professor Herbert Chang of Dartmouth College and Professor David Tewksbury of the University of Illinois at Urbana-Champaign, published this study in Digital Journalism, a leading international journal in the field of communication.
The research experimentally examined how audiences perceive and evaluate both AI- and human-authored political news. Grounded in the theoretical frameworks of the machine heuristic and the hostile media effect, the study investigated how the author’s identity (human vs. AI) and readers’ political orientations influence perceptions of news credibility and journalist credibility.
The experiment was conducted with 442 adult participants in the United States through an online survey. Participants were randomly assigned to read articles on four politically sensitive issues (abortion law, gun control, minimum wage, and health-care reform), authored by either an AI or a human journalist and attributed to one of three news outlets representing distinct ideological leanings (liberal, neutral, or conservative). This design enabled the researchers to analyze interactions among author identity, reader ideology, and the reader’s political distance from the media outlet.
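To make the factorial structure concrete, the sketch below simulates random assignment to the 2 (author: human vs. AI) × 3 (outlet leaning) conditions and computes each reader’s political distance from the outlet. It is an illustrative reconstruction under stated assumptions: the variable names, the ideology scale, and the assign_participants helper are hypothetical and do not come from the published study materials.

```python
import itertools
import random

import pandas as pd

# Hypothetical reconstruction of the 2 (author) x 3 (outlet leaning) design
# across four politically sensitive issues; names and scales are illustrative.
AUTHORS = ["human", "AI"]
OUTLETS = {"liberal": -1, "neutral": 0, "conservative": 1}
ISSUES = ["abortion law", "gun control", "minimum wage", "health-care reform"]


def assign_participants(n: int = 442, seed: int = 0) -> pd.DataFrame:
    """Randomly assign participants to author x outlet conditions."""
    rng = random.Random(seed)
    conditions = list(itertools.product(AUTHORS, OUTLETS))
    rows = []
    for pid in range(n):
        author, outlet = rng.choice(conditions)
        ideology = rng.uniform(-1, 1)  # reader ideology: liberal (-1) to conservative (+1)
        rows.append({
            "participant": pid,
            "author": author,
            "outlet": outlet,
            "issue": rng.choice(ISSUES),
            "ideology": ideology,
            # political distance between the reader and the outlet's leaning
            "political_distance": abs(ideology - OUTLETS[outlet]),
        })
    return pd.DataFrame(rows)


if __name__ == "__main__":
    df = assign_participants()
    # Check that the author x outlet cells are roughly balanced.
    print(df.groupby(["author", "outlet"]).size())
```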
Results revealed that readers trusted human-written articles more and evaluated them more favorably, yet perceived AI journalists as less biased and more neutral. In other words, while readers regarded AI as a “more objective and emotionally detached journalist,” they still tended to view AI as biased when its political stance differed from their own, a sign of an ambivalent attitude toward machine-generated content. Furthermore, evaluations of AI journalists were moderated by the extent to which individuals accepted AI as a rational and objective actor. The relative hostile media effect also appeared for AI: as the political distance between readers and news outlets increased, trust and likability decreased, regardless of whether the article was written by a human or an AI.
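As a rough illustration of how moderation by the machine heuristic and the relative hostile media effect can be tested, the toy analysis below fits an ordinary least squares regression with interaction terms on simulated data (via statsmodels). The data, coefficients, and variable names are hypothetical; this sketches the general analytic approach, not the authors’ actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated (not real) data: does the effect of political distance on perceived
# credibility, and the effect of the machine heuristic, differ by author type?
rng = np.random.default_rng(1)
n = 442
df = pd.DataFrame({
    "author_ai": rng.integers(0, 2, n),          # 1 = AI-authored, 0 = human-authored
    "political_distance": rng.uniform(0, 2, n),  # |reader ideology - outlet leaning|
    "machine_heuristic": rng.uniform(1, 7, n),   # belief that machines are objective
})
# Toy outcome: credibility falls with political distance for both author types,
# and the machine heuristic matters only for AI-authored articles.
df["credibility"] = (
    5.0
    - 0.8 * df["political_distance"]
    + 0.3 * df["author_ai"] * (df["machine_heuristic"] - 4)
    + rng.normal(0, 0.5, n)
)

# Interaction terms test whether the distance and machine-heuristic effects
# depend on whether the article was attributed to an AI or a human journalist.
model = smf.ols(
    "credibility ~ political_distance * author_ai + machine_heuristic * author_ai",
    data=df,
).fit()
print(model.summary())
```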
This study highlights that artificial intelligence is not merely a technical substitute for human journalists but a new social actor capable of shaping public trust and political perceptions. It empirically demonstrates how audiences cognitively process AI-generated content and how their beliefs about AI’s capabilities influence evaluations of information credibility.
Looking ahead, the research team plans to extend this line of inquiry to examine public responses to AI’s broader social roles, such as counselor, educational partner, creative collaborator, and conversational companion. These efforts aim to provide crucial insights into the social legitimacy and trust-building mechanisms of AI as it becomes increasingly embedded in human social life.
※ Title: Can AI Become Walter Cronkite? Testing the Machine Heuristic, the Hostile Media Effect, and Political News Written by Artificial Intelligence
※ Journal: Digital Journalism
※ Link: https://doi.org/10.1080/21670811.2024.2323000
※ Research Portal (Pure): https://pure.skku.edu/en/persons/joo-wha-hong/