How Do People Perceive AI-Generated Content? Surprising Results from a New Study
In the experiment, participants were shown answers to popular questions on topics ranging from politics to programming. The questions were real, sourced from platforms such as Stack Overflow, and the answers were either written by humans or generated by large language models (LLMs) such as ChatGPT and Claude. Some participants were told who authored each answer; others saw the same answers with no attribution. Participant feedback was collected via the Prolific platform.
The key finding? When people didn’t know who wrote the text, they were more likely to prefer the AI-generated answers. But once they were told that the content was AI-generated, preferences shifted significantly toward human-written responses. This points to a strong bias against machine-generated content, even when its quality equals or exceeds that of human-written text.
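To make that shift concrete, here is a minimal sketch of how a two-condition preference gap like this could be quantified. The study does not describe its statistical method, so the choice of a two-proportion z-test and every count below are illustrative assumptions, not the authors’ analysis.

```python
# Hypothetical sketch only: all counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

# Participants preferring the AI-written answer in each condition.
prefer_ai = [132, 81]    # [authorship hidden, AI authorship disclosed]
n_total = [200, 200]     # participants per condition

# Two-proportion z-test: does disclosing authorship change the preference rate?
z_stat, p_value = proportions_ztest(prefer_ai, n_total)
print(f"blinded: {prefer_ai[0] / n_total[0]:.0%} prefer AI, "
      f"disclosed: {prefer_ai[1] / n_total[1]:.0%} prefer AI, "
      f"z = {z_stat:.2f}, p = {p_value:.4f}")
```

A small p-value in a setup like this would suggest that the disclosure itself, rather than any difference in answer quality, drove the change in preferences.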
The researchers note that this bias is especially pronounced among people with technical backgrounds, particularly software developers. They also observed differences by gender and age; for example, women were more likely than men to prefer human-written responses.
The study raises important questions about transparency, trust, and how society engages with machine intelligence. While LLMs are increasingly capable—sometimes even outperforming human writers—deep-rooted skepticism about non-human authorship remains a significant social hurdle.