
Algorithm aversion revisited: The role of AI literacy and attitudes towards AI in shaping perceptions of AI-generated texts


The study examines whether people discount AI-generated educational texts even when their quality is comparable to human writing. In an online experiment with 222 participants, the researchers used ChatGPT to produce short informational passages similar to common search results. Participants were then given false information about the origin of each passage: they were told that some were written by humans, some by AI, and some through human–AI collaboration, before rating each text's credibility, usefulness, and quality of wording. A small but reliable preference for supposedly human-written texts emerged, yet evaluations were positive in every condition, with means between 4.3 and 4.6 on a six-point scale. AI literacy mattered little, while general attitudes toward AI largely explained differences in trust and perceived helpfulness. The findings suggest that how materials are framed, and how learners reflect on AI use, may shape acceptance more than technical quality alone.



Since the early 2000s, researchers have documented a phenomenon known as algorithm aversion: the human tendency to trust predictions generated by computer algorithms less than human predictions, regardless of their actual quality. With the introduction of large language models such as ChatGPT, this tendency has become relevant for the education sector as well. If algorithm aversion entails an anti-AI bias, whereby AI-generated educational content is perceived as less trustworthy and less helpful, this would directly affect the use of such content in educational contexts. The consequences are difficult to predict, but would most likely include lower acceptance of AI-generated learning materials, which in turn would limit the scalability of AI-supported pilot projects.


In a recently published study, we investigated this effect in an online experiment. Using ChatGPT, we generated short informational texts addressing typical search queries. Participants were (falsely) told that one third of the texts had been written by a human, one third by an AI, and one third through a hybrid collaboration between a human and an AI, and were then asked to rate each text with regard to credibility, usefulness, and quality of wording. In addition to these evaluations, participants assessed their baseline competencies related to AI, referred to as AI literacy, as well as their attitudes toward AI.


We found a weak algorithm aversion effect: content purportedly written by humans was rated significantly more favorably. However, this difference is likely to play a relatively minor role in educational practice, as both purportedly human-generated and purportedly AI-generated content received very positive evaluations overall. On a scale from 1 to 6, mean ratings across all conditions ranged between 4.32 and 4.60. One possible reason for this small difference is that we enabled a direct comparison, allowing participants to see for themselves that the texts were of comparable quality.


While participants’ AI literacy had only a minor influence on algorithm aversion, attitudes toward AI emerged as an important mediator. This is intuitively plausible: individuals who hold generally negative attitudes toward AI tend to trust AI-generated educational content less and to perceive it as less helpful.


Several relevant implications for educational policy and pedagogical decision-making regarding the use of AI in education can be derived from these findings. First, this and many other studies clearly show that learners generally evaluate AI-generated educational content quite positively. Even if students show a slight preference for human-generated content, this effect is not strong enough to warrant discouraging the use of generative AI in education. As noted above, our study employed a direct comparison between purportedly AI- and human-generated content. In studies using separate experimental groups, in which one group receives purportedly AI-generated content and another receives human-generated content, algorithm aversion tends to be more pronounced. For the use of AI in schools and universities, this may imply that learners should ideally receive teacher-generated material supplemented with AI-generated material. This approach allows learners to directly observe that the quality of AI-generated material is similarly high.


The final essential point concerns learners’ individual attitudes and competencies related to AI. Students should have opportunities to proactively engage with the advantages and disadvantages of AI use. Ideally, this engagement takes place in a reflective manner within the classroom. In this way, it is possible to avoid the promotion of either uncritical trust in or categorical rejection of AI, for example as a result of social media influences.


In summary, the success or failure of AI use in educational contexts depends not only on technical limitations, but also on the framing of AI-generated materials and on learners’ attitudes. While framing can be directly influenced, educational institutions can at least indirectly promote a critically reflective acceptance of AI-generated educational content through targeted efforts to build competencies.





© 2035 by Pragma Learning Institute.

 
