Avoiding the Turing Trap in an age of GenAI for increased human agency and freedom

Author
Affiliation

Frederik J van Deventer

HAN University of Applied Sciences

Keywords

human agency, agentic AI, CAIL, turing trap, AI Literacy

Introduction

Technological advances that have heralded and driven societal change over the last 12,000 years have benefited ordinary people only when “landowning and religious elites were not dominant enough to impose their vision and extract all the surplus from the technologies” [17].

We are again on the precipice of a major transformation of our society, with the wide adoption of Artificial Intelligence across all kinds of sectors and embedded in all kinds of software. The public has never adopted a technological platform as fast as it did ChatGPT [25], heralding the Generative Artificial Intelligence (genAI) boom, with text-based models like ChatGPT, Claude and Gemini and image-based products such as DALL-E and Midjourney becoming available to the public.

Only a few years later, we are slowly moving towards agentic AI, in which the Large Language Models (LLMs) that make the likes of Claude and ChatGPT possible are used to create (semi-)autonomous systems that “reason” and feed their output back into themselves to perform the tasks or solutions that they propose. This marks a paradigm shift in the use of the web, where humans are no longer the only actors shaping its future [34]. Because the field is so new, rules, regulations and benchmarks are not yet in place, and it is “hard to distinguish genuine advances from hype” [19].

GenAI, a specific form of Artificial Intelligence that harnesses the power of Large Language Models to generate new output from vast and widely generalized data sources, is rapidly transforming our lives: reshaping how we work [18] and how we learn [28], changing criminal behaviour [9], and even influencing democratization [5].

With it comes the sense that we are closer to the holy grail of AI: Artificial General Intelligence (AGI), or “human-level intelligence”, a “god-term” and even “devil-term”. Believers herald a utopia in which AGI is able to solve everything, while doomsayers expect it to become a catastrophe [14].

This kind of language and these claims echo the technology industry’s habit of (over-)promising new techniques or technologies, packaging them as a panacea for all kinds of problems or as general-purpose tools, while downplaying the wide variety of negative consequences these technologies also bring: the profound physical impact on our environment [4], security issues such as training-data extraction [3] and jailbreak attempts [22, 27], the nudging of political preferences [5], the reduced quality of democratic decision-making, and the erosion of trust in legitimate institutions [24].

Myth and technology have a long history together. Collectively we adhere to the idea that with technology and change we also converge towards a better reality, towards progress; but this belief is itself a religious act [2], with further religious overtones in what [33] calls the “Cult of Innovation”. We have seen similar rational and “more-than-rational” truths with other technologies such as blockchain [11].

Most of the AI literacy literature agrees that AI literacy education “empowers individuals to achieve humanistic outcomes” [29]. Understanding and testing the limits of how AI works helps reframe AI as fallible but helpful, as opposed to intelligent [7], thereby breaking through the mythical beliefs and anthropomorphisation that surround and cloud attitudes towards AI [12].

Technology is not merely a neutral phenomenon [15, 26] with inherently positive outcomes; there is a great need for solutions that benefit us all.

Gap

We do not know how AI will reshape the future of human development, but there are informed guesses we can make and historical lessons we can take into account. In all the major jumps in technological advancement that brought about societal change (the steam engine, electricity, communications, digital communication, and now the broadly available statistical models that “converse” with us through chat interfaces like ChatGPT), regulation and policy fueled by ideology, benefiting either the few or the many, have driven the direction that advancement took [17].

Much of the research regarding AI has focused on improving models [8, 35], effects on policy [20, 21], impact on education [6, 13, 31] and on human interactions [16], and how it can improve self-advocacy [30], but not on how it affects agency and freedom, both of which are indicators of improving human development [32].

Hook

“Technological development has invested the powers that be with not only more efficient, better and more deadly instruments of coercion but with the instruments of persuasion decidedly more efficient than those hitherto used by the political bosses…” - [23]

We need to find a way out of what [1] calls the “Turing Trap”, in which increasing technological and economic power creates a concentration of political power, and to find solutions, patterns and systems, beyond policy, through which the benefits of transformational technology like agentic AI become available to everyone.

One of the ways that has been suggested to make AI benefit us all is to focus on AI Literacy [10].

Adolescents aged 18-25 in higher education, such as students, are often already interacting with tools like ChatGPT [6], so this study primarily focuses on that age group.

Research Questions

How can adolescents achieve increased human agency and freedom in a world with agentic AI?

Sub-questions

  1. How are students interacting with agentic AI or GenAI?
  2. What kind of beliefs do adolescents have about AI and its capabilities in relation to their own capabilities?
  3. In what ways is AI literacy affecting these beliefs?
  4. How is AI affecting the execution of day-to-day tasks (e.g. study, work, household chores)?

Methodology

References

1.
Brynjolfsson, E.: The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence. Daedalus. 151, 2, 272–287 (2022). https://doi.org/10.1162/daed_a_01915.
2.
Burdett, M.S.: The religion of technology: Transhumanism and the myth of progress. Religion and Transhumanism: The Unknown Future of Human Enhancement. 131 (2014).
3.
Carlini, N. et al.: Extracting Training Data from Large Language Models, https://arxiv.org/abs/2012.07805, (2021). https://doi.org/10.48550/arXiv.2012.07805.
4.
Crawford, K.: The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press (2021).
5.
Cupać, J. et al.: Democratization in the age of artificial intelligence: Introduction to the special issue. Democratization. 31, 5, 899–921 (2024). https://doi.org/10.1080/13510347.2024.2338852.
6.
Dai, Y.: Why students use or not use generative AI: Student conceptions, concerns, and implications for engineering education. Digital Engineering. 4, 100019 (2025). https://doi.org/10.1016/j.dte.2024.100019.
7.
Druga, S., Ko, A.J.: How do children’s perceptions of machine intelligence change when training and coding smart programs? In: Interaction Design and Children. pp. 49–61 ACM, Athens Greece (2021). https://doi.org/10.1145/3459990.3460712.
8.
Du, W. et al.: Optimizing Temperature for Language Models with Multi-Sample Inference, https://arxiv.org/abs/2502.05234, (2025). https://doi.org/10.48550/arXiv.2502.05234.
9.
Ferrara, E.: GenAI against humanity: Nefarious applications of generative artificial intelligence and large language models. Journal of Computational Social Science. 7, 1, 549–569 (2024). https://doi.org/10.1007/s42001-024-00250-1.
10.
Firth-Butterfield, K. et al.: Without universal AI literacy, AI will fail us, (2022).
11.
Gloerich, I.: Reimagining the Truth Machine: Blockchain Imaginaries between the Rational and the More-than-Rational. Utrecht University (2025). https://doi.org/10.33540/2726.
12.
Guest, O. et al.: Against the Uncritical Adoption of “AI” Technologies in Academia, (2025). https://doi.org/10.5281/ZENODO.17065099.
13.
Han, X. et al.: The impact of GenAI on learning outcomes: A systematic review and meta-analysis of experimental studies. Educational Research Review. 48, 100714 (2025). https://doi.org/10.1016/j.edurev.2025.100714.
14.
Heaven, W.D.: How AGI became the most consequential conspiracy theory of our time, (2025).
15.
Heyndels, S.: Technology and Neutrality. Philosophy & Technology. 36, 4, 75 (2023). https://doi.org/10.1007/s13347-023-00672-1.
16.
Hou, I. et al.: ’All Roads Lead to ChatGPT’: How Generative AI is Eroding Social Interactions and Student Learning Communities. In: Proceedings of the 30th ACM Conference on Innovation and Technology in Computer Science Education V. 1. pp. 79–85 ACM, Nijmegen Netherlands (2025). https://doi.org/10.1145/3724363.3729024.
17.
Johnson, S., Acemoglu, D.: Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Hachette UK (2023).
18.
Joshi, S.: The transformative role of agentic GenAI in shaping workforce development and education in the US. Available at SSRN 5133376. (2025).
19.
Kapoor, S. et al.: AI Agents That Matter, (2024). https://doi.org/10.48550/ARXIV.2407.01502.
20.
Khanal, S. et al.: Why and how is the power of Big Tech increasing in the policy process? The case of generative AI. Policy and Society. 44, 1, 52–69 (2025). https://doi.org/10.1093/polsoc/puae012.
21.
Leslie, D., Perini, A.M.: Future Shock: Generative AI and the International AI Policy and Governance Crisis. Harvard Data Science Review. Special Issue 5, (2024). https://doi.org/10.1162/99608f92.88b4cc98.
22.
Li, H. et al.: Multi-step Jailbreaking Privacy Attacks on ChatGPT, https://arxiv.org/abs/2304.05197, (2023). https://doi.org/10.48550/arXiv.2304.05197.
23.
Mathur, A.B.: Technology and Freedom.
24.
Miklian, J., Hoelscher, K.: A new digital divide? Coder worldviews, the “Slop economy,” and democracy in the age of AI.
25.
Milmo, D.: ChatGPT reaches 100 million users two months after launch. The Guardian. (2023).
26.
Morrow, D.R.: When Technologies Makes Good People Do Bad Things: Another Argument Against the Value-Neutrality of Technologies. Science and Engineering Ethics. 20, 2, 329–343 (2014). https://doi.org/10.1007/s11948-013-9464-1.
27.
Perez, F., Ribeiro, I.: Ignore Previous Prompt: Attack Techniques For Language Models, https://arxiv.org/abs/2211.09527, (2022). https://doi.org/10.48550/arXiv.2211.09527.
28.
Perifanou, M., Economides, A.A.: Collaborative Uses of GenAI Tools in Project-Based Learning. Education Sciences. 15, 3, 354 (2025). https://doi.org/10.3390/educsci15030354.
29.
Pinski, M., Benlian, A.: AI literacy for users – A comprehensive review and future research directions of learning methods, components, and effects. Computers in Human Behavior: Artificial Humans. 2, 1, 100062 (2024). https://doi.org/10.1016/j.chbah.2024.100062.
30.
Register, Y., Ko, A.J.: Learning Machine Learning with Personal Data Helps Stakeholders Ground Advocacy Arguments in Model Mechanics. In: Proceedings of the 2020 ACM Conference on International Computing Education Research. pp. 67–78 ACM, Virtual Event New Zealand (2020). https://doi.org/10.1145/3372782.3406252.
31.
Udeh, C.G.: The role of generative AI in personalized learning for higher education. World Journal of Advanced Engineering Technology and Sciences. 14, 2, 205–207 (2025). https://doi.org/10.30574/wjaets.2025.14.2.0077.
32.
UNDP: Human development report 2025. UNDP (United Nations Development Programme). (2025).
33.
Winner, L.: The Cult of Innovation: Its Myths and Rituals. In: Subrahmanian, E. et al. (eds.) Engineering a Better Future: Interplay between Engineering, Social Sciences, and Innovation. pp. 61–73 Springer International Publishing, Cham (2018). https://doi.org/10.1007/978-3-319-91134-2_8.
34.
Yang, Y. et al.: Agentic Web: Weaving the Next Web with AI Agents, https://arxiv.org/abs/2507.21206, (2025). https://doi.org/10.48550/arXiv.2507.21206.
35.
Yao, J. et al.: CAReDiO: Enhancing Cultural Alignment of LLM via Representativeness and Distinctiveness Guided Data Optimization. (2025).