Putting human agency at the forefront in agentic AI for adolescents

Author: Frederik J van Deventer

Affiliation: HAN University of Applied Sciences

Keywords

human agency, agentic AI, CAIL, Turing Trap, AI literacy, belief

Introduction

Young people around the globe are growing up in a different digital reality than their parents did, one in which Artificial Intelligence (AI) plays a major part. After the “digital natives” of the early 2000s and the debates over evidence that label entailed (Bennett, Maton, and Kervin 2008), one could argue that we are moving towards a generation of “AI natives” (Ponce Rojo et al. 2025): a new group of adolescents who spend almost 7 hours and 30 minutes online per day (Kemp 2025), on platforms where AI is embedded, e.g. in search algorithms, AI answers, or chat functions often signified with “AI” or ✨.

Over the last decade, the question of whether AI itself exhibits agency has been debated (Legaspi, He, and Toyoizumi 2019; Swanepoel 2021). The distinction between human and technological agency becomes even more blurred with the emerging paradigm of Agentic AI, which differs from previous forms of AI such as large language models (LLMs) in that it can operate (semi-)autonomously and pursue complex goals with little human intervention (Acharya, Kuppan, and Divya 2025).

This calls into question the nature of our human future and of our human agency (Anderson and Rainie 2023), as systems now also seem to (1) act with (2) intention and (3) a certain kind of reasoning, three aspects often associated with agency (Anscombe 1956; Davidson 1978). In Giddens’s (1984) structuration-based definition, human agency, placed within society, is the capability to “make a difference”: to have, and be able to exercise, power over a situation in a certain context or structure.

If Human-Like AI (HLAI) or autonomous Artificial General Intelligence really is in the near future, as some believe (Voss and Jovanovic 2023; Qureshi et al. 2025), this could have a profound impact on how our economy is structured and on which work is performed by humans and which solely by machines. The already substantial concentration of economic and political power among technologists is expected to only increase if we focus on replacing human labour by automating tasks instead of augmenting human labour with AI, leading to a situation where those “without power have no way to improve their outcomes”, what Brynjolfsson (2022) calls the Turing Trap.

A situation where power concentrates among the lucky few who are already economically and politically powerful, and who influence how HLAI is deployed, is very unlikely to yield greater autonomy, freedom and agency for the majority with marginal economic and political power.

How then do we avoid the Turing Trap for our young people, who form half of the world’s population (United Nations 2024), and increase their capability to “make a difference”?

Gap

In the course of history, human development has been argued to increase where agency and freedom increase (Prados de la Escosura 2022; Sen 1999). Agency is, however, a bloated term that can mean a number of things in different domains: in psychology it focuses on individual human agency (Bandura 2001), in sociology on the structures within which an actor operates (Giddens 1984). It is closely related to, and overlaps with, terms such as autonomy and freedom. To be clear, in this context “agency” means that an actor is free to act true to that same actor’s intentions.

This in itself is impossible to measure, being dependent on so many factors. But attempts have been made to measure how people perceive their sense of agency (Tapal, Oren, and Eitam 2017) or how Trust in Automation (TiA) influences their autonomy (Kohn et al. 2021). Other work proposes “measures of agency” (Grünbaum and Christensen 2020), describes how concepts like autonomy and ability contribute to agency, or measures proxies for agency (Alkire 2008).

Agentic AI, and AI goals in general, largely seem to focus on increasing the abilities of models or on replacing tedious tasks performed by humans. While research exists on future projections of human agency in conjunction with AI in decision-making (Pew Research Center, Anderson, and Rainie 2023) and in education (Mouta, Pinto-Llorente, and Torrecilla-Sánchez 2025), it does not deal with how to increase human agency now, nor with the effect of Agentic AI on Sense of Agency (SoA).

How Agentic AI, or AI in general, influences agency in adolescents seems not to have been studied, apart from its influence on critical thinking (Suriano et al. 2025) and the question of why and how adolescents use it for studying (Dai 2025; Silvennoinen, Aksovaara, and Alanko-Turunen 2025; Suonpää, Heikkilä, and Dimkar 2024).

While the Human–AI Interaction (HAII) framework suggested by Sundar (2020) incorporates agency, the different angles it proposes as interesting have not yet been investigated.

One area of study related to increasing human agency around AI is AI Literacies. Learning about AI systems explains how they perform; this demystifies them and increases people’s insight into these systems (Pinski and Benlian 2024), which helps to reduce anthropomorphism (Druga and Ko 2021). Anthropomorphism is one of the things that can lead to false perceptions and misunderstanding of AI abilities (Barrow 2024). AI Literacies among adolescents have not been studied in relation to their human agency.

In goal attainment and AI-assisted coaching there is a real opportunity for an AI to participate in the process, which is suggested to increase agency (Plotkina and Sri Ramalu 2024). A literature review collecting these kinds of studies that influence agency is, however, hard to find.

Hook

If we want to create circumstances in which humans can thrive with technology, we need to be able to accept a pluralistic view of technology: technology is never just neutral, nor is it inherently good or bad (Morrow 2014; Heyndels 2023). Regulation and policy fueled by ideology (whether benefiting the few or the many) have been the driver of the direction this advancement takes us (Johnson and Acemoglu 2023, 57).

(Feenberg 10 paradoxes of technology, equal and opposed reaction, of Adorno Instrumental Rationality)

It is therefore imperative to understand how and why agency is being threatened, and how agency can be increased in the context of Agentic AI. That firstly means understanding how we can assess the ways agency is being influenced; I propose using the Sense of Agency Scale (Tapal, Oren, and Eitam 2017), translated to the context of Agentic AI. Secondly, it means understanding what kind of influence AI Literacy can have on understanding Agentic AI.

Research Questions

How is technological ideology, which manifests itself as Agentic AI, impacting capabilities for adolescent human agency and how can its negative impacts be mitigated?

Sub-questions

  1. What influence is Agentic AI or AI in general having on proxies of agency or related concepts?
  2. How are adolescents interacting with agentic AI and how is this affecting the execution of day-to-day tasks (e.g. study, work, household chores)?
  3. How do adolescents rate their sense of agency on the Scale of Sense of Agency (Tapal, Oren, and Eitam 2017)?
  4. In what ways can understanding of these phenomena enhance their capabilities?

Methodology

In the following section I elaborate on how I will conduct the research described by the research questions above.

| Question | Method |
|----------|--------|
| RQ1: What influence is Agentic AI or AI in general having on proxies of agency or related concepts? | Systematic Literature Review |
| RQ2: How are adolescents interacting with agentic AI and how is this affecting the execution of day-to-day tasks (e.g. study, work, household chores)? | Mixed-method |
| RQ3: How do adolescents rate their sense of agency on the Scale of Sense of Agency (Tapal, Oren, and Eitam 2017)? | Survey |
| RQ4: In what ways can understanding of these phenomena enhance their capabilities? | Intervention |

RQ1: Systematic Literature Review

To find studies related to agency, a systematic literature review will be conducted that searches for studies relating to agency, such as studies focusing on empowerment (kongDevelopingValidatingScale2025?). Following the systems of Grünbaum and Christensen (2020) and Alkire (2008), studies performed on proxies of agency, or related concepts, can be categorized and labelled in relation to AI, or more specifically to Agentic AI where such studies are available.
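The search strategy for such a review typically combines one block of agency-related terms with one block of AI-related terms. The sketch below illustrates how a boolean query string could be composed programmatically; the term lists are placeholders for illustration, not the final search protocol.

```python
# Illustrative sketch: composing a boolean database query for the
# systematic literature review. The term lists below are placeholder
# examples, not the actual search protocol of this study.
agency_terms = ["agency", "empowerment", "autonomy", "sense of agency"]
ai_terms = ["artificial intelligence", "agentic AI", "AI agent"]

def build_query(group_a, group_b):
    """Join each term group with OR, then combine the two groups with AND."""
    block_a = " OR ".join(f'"{t}"' for t in group_a)
    block_b = " OR ".join(f'"{t}"' for t in group_b)
    return f"({block_a}) AND ({block_b})"

print(build_query(agency_terms, ai_terms))
```

Keeping the query construction in code makes the search reproducible and easy to rerun per database, which PRISMA-style reviews generally require.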

RQ2: Mixed-method

Thematic analysis, as described by Naeem et al. (2023) and Braun and Clarke (2021), will be used to find common themes and topics in personal qualitative interviews with adolescents. Thematic analysis is a more subjective approach to data in which language is central. It requires a coding process that uses semantic and latent codes to categorize statements and to find themes and subthemes of meaning in texts.

After the initial analysis I will construct a survey that studies the main themes quantitatively with a larger group of adolescents.
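The coding step of thematic analysis, tallying coded excerpts into candidate themes, can be sketched as follows. The codes, excerpts, and code-to-theme mapping are hypothetical placeholders standing in for the real codebook that will emerge from the interviews.

```python
# Illustrative sketch: aggregating semantic/latent codes assigned to
# interview excerpts into candidate themes. All codes, excerpts, and the
# code-to-theme mapping are hypothetical placeholders for a real codebook.
from collections import Counter

coded_excerpts = [
    ("I let the AI plan my homework", ["delegation"]),
    ("I double-check what it tells me", ["verification"]),
    ("It decides faster than I do", ["delegation", "loss of control"]),
]

code_to_theme = {
    "delegation": "Outsourcing decisions",
    "verification": "Critical engagement",
    "loss of control": "Diminished sense of agency",
}

# Count how often each candidate theme is supported by a coded excerpt.
theme_counts = Counter(
    code_to_theme[code]
    for _, codes in coded_excerpts
    for code in codes
)
print(theme_counts.most_common())
```

In practice this tallying would be done in qualitative analysis software, but the sketch shows the underlying logic of moving from codes to themes.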

RQ3: Survey

The Sense of Agency Scale (Tapal, Oren, and Eitam 2017) will be translated to the context of Agentic AI and administered to adolescents as a survey.
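Scoring a Likert-type instrument such as the Sense of Agency Scale amounts to averaging item responses per subscale, reverse-coding where the scoring key requires it. The sketch below illustrates this; the item numbers, subscale assignments, and responses are hypothetical placeholders, and the actual scoring key must be taken from Tapal, Oren, and Eitam (2017).

```python
# Illustrative sketch: scoring Likert responses for a Sense-of-Agency-style
# survey. Item ids, subscale assignments, and responses are hypothetical
# placeholders; the real scoring key comes from the published scale.
def score_subscale(responses, items, reverse=(), scale_max=7):
    """Mean score over `items`, reverse-coding the items listed in `reverse`."""
    total = 0
    for item in items:
        value = responses[item]
        if item in reverse:
            value = scale_max + 1 - value  # e.g. 7-point scale: 2 becomes 6
        total += value
    return total / len(items)

# Hypothetical responses keyed by item id
# (1 = strongly disagree ... 7 = strongly agree).
responses = {1: 6, 2: 2, 3: 5, 4: 7}
positive = score_subscale(responses, items=[1, 3, 4])
negative = score_subscale(responses, items=[2])
print(positive, negative)
```

Computing subscale means in code keeps the scoring of the translated instrument transparent and auditable alongside the survey data.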

RQ4: Intervention

This is where an intervention like AI Literacies takes center stage: to see whether it helps adolescents understand Agentic AI better, and whether a better way to deal and cope with these technologies helps them use these tools more appropriately.

Theoretical Framework

Habermas (1971) describes the rationality of technology as ideological. His analysis of technological progress as inevitable and oppressive stands in direct opposition to the techno-optimistic hegemony of Silicon Valley, which believes in innovation at all costs with a determinist and positivist view of its results, what Winner (2018) calls the “Cult of Innovation”. The result is an almost religious approach to, among other things, AI and AGI (Epstein 2024).

Society’s acceptance of this belief in progress by technological means influences and steers decision-making. For Feenberg (2009), social and technical factors override technological determinism: we do not have to blindly follow technology where it leads, and there is no inevitability to the course technology takes society.

graph TD
    classDef Amber fill:#FFDEEF;
    soa(Sense of Agency among Youth)
    soc(Societal Beliefs) --> AGENCY
    ideology(Technocratic Ideologies) -. drive .-> tec
    tec(Technological Innovations)
    pers(Personal Beliefs about AI) -. shape .-> AGENCY
    tec -. influence .-> AGENCY

    subgraph AGENCY["Sense of Agency"]
    direction TB
    Gid(Giddens: Social Structure) --> soa
    Ban(Bandura: Individual) --> soa
    end

    class AGENCY Amber;


References

Acharya, Deepak Bhaskar, Karthigeyan Kuppan, and B. Divya. 2025. “Agentic AI: Autonomous Intelligence for Complex Goals—a Comprehensive Survey.” IEEE Access : Practical Innovations, Open Solutions 13: 18912–36. https://doi.org/10.1109/ACCESS.2025.3532853.
Alkire, Sabina. 2008. “Concepts and Measures of Agency.”
Anderson, Janna, and Lee Rainie. 2023. “The Future of Human Agency.” Pew Research Center.
Anscombe, Gertrude Elizabeth Margaret. 1956. “Intention.” In Proceedings of the Aristotelian Society, 57:321–32. JSTOR.
Bandura, Albert. 2001. “Social Cognitive Theory: An Agentic Perspective.” Annual Review of Psychology 52 (1): 1–26.
Barrow, Nicholas. 2024. “Anthropomorphism and AI Hype.” AI and Ethics 4 (3): 707–11. https://doi.org/10.1007/s43681-024-00454-1.
Bennett, Sue, Karl Maton, and Lisa Kervin. 2008. “The ‘Digital Natives’ Debate: A Critical Review of the Evidence.” British Journal of Educational Technology 39 (5): 775–86. https://doi.org/10.1111/j.1467-8535.2007.00793.x.
Braun, Virginia, and Victoria Clarke. 2021. Thematic Analysis: A Practical Guide to Understanding and Doing. Thousand Oaks: Sage.
Brynjolfsson, Erik. 2022. “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence.” Daedalus 151 (2): 272–87. https://doi.org/10.1162/daed_a_01915.
Dai, Yun. 2025. “Why Students Use or Not Use Generative AI: Student Conceptions, Concerns, and Implications for Engineering Education.” Digital Engineering 4 (March): 100019. https://doi.org/10.1016/j.dte.2024.100019.
Davidson, Donald. 1978. “Intending.” In Philosophy of History and Action: Papers Presented at the First Jerusalem Philosophical Encounter December 1974, edited by Yirmiahu Yovel, 41–60. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-009-9365-5_5.
Druga, Stefania, and Amy J Ko. 2021. “How Do Children’s Perceptions of Machine Intelligence Change When Training and Coding Smart Programs?” In Interaction Design and Children, 49–61. Athens Greece: ACM. https://doi.org/10.1145/3459990.3460712.
Epstein, Greg. 2024. “Silicon Valley’s Obsession With AI Looks a Lot Like Religion.” The MIT Press Reader.
Feenberg, Andrew. 2009. “Technology, Power, and Freedom.” Readings in the Philosophy of Technology 139.
Giddens, Anthony. 1984. The Constitution of Society: Outline of the Theory of Structuration. Univ of California Press.
Grünbaum, Thor, and Mark Schram Christensen. 2020. “Measures of Agency.” Neuroscience of Consciousness 2020 (1): niaa019. https://doi.org/10.1093/nc/niaa019.
Habermas, Jürgen. 1971. “Technology and Science as ‘Ideology’.” Knowledge Critical Concepts 4.
Heyndels, Sybren. 2023. “Technology and Neutrality.” Philosophy & Technology 36 (4): 75. https://doi.org/10.1007/s13347-023-00672-1.
Johnson, Simon, and Daron Acemoglu. 2023. Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Hachette UK.
Kemp, Simon. 2025. “Digital 2025: Global Overview Report.” DataReportal – Global Digital Insights. https://datareportal.com/reports/digital-2025-global-overview-report.
Kohn, Spencer C., Ewart J. de Visser, Eva Wiese, Yi-Ching Lee, and Tyler H. Shaw. 2021. “Measurement of Trust in Automation: A Narrative Review and Reference Guide.” Frontiers in Psychology 12 (October). https://doi.org/10.3389/fpsyg.2021.604977.
Legaspi, Roberto, Zhengqi He, and Taro Toyoizumi. 2019. “Synthetic Agency: Sense of Agency in Artificial Intelligence.” Current Opinion in Behavioral Sciences, Artificial Intelligence, 29 (October): 84–90. https://doi.org/10.1016/j.cobeha.2019.04.004.
Morrow, David R. 2014. “When Technologies Makes Good People Do Bad Things: Another Argument Against the Value-Neutrality of Technologies.” Science and Engineering Ethics 20 (2): 329–43. https://doi.org/10.1007/s11948-013-9464-1.
Mouta, Ana, Ana María Pinto-Llorente, and Eva María Torrecilla-Sánchez. 2025. “‘Where Is Agency Moving to?’: Exploring the Interplay Between AI Technologies in Education and Human Agency.” Digital Society 4 (2): 49. https://doi.org/10.1007/s44206-025-00203-9.
Naeem, Muhammad, Wilson Ozuem, Kerry Howell, and Silvia Ranfagni. 2023. “A Step-by-Step Process of Thematic Analysis to Develop a Conceptual Model in Qualitative Research.” International Journal of Qualitative Methods 22 (October): 16094069231205789. https://doi.org/10.1177/16094069231205789.
Pew Research Center, Janna Anderson, and Lee Rainie. 2023. “The Future of Human Agency,” February.
Pinski, Marc, and Alexander Benlian. 2024. “AI Literacy for Users – A Comprehensive Review and Future Research Directions of Learning Methods, Components, and Effects.” Computers in Human Behavior: Artificial Humans 2 (1): 100062. https://doi.org/10.1016/j.chbah.2024.100062.
Plotkina, Lidia, and Subramaniam Sri Ramalu. 2024. “Unearthing AI Coaching Chatbots Capabilities for Professional Coaching: A Systematic Literature Review.” Journal of Management Development 43 (6): 833–48. https://doi.org/10.1108/JMD-06-2024-0182.
Ponce Rojo, Antonio, Tomás Fontaines-Ruiz, Amelia Sánchez Bracho, and Liliana Cánquiz Rincón. 2025. “From Digital Natives to AI Natives: Emerging Competencies and Media and Information Literacy in Higher Education.” Education Sciences 15 (9): 1134. https://doi.org/10.3390/educsci15091134.
Prados de la Escosura, Leandro. 2022. Human Development and the Path to Freedom: 1870 to the Present. New Approaches to Economic and Social History. Cambridge: Cambridge University Press.
Qureshi, Rizwan, Ranjan Sapkota, Abbas Shah, Amgad Muneer, Anas Zafar, Ashmal Vayani, Maged Shoman, et al. 2025. “Thinking Beyond Tokens: From Brain-Inspired Intelligence to Cognitive Foundations for Artificial General Intelligence and Its Societal Impact.” arXiv. https://doi.org/10.48550/arXiv.2507.00951.
Sen, Amartya. 1999. Development as Freedom. Oxford University Press.
Silvennoinen, Minna, Satu Aksovaara, and Merja Alanko-Turunen. 2025. “Students’ Usage of GenAI in Universities of Applied Sciences: Experiences and Development Needs for Guidance and Support.” In 38th Bled eConference: Empowering Transformation: Shaping Digital Futures for All: Conference Proceedings, 467–82. University of Maribor Press. https://doi.org/10.18690/um.fov.4.2025.29.
Sundar, S. Shyam. 2020. “Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII).” Journal of Computer-Mediated Communication 25 (1): 74–88. https://doi.org/10.1093/jcmc/zmz026.
Suonpää, Maija, Jutta Heikkilä, and Ana Dimkar. 2024. “Students’ Perceptions of Generative AI Usage and Risks in a Finnish Higher Education Institution.” In 18th International Technology, Education and Development Conference, 3071–77. Valencia, Spain. https://doi.org/10.21125/inted.2024.0825.
Suriano, Rossella, Alessio Plebe, Alessandro Acciai, and Rosa Angela Fabio. 2025. “Student Interaction with ChatGPT Can Promote Complex Critical Thinking Skills.” Learning and Instruction 95 (February): 102011. https://doi.org/10.1016/j.learninstruc.2024.102011.
Swanepoel, Danielle. 2021. “Does Artificial Intelligence Have Agency?” In The Mind-Technology Problem, edited by Robert W. Clowes, Klaus Gärtner, and Inês Hipólito, 18:83–104. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-72644-7_4.
Tapal, Adam, Ela Oren, and Baruch Eitam. 2017. “The Sense of Agency Scale: A Measure of Consciously Perceived Control over One’s Mind, Body, and the Immediate Environment.” Frontiers in Psychology 8 (September): 1552. https://doi.org/10.3389/fpsyg.2017.01552.
United Nations. 2024. “World Population Prospects.” https://population.un.org/wpp/graphs?loc=900&type=Probabilistic%20Projections&category=Population&subcategory=Age%200-24.
Voss, Peter, and Mladjan Jovanovic. 2023. “Why We Don’t Have AGI Yet.” arXiv. https://doi.org/10.48550/arXiv.2308.03598.
Winner, Langdon. 2018. “The Cult of Innovation: Its Myths and Rituals.” In Engineering a Better Future: Interplay Between Engineering, Social Sciences, and Innovation, edited by Eswaran Subrahmanian, Toluwalogo Odumosu, and Jeffrey Y. Tsao, 61–73. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-91134-2_8.