graph TD
classDef Amber fill:#FFDEEF;
soa(Sense of Agency among Youth)
soc(Societal Beliefs) --> AGENCY
ideology(Technocratic Ideologies) -. drive .-> tec
tec(Technological Innovations)
pers(Personal Beliefs about AI) -. shape .-> AGENCY
tec -. influence .-> AGENCY
subgraph AGENCY["Sense of Agency"]
direction TB
Gid(Giddens: Social Structure) --> soa
Ban(Bandura: Individual) --> soa
end
class AGENCY Amber;
Introduction
Young people around the globe are growing up in a digital reality different from that of their parents, one in which Artificial Intelligence (AI) plays a major part. After the “digital natives” of the early 2000s and the debates over evidence that this label entailed (Bennett, Maton, and Kervin 2008), one could argue that we are moving towards a generation of “AI natives” (Ponce Rojo et al. 2025): a new group of adolescents that spends almost 7 hours and 30 minutes online per day (Kemp 2025), on platforms where AI is embedded, e.g. in search algorithms, AI answers, or chat functions often signified with “AI” or ✨.
Over the last decade the question whether AI itself exhibits agency has been debated (Legaspi, He, and Toyoizumi 2019; Swanepoel 2021). The distinction between human and technological agency becomes more blurred with the emerging paradigm of Agentic AI, which differs from previous forms of AI such as large language models (LLMs) in that it can operate (semi-)autonomously and pursue complex goals with little human intervention (Acharya, Kuppan, and Divya 2025).
This calls into question the nature of our human future and of our human agency (Anderson and Rainie 2023), as systems now also seem to (1) act with (2) intention and (3) a certain kind of reasoning, three aspects often associated with agency (Anscombe 1956; Davidson 1978). In Giddens’s (1984) structuration-based definition of human agency, situated within society, agency is the capability to “make a difference”, in other words to have, and be able to exercise, power over a situation in a certain context or structure (Giddens 1984).
If Human-like AI (HLAI) or autonomous Artificial General Intelligence really is near, as some believe (Voss and Jovanovic 2023; Qureshi et al. 2025), this could have a profound impact on the way our economy is structured and on which work is performed by humans and which solely by machines. The already great concentration of economic and political power among technologists is expected to increase further if we focus on replacing human labour by automating tasks instead of augmenting human labour with AI, leading to a situation where those “without power have no way to improve their outcomes”, which Brynjolfsson (2022) calls the Turing Trap.
A situation where power concentrates among the lucky few who are already economically and politically powerful, and who influence how HLAI is deployed, is very unlikely to yield greater autonomy, freedom, and agency for the majority with marginal economic and political power.
How then do we avoid the Turing Trap for our young people, who form half of the world’s population (United Nations 2024), and increase their capability to “make a difference”?
Gap
In the course of history, human development has been argued to increase where agency and freedom increase (Prados de la Escosura 2022; Sen 1999). Agency is, however, a bloated term that means different things in different domains: psychology focuses on individual human agency (Bandura 2001), while sociology focuses on the structures within which an actor operates (Giddens 1984). Agency is closely related to, and overlaps with, concepts such as autonomy and freedom. To be clear, in this context “agency” means for an actor to be free to act true to that same actor’s intentions.
This in itself is impossible to measure directly, depending as it does on so many factors. But attempts have been made to measure how people perceive their sense of agency (Tapal, Oren, and Eitam 2017) or how Trust in Automation (TiA) influences their autonomy (Kohn et al. 2021). Other work proposes models of “measures of agency” (Grünbaum and Christensen 2020), examines how concepts like autonomy and ability contribute to agency, or measures proxies for agency (Alkire 2008).
Agentic AI, and AI goals in general, largely seem to focus on increasing abilities for models or on replacing tedious tasks performed by humans. While research exists on future projections of human agency in conjunction with AI in decision-making (Pew Research Center, Anderson, and Rainie 2023) and in education (Mouta, Pinto-Llorente, and Torrecilla-Sánchez 2025), it does not deal with how to increase human agency for humans now, nor with the effect of Agentic AI on Sense of Agency (SoA).
How Agentic AI, or AI in general, influences agency in adolescents seems not to have been studied, apart from its influence on critical thinking (Suriano et al. 2025) and why and how adolescents use it for studying (Dai 2025; Silvennoinen, Aksovaara, and Alanko-Turunen 2025; Suonpää, Heikkilä, and Dimkar 2024).
While the Human-AI Interaction framework suggested by Sundar (2020) incorporates agency, it does not itself investigate the angles it proposes as interesting.
One area of study related to increasing human agency around AI is AI Literacies. Learning how AI systems work demystifies them and increases people’s insight into these systems (Pinski and Benlian 2024), which helps to reduce anthropomorphism (Druga and Ko 2021), one of the factors that can lead to false perceptions and misunderstanding of AI abilities (Barrow 2024). AI Literacies among adolescents have not been studied in relation to their human agency.
In goal attainment and AI-assisted coaching, there is a real opportunity for an AI to participate in the process, which has been suggested to increase agency (Plotkina and Sri Ramalu 2024). A literature review collecting such studies of influences on agency is, however, hard to find.
Hook
If we want to create circumstances in which humans can thrive with technology, we need to be able to accept a pluralistic view of technology: technology is never just neutral, nor is it inherently good or bad (Morrow 2014; Heyndels 2023). Regulation and policy fueled by ideology (whether benefiting the few or the many) have been the drivers of the direction this advancement takes us (Johnson and Acemoglu 2023, 57).
(Feenberg 10 paradoxes of technology, equal and opposed reaction, of Adorno Instrumental Rationality)
It is therefore imperative to understand how and why agency is being threatened and how agency can be increased in the context of Agentic AI. That firstly means understanding how we can assess the way agency is being influenced; I propose translating the Sense of Agency Scale (Tapal, Oren, and Eitam 2017) to the context of Agentic AI. Secondly, it means understanding what influence AI Literacy can have on understanding Agentic AI.
Research Questions
How is technological ideology, which manifests itself as Agentic AI, impacting capabilities for adolescent human agency and how can its negative impacts be mitigated?
Sub-questions
- What influence is Agentic AI or AI in general having on proxies of agency or related concepts?
- How are adolescents interacting with agentic AI and how is this affecting the execution of day-to-day tasks (e.g. study, work, household chores)?
- How do adolescents rate their sense of agency on the Scale of Sense of Agency (Tapal, Oren, and Eitam 2017)?
- In what ways can understanding of these phenomena enhance their capabilities?
Methodology
In the following section I elaborate on how I will conduct the research addressing the research questions above.
| RQ | Question | Method |
|---|---|---|
| RQ1 | What influence is Agentic AI or AI in general having on proxies of agency or related concepts? | Systematic Literature Review |
| RQ2 | How are adolescents interacting with agentic AI and how is this affecting the execution of day-to-day tasks (e.g. study, work, household chores)? | Mixed-method |
| RQ3 | How do adolescents rate their sense of agency on the Scale of Sense of Agency (Tapal, Oren, and Eitam 2017)? | Survey |
| RQ4 | In what ways can understanding of these phenomena enhance their capabilities? | Intervention |
RQ1: Systematic Literature Review
To find out which studies relate to agency, a systematic literature review will be conducted that searches for studies on agency-related constructs, such as studies that focus on empowerment (kongDevelopingValidatingScale2025?). Following the frameworks of Grünbaum and Christensen (2020) and Alkire (2008), studies on proxies of agency or related concepts can be categorized and labeled in relation to AI, or more specifically to Agentic AI where such studies are available.
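The screening step of such a review can be illustrated with a minimal sketch: deduplicating records retrieved from multiple databases and keeping those that match agency-related keywords. The record fields and keyword list here are illustrative assumptions, not the actual search protocol.

```python
# Minimal sketch of systematic-review screening: deduplicate by title,
# then keep records matching agency-related keywords.
# Records and keywords are hypothetical examples.
records = [
    {"title": "Empowerment scale validation", "abstract": "measuring empowerment"},
    {"title": "Empowerment scale validation", "abstract": "measuring empowerment"},
    {"title": "LLM benchmarks", "abstract": "model accuracy"},
]

KEYWORDS = {"agency", "empowerment", "autonomy"}  # proxy-of-agency terms

seen, screened = set(), []
for rec in records:
    key = rec["title"].lower()
    if key in seen:
        continue  # drop duplicates retrieved from multiple databases
    seen.add(key)
    text = (rec["title"] + " " + rec["abstract"]).lower()
    if any(kw in text for kw in KEYWORDS):
        screened.append(rec)

print(len(screened))  # → 1 unique, keyword-relevant record
```

In practice this filtering would be followed by manual title/abstract screening against the inclusion criteria.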
RQ2: Mixed-method
Thematic analysis, as described by Naeem et al. (2023) and Braun and Clarke (2021), will be used to find common themes and topics in personal qualitative interviews with adolescents. Thematic analysis is a more subjective approach to data, in which language is central. It requires a coding process that uses semantic and latent codes to categorize statements and to find themes and subthemes of meaning in texts.
After the initial analysis I will design a survey to study the main themes quantitatively with a larger group of adolescents.
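The mechanics of moving from coded interview segments to candidate themes can be sketched as a simple frequency tally; the codes and theme groupings below are purely illustrative placeholders, not actual study categories.

```python
from collections import Counter

# Hypothetical coded interview segments as (participant, code) pairs.
coded_segments = [
    ("P01", "delegates homework planning to AI"),
    ("P01", "distrusts AI answers"),
    ("P02", "delegates homework planning to AI"),
    ("P03", "feels in control when prompting"),
    ("P03", "delegates homework planning to AI"),
]

# Group semantic codes under candidate (sub)themes.
themes = {
    "delegation": {"delegates homework planning to AI"},
    "perceived control": {"feels in control when prompting", "distrusts AI answers"},
}

code_counts = Counter(code for _, code in coded_segments)
theme_counts = {
    theme: sum(code_counts[c] for c in codes) for theme, codes in themes.items()
}
print(theme_counts)  # → {'delegation': 3, 'perceived control': 2}
```

Such counts only support, and never replace, the interpretive work of reviewing latent meaning in the transcripts.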
RQ3: Survey
The Sense of Agency Scale (Tapal, Oren, and Eitam 2017), translated to the context of Agentic AI, will be administered as a survey among adolescents.
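Scoring such a Likert-type scale can be sketched as follows. The two-subscale split (positive vs. negative sense of agency) follows Tapal, Oren, and Eitam (2017), but the item numbering and the 7-point response range here are assumptions for illustration only.

```python
# Illustrative scoring sketch for a Likert survey with positive- and
# negative-agency subscales; item assignments are hypothetical.
POSITIVE_ITEMS = [1, 3, 5, 7]  # hypothetical positive-agency items
NEGATIVE_ITEMS = [2, 4, 6, 8]  # hypothetical negative-agency items

def subscale_means(responses: dict[int, int]) -> dict[str, float]:
    """Average the 1-7 ratings within each subscale."""
    pos = [responses[i] for i in POSITIVE_ITEMS]
    neg = [responses[i] for i in NEGATIVE_ITEMS]
    return {
        "positive_agency": sum(pos) / len(pos),
        "negative_agency": sum(neg) / len(neg),
    }

answers = {1: 6, 2: 2, 3: 5, 4: 3, 5: 7, 6: 1, 7: 6, 8: 2}
print(subscale_means(answers))  # → {'positive_agency': 6.0, 'negative_agency': 2.0}
```

In the actual study, scoring would follow the published scale and the validated translation, not this sketch.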
RQ4: Intervention
Here an intervention such as AI Literacies takes center stage, to test whether a better understanding of Agentic AI helps adolescents deal and cope with these technologies and use them more appropriately.
Theoretical Framework
Habermas (1971) describes the rationality of technology as ideological. This ideology, in which technological progress is inevitable and oppressive (Habermas 1971), stands in direct opposition to the techno-optimistic hegemony of Silicon Valley, which believes in innovation at all costs with a determinist and positivist view of its results, what Winner (2018) calls the “Cult of Innovation”. The result is an almost religious approach to AI and AGI, among other technologies (Epstein 2024).
Society’s acceptance of this belief in progress by technological means influences and steers decision-making. For Feenberg (2009), however, social factors and technical choices override technological determinism: we do not have to follow blindly where technology leads, and there is no inevitability to the course technology takes society on.