Introduction
Technological advances that have heralded and driven societal change over the last 12,000 years have only benefited ordinary people when “landowning and religious elites were not dominant enough to impose their vision and extract all the surplus from the technologies” [17].
We are again on the precipice of a major transformation of our society, with Artificial Intelligence being widely adopted across all kinds of sectors and embedded in all kinds of software. The public has never adopted a technological platform as quickly as it did ChatGPT [25]. ChatGPT heralded the Generative Artificial Intelligence (genAI) boom, with text-based models such as ChatGPT, Claude and Gemini and image-based products such as DALL-E and Midjourney becoming available to the public.
Only a few years later, we are slowly moving towards Agentic AI, in which the Large Language Models (LLMs) that make the likes of Claude and ChatGPT possible are used to create (semi-)autonomous systems that “reason” and feed their output back into themselves to perform the tasks or solutions they propose. This marks a paradigm shift in the use of the web, where no longer only humans shape its future [34]. Because the field is so new, rules, regulations and benchmarks are not yet in place, and it is “hard to distinguish genuine advances from hype” [19].
GenAI, a specific form of Artificial Intelligence that harnesses the power of Large Language Models to generate new output based on vast and widely generalized data sources, is rapidly transforming our lives: reshaping how we work [18] and how we learn [28], changing criminal behaviour [9], and even influencing democratization [5].
With it comes the sense that we are closer to the holy grail of AI: Artificial General Intelligence (AGI), or “human-level intelligence”. AGI has become both a “god-term” and a “devil-term”: believers herald a utopia in which AGI solves everything, while doomsayers expect it to become a catastrophe [14].
This kind of language echoes the technology industry’s habit of (over-)promising new techniques or technologies, packaging them as a panacea for all kinds of problems or as General Purpose Tools, while downplaying the wide variety of negative consequences these technologies also bring: for example, their profound physical impact on our environment [4], the security issues of models vulnerable to jailbreak attempts [3], the nudging of political preferences [5], and the degradation of democratic decision-making and erosion of trust in legitimate institutions [24].
Myth and technology have a long history together. Collectively we adhere to the idea that technology and change also bring us closer to a better reality, to progress, but believing this is itself a religious act [2], with further religious overtones in what [33] calls the “Cult of Innovation”. We have seen similar rational and “more-than-rational” truths with other technologies, such as blockchain [11].
Most of the AI Literacy literature agrees that AI Literacy education “empowers individuals to achieve humanistic outcomes” [29]. Understanding and testing the limits of how AI works helps reframe AI as fallible but helpful rather than intelligent [7], thereby breaking through the mythical beliefs and anthropomorphisation that surround and cloud attitudes towards AI [12].
Technology is not merely a neutral phenomenon [15, 26] with inherently positive outcomes; there is a great need to find solutions that benefit us all.
Gap
We do not know how AI will reshape the future of human development, but there are some guesses we can make and historical lessons we can take into account. In every major jump in technological advancement that brought about societal change — the steam engine, electricity, mass communication, digital communication, and now the broadly available statistical models that “converse” with us through chat interfaces like ChatGPT — regulation and policy fueled by ideology (benefiting either the few or the many) have driven the direction this advancement took us [17].
Much of the research regarding AI has focused on improving models [8, 35], effects on policy [20, 21], impact on education [6, 13, 31] and on human interactions [16], and how it can improve self-advocacy [30], but not on how it affects agency and freedom, both of which are indicative of improving human development [32].
Hook
“Technological development has invested the powers that be with not only more efficient, better and more deadly instruments of coercion but with the instruments of persuasion decidedly more efficient than those hitherto used by the political bosses…” - [23]
We need to find a way out of what [1] calls the “Turing Trap”, in which increasing technological and economic power creates a concentration of political power, and to find solutions, patterns and systems beyond policy through which the benefits of transformational technology like agentic AI become available to everyone.
One way that has been suggested to make AI benefit us all is to focus on AI Literacy [10].
Adolescents aged 18-25, such as students in higher-education institutions, are often already interacting with tools like ChatGPT [6], so this study primarily focuses on that age group.
Research Questions
How can adolescents achieve increased human agency and freedom in a world with agentic AI?
Sub-questions
- How are students interacting with agentic AI or genAI?
- What kind of beliefs do adolescents have about AI and its capabilities in relation to their own capabilities?
- In what ways are AI Literacies affecting these beliefs?
- How is AI affecting the execution of day-to-day tasks (e.g. study, work, household chores)?