Exploring the Nuances of AI in Education | Routledge Open Research
The Changing Landscape of AI in Lifelong Learning
The current surge of interest in Artificial Intelligence (AI) has brought renewed attention to the importance of lifelong learning (LLL) in preparing the workforce and ensuring the ethical and responsible use of AI for society now and in the future (Schiff 2021). Such interest in lifelong learning, in which education gains prominence through assumptions about the latest networked technology and its implications for social change, has featured in policy and practice since the end of the last century (e.g. Gorard and Selwyn 1999; Selwyn 2012).
In many such policies, technology is seen both as the motivator for the need for LLL and an important way to enable learning to occur. Governments now provide LLL grants to individuals; workplaces pay for technical systems to help with training needs; and outside the workplace, individuals learn via an array of digital platforms that vary in their design for learning (e.g. YouTube, TikTok, Coursera) and the cost to the individual in terms of their data or finances (Eynon and Malmberg 2021).
However, while lifelong learning can be a positive force for personal growth and societal wellbeing, over the past few decades scholars have critiqued the narrowing of lifelong learning provision in the contexts and topics of learning, with a focus primarily on the needs of the economy, risking the possibilities for personal and democratic learning opportunities (Bélanger 2016; Biesta 2006; James 2020; Jarvis 2007). This continued narrowing of policy and practice is present in current debates around AI and lifelong learning. The trend has been further exacerbated by the significant role of the commercial sector in developing AI for LLL (Eynon and Young 2021), and by the ways that learning opportunities may be directly oriented around the short-term perspectives built into machine learning strategies for identifying skills gaps (e.g. Gonzalez Ehrlinger and Stephany 2023).
Alongside this narrowing of knowledge is an instrumentalist view of lifelong learning with AI that is commonplace in policy and public discourse. This view tends to focus on individual skill acquisition as the primary mode of how learning takes place, where technology is a neutral tool to deliver these skills in an efficient way (e.g. Schiff 2021). This situation mirrors longstanding research agendas in education with an instrumentalist focus on how technology serves to ‘enhance’ learning (Bayne 2015; Jandrić and Knox 2022); and a tendency to theorize learning but not technology (Oliver 2016). This view not only ignores the complex ways that learning and technology can be theorized, but also elides the role of social context in learning, including in situations where power differentials exist (Jarvis 2007).
In sum, a large proportion of discourse in policy and practice tends to talk about lifelong learning with AI as a panacea, where learning is inevitable, technology is rarely theorized and the relations between learning and technology are largely ignored. Overall, the discourse around AI’s role in society is increasingly mobilized by those who would profit from its spread (Suchman 2023). These are not new trends and are mirrored in a large proportion of the research about education and technology (Oliver 2016).
Theorizing AI and Lifelong Learning
To better understand the relationships between AI and lifelong learning, it is important to critically examine how academic communities conceptualize these two elements. Based on a thematic review of existing academic literature, we identified three groups of research that vary in their engagement with theories of learning and AI technology.
Group 1: Working AI
The first group of research, the largest in our review, views AI from the perspective of skill acquisition while describing learning from a primarily behaviorist or cognitivist perspective. This research implicitly, and at times explicitly, reinforces narratives of technological determinism that have long played a role in fostering the accelerated introduction of, and acclimation to, new technology (Robins and Webster 1985). Much of this work focuses on the deployment of intelligent tutoring systems (ITS), often carried out by researchers in engineering, computer science, business studies, or the learning sciences. In these instances, the ITS acts as a substitute for humans performing specific tasks, including the ‘routine’ aspects of teaching, with the stated effect of freeing the human teacher to focus on more challenging or creative endeavors. However, this claimed effect goes unsupported by empirical data.
The research in this group is also often justified by the need to respond to the technologization of society in light of the current ‘digital transformation’ (Johnson 2021) and the ‘future of work’ (Adami et al. 2021; Harborth and Kümpers 2022; Selby et al. 2021). A key focus is the role of AI in increasing efficiency, whether increasing the efficiency of teaching as compared to human instructors or increasing the efficiency of learning certain tasks (Crampes et al. 2000; Gronseth and Hutchins 2020; Johnson and Lester 2018; Kowald and Bruns 2021; Sabeima et al. 2022; Srinivasan et al. 2006). Theoretically, these approaches to learning also rely on behaviorist and cognitivist conceptualizations of learning, which have an impact on understandings of technology.
Group 2: Working with AI
The second group of research, the second largest in our review, focuses on sociocultural theories of learning, with a recognition of the need to consider social context and other actors in learning. This role-based consideration of AI recognizes the changing nature of digital technology in people’s lives. Several papers from this group make use of Wenger’s idea of Communities of Practice (CoP), including Dobson et al. (2001); Greer et al. (1998); Koren and Klamma (2018); and Poquet and Laat (2021). This represents an explicit departure from narratives of technology and technological artifacts as the cause of changes in behavior, towards an understanding that interactions with technology are the result of social practice.
There is also an increasing tendency to discursively construct AI as a colleague (or actor) within systems, described variously as a ‘personal learning aid,’ ‘personal assistant,’ or ‘learning collaborator,’ providing just-in-time feedback to users (Emmenegger et al. 2016; Greer et al. 1998; Poquet and Laat 2021). This conceptualization recurs throughout papers from the CSCW/CSCL research paradigms, where much of the AI is understood in terms of ‘agents’ (Holden et al. 2005).
However, while this group of research renders the context more complex, the focus is still on how individuals engage with those agents, and the effect of those agents on individuals. There is little attention to how these agents are introduced to the workplace. Additionally, there is a lingering and latent technological determinism, alongside a reliance on technical solutions, whether a ‘personal data vault’ or the ability to open the ‘black box’ of AI (Kay 2016), to address the marked complexity of deploying AI in workplaces.
Group 3: Reconfiguring AI
The final category of research, the smallest in our review, explicitly rejects what its authors perceive to be deterministic perspectives of technology in the workplace and beyond. The theoretical approaches include Actor-Network Theory (ANT) (Bozkurt et al. 2018), (critical) posthumanism (Bozkurt et al. 2018; Jandrić and Hayes 2020) and sociomateriality (Willems and Hafermalz 2021). These studies notably highlight the need to consider AI from a sociotechnical perspective, where the implementation of AI is one actor among others, including institutional regimes, management ideologies, pre-existing work practices, and the particular choices made in a local setting.
This group of research also directly addresses lifelong learning, outlining learning futures that are not confined to the workplace. Most of these papers’ authors advocate for a more expansive view of learning, where the home/work binary may no longer be productive, as technology structures life differently. The effects of technology must also be considered at the personal, group, and systemic levels (Dzubinski et al. 2012). This approach to research resists the need to anticipate every new technology, avoiding ‘mastery of the future,’ in which lifelong learning is subject to never-ending signals from the market (Edwards 2010).
Implications and the Path Forward
The categorization of existing research highlights several important areas of focus for future research addressing the role of AI in the context of lifelong learning.
First, there is a need for closer academic scrutiny of claims that AI straightforwardly leads to increased efficiency. Research must continue to explore the efficacy of AI where it is deployed, whether in the workplace or beyond, and move beyond narrow understandings of ‘what works.’ This involves understanding intended effects of the deployment of AI in society while also asking other questions, specifically around secondary or unintended effects (Oliver 2016).
Second, there is a need to deepen our understanding of who works and how they work. The focus on the individual becomes even more pronounced when researchers make use of Knowles’ theory of andragogy, which is notably focused on individual motivation and self-direction. This has important implications for understandings of technology in research: AI can act on individuals, act as individuals, or augment the tasks of individuals, rendering them more efficient. There is a need to shift towards more relational, postdigital perspectives on learning and on research.
Third, technology and learning are under-theorized by the first two groups of research. Existing theories of learning may have their limits, as they can be used to assign terms to thinking that are derived from or consonant with the functions of a computer (Friesen 2010). Beyond theories of learning, the influence of the human capital approach to learning on much of the work in this space must also be challenged, as it implicitly touches on theories of technology that persist in treating AI as a ‘tool.’
Finally, there is a need to reframe research questions to avoid technological determinism. A report from Microsoft Research recommends a path forward: Instead of asking ‘How will AI affect work?’ we might ask, ‘How do we want AI to affect work?’ (Butler et al. 2023). In addition, there is a strong need for more participatory, deliberative approaches to the implementation of AI technology in the workplace and beyond.
Ultimately, the way forward requires deeper reflection on the stories being told about AI, and in particular on the storyteller, the genre, and the communicative purpose of the story. Making it harder for those who benefit from the dominant narratives to shape them ought to be a focus for the research and technology community at large. By doing so, we can promote more realistic, nuanced, and inclusive stories about the role of AI in lifelong learning and beyond.