Universities must avoid AI 'arms race' as teaching adapts, says Kortext's Rachel Maxwell

Rachel Maxwell, Kortext's director of sector engagement and academic research, tells higher education leaders that institution-level policies alone won't be enough to help lecturers adapt teaching for AI, and warns against an AI 'arms race'.


Rachel Maxwell, director of sector engagement and academic research at Kortext, told higher education leaders that institution-level policies are unlikely to be the decisive factor in helping lecturers adapt teaching for artificial intelligence.

"In , we set out to find out what higher education institutions are doing to support and develop educators to navigate the opportunities and challenges that AI brings in its wake," Maxwell said, summarising the project’s purpose and the evidence it has gathered about how campuses are responding.

The report finds wide variation across campuses: many institutions are rolling out AI literacy and leadership work, and individual practice is shifting, but attitudes and confidence in AI, and in some cases in pedagogy itself, vary dramatically. Maxwell warned that this patchwork of confidence and practice means centralised, one-size-fits-all policies are unlikely to change classroom practice on their own. "Institution-level policies or systematic development interventions that take academics out of their disciplinary context most likely aren't going to be the critical factor in supporting educators to develop their teaching practice," she said.


That judgment matters now because universities are not only debating guidance and training; they are making procurement and operational decisions that will shape learning and assessment for years. The report underlines that existing core software will increasingly carry some form of AI capability, and that some core institutional systems may be transformed through AI-enablement or made more effective and scalable. Those shifts force institutions into contentious choices about what is legitimised within their learning and teaching environment, including which forms of technology to support or mandate.

Maxwell also laid out the practical upside of current institutional work: many of the literacy and leadership initiatives help create a shared understanding of the AI challenge, including the processes and professional practices that individuals are already using AI to enable. Such initiatives can provide core ethical and professional guardrails so that enthusiasts can experiment safely, and they create space to explore thorny institutional questions, most notably how to manage assessment when students and staff can use AI tools in many different ways.

The tension in the report is stark. On the one hand, individual academic practice evolves in the context of an institutional learning and teaching environment; on the other, Maxwell says the desired future state is for individual academics and course teams to come to their own informed and critical decisions about how AI changes their discipline and the implications for pedagogy and curriculum. "The wide variation in attitudes to and confidence in, not just AI, but to some extent, pedagogy, makes achieving this outcome across the board quite tricky for education leaders to execute," she said.

That gap between institutional ambition and the reality of disciplinary teaching creates the most immediate risk. Decisions about what technology to procure, adopt, or mandate need to be informed by a critical perspective on AI, the report says, because a blanket push to be AI-enabled across every function is unlikely to achieve much in the long run. "The goal should not be a terrifying AI 'arms race,' as a leader at one small institution put it, to adopt every possible bit of AI tech available until the money runs out," Maxwell cautioned.

The report's framing is deliberately cautious: the exact impact of AI is still unknown and emergent. That uncertainty steers its central prescription, which is not an endless procurement sprint but selective, evidence-informed adoption where AI is useful or unavoidable, paired with local professional judgement about assessment, curriculum and pedagogy.


What happens next is a practical test of that prescription. Institutions will continue to roll out literacy and leadership programmes that create shared understandings and guardrails, while academics and course teams must be empowered to translate those frameworks into disciplinary practice. Procurement choices will reveal whether universities choose a measured, critical approach or slide toward broad AI enablement that may yield little instructional gain and pose hard questions about assessment legitimacy.

The report leaves little doubt about the direction Maxwell favours: steer away from an arms race, invest in shared literacy and ethical guardrails, and trust academics to make nuanced, discipline-specific decisions about when and how to use AI in teaching. That combination, the Kortext research argues, is the most likely path to a sustainable learning and teaching environment as AI’s precise effects continue to emerge.
