AI on Campus: As policy divides emerge, skills development becomes the inflection point

AI is the pivot around which Canadian universities are shaping both curriculum and policy right now. Campus leaders and faculty are pushing students to become literate in these tools, while unions and instructors warn that unclear rules could hollow out labour and learning. That tension marks a clear inflection point for higher education.

Why is this moment a turning point?

Gabriel Miller, president and CEO of Universities Canada, framed the current moment as one in which campuses are not only developing AI technology but teaching students to use it responsibly. Miller emphasized that universities are exposing students to tools they will need to be work-ready, while also preserving spaces for traditional critical thinking and human skills. He highlighted a push among some instructors to return to pen-and-paper exams to ensure those human skills remain central.

At the same time, a parallel policy debate is unfolding between institutions and labour groups. The University of Ottawa hosted a symposium that positioned generative AI as a literacy challenge rather than an existential threat. Carleton University has issued guidance encouraging instructors to specify how generative AI may be used in courses, linking practices to academic integrity, data protection and environmental considerations. But unionized academic workers, including CUPE 2626 and CUPE 4600, have raised alarms that vague policies could enable reductions in teaching assistant hours, automated grading and deeper casualization of academic work.

What happens when AI policy diverges on campus?

The divide between institution-led literacy approaches and union concerns about labour impacts is already shaping classroom practice. David Knox, a professor in the University of Ottawa’s School of Engineering Design and Teaching Innovation, described a progression some campuses followed: initial bans on generative AI, then more granular calls to identify what constitutes academic misconduct, and ultimately devolved responsibility to individual instructors. Knox warned that uneven adoption can polarize outcomes in classes: stronger students may use AI to deepen understanding, while weaker students may lean on copy-and-paste outputs that short-circuit learning. That dynamic risks widening existing gaps unless policies explicitly protect both learning quality and labour standards.

How universities are developing AI skills — scenarios and practical takeaways

  • Best case: Campuses integrate AI literacy into curricula while safeguarding assessment methods that preserve critical thinking. Universities deepen industry links and public infrastructure investment, enabling students to translate research into entrepreneurial activity and work-ready skills.
  • Most likely: A decentralized model persists. Instructors set course-level rules, creating patchwork practices across campuses. Students gain exposure to tools, but uneven policy and varying labour protections produce mixed outcomes for learning and employment.
  • Most challenging: Vague or inconsistent policies accelerate casualization as tasks are automated without clear labour protections. Teaching assistants and contract instructors see hours and roles erode, while weaker students fall further behind if assessment regimes do not adapt.

Practical takeaways follow directly from the facts on the ground: maintain assessment spaces where human skills are tested; ensure students gain hands-on exposure to AI tools so they leave school literate and work-ready; strengthen ties between academic research and industry to retain commercial benefits; and negotiate clear, enforceable policies that protect labour while integrating new technologies.

The immediate policy choice facing campuses is not whether to engage with AI but how to do so in ways that advance learning, protect workers and translate research into economic opportunity. Stakeholders from Universities Canada to campus unions have signalled overlapping goals — better literacy, stronger industry links and clear labour protections — even as they disagree on implementation. The most constructive path will be collaborative: preserve critical-thinking assessments, provide structured AI exposure for students, and develop enforceable policies that prevent the casualization of academic work while enabling innovation and entrepreneurship. Watch how institutions reconcile those pressures; the outcome will determine whether campuses successfully build AI skills without sacrificing learning or labour standards.