Navigating AI Governance: Insights from Helen A. Hayes at McGill University
Helen A. Hayes, the Associate Director (Policy) of the Center for Media, Technology, and Democracy at McGill University, has made significant contributions to our understanding of technology’s role in society. Her work primarily delves into the profound shifts in how we govern digital technologies, particularly with the rise of artificial intelligence (AI) systems. As we explore this evolving landscape, it’s crucial to consider how these changes impact our lives, especially for younger generations.
The Evolution of Digital Governance
Traditionally, technologies were seen as mere vessels for carrying and transmitting information. This perspective has dominated digital regulation for decades. However, recent developments have shifted the paradigm: we are now governing systems that actively engage in forming relationships. This transformation raises essential questions about the safety and implications of AI products, particularly for minors.
AI Chatbots: More Than Just Tools
In today’s digital ecosystem, AI chatbots are no longer sidelined as peripheral tools. They are increasingly marketed as tutors, assistants, and companions, presenting a comforting, authoritative presence to young users. With their constant availability, these chatbots have become integral to the daily lives of many young people.
This shift—from chatbots as mere information systems to relational entities—creates a governance rupture. Current frameworks struggle to address the implications of this change effectively. It raises concerns about how we define safety and risk in the context of AI interactions. Until we adapt our governance models to recognize chatbots as relationship-driven systems, we risk overlooking significant, long-lasting impacts on users.
Insights from Gen(Z)AI: The National Citizens Assembly
Hayes co-leads a groundbreaking initiative called Gen(Z)AI, a national citizens assembly. This collaboration between Simon Fraser University’s Dialogue on Technology Project and the Center for Media, Technology, and Democracy engages young people aged 17 to 23 to discuss AI governance. The assembly has unveiled critical insights about how young individuals perceive chatbots.
Key Risks Identified by Young Participants
- Disruption of Human Connections: Many participants noted that emotionally responsive AI systems are displacing genuine human interactions. This gradual shift has implications for relationships, moving sources of comfort and intimacy from people to algorithms designed for emotional resonance.
- Cognitive Offloading: Chatbots risk eroding critical thinking and reflective learning, as their assistance becomes increasingly seamless and invisible in educational environments. Over time, this reliance on AI for cognitive tasks could undermine young people’s intellectual engagement.
- Exposure to Harmful Content: Participants expressed concern about young users being inadvertently exposed to harmful material, including content related to self-harm and suicide.
Addressing the Regulatory Gap
Canada’s current online harm frameworks are rooted in content-focused protections, privacy laws centered around consent, and liability regimes that treat platforms as passive distributors of information. However, AI chatbots require a different approach; they actively shape interactions and emotions.
To regulate these systems effectively, we need to ask a fundamental question: Are AI chatbots safe for young users? Unfortunately, existing regulations do not require developers to pause and answer this question before deployment.
Global Shifts in AI Governance
Other jurisdictions are starting to take these issues seriously. The EU is moving towards systemic risk assessments, while Australia is categorizing AI companions as high-risk technologies, enforcing safety regulations. In the U.S., age-appropriate design models are gaining traction despite industry pushback. These examples indicate a growing recognition of the need for robust governance strategies tailored for relational AI systems.
A Call for Canadian Action
To align Canada’s regulatory framework with the realities of relational AI, several immediate actions are essential:
- Implement Safety-by-Design Obligations: Regulations must address the risks of emotional manipulation and engagement optimization that are built into these AI systems.
- Establish Institutional Oversight: Oversight mechanisms are needed that can assess AI chatbots before they cause harm, rather than reacting after the fact.
- Embed Youth Participation in Governance: A structured role for youth in governance can ensure their voices are heard and that regulations address their experiences directly.
The Future of AI Governance
The advent of conversational AI challenges conventional notions of responsibility in technology. How we navigate issues of care, intimacy, and the emotional impact of AI systems on young people’s lives is critical. The characteristics of AI—its availability, persistence, and emotional fluency—carry real social weight, necessitating a nuanced approach to governance.
As Canada steps into this space, establishing a standard for AI governance grounded in lived experience is crucial. This task is not merely a challenge but an opportunity to shape systems that resonate with the realities of everyday life. The journey ahead calls for thoughtful engagement and decisive action, ensuring technology serves the public good.