
The State of AI Ethics Report Volume 7 


Connor Wright

Director of Partnerships, MAIEI

Ismael Kherroubi Garcia

Editorial Manager, MAIEI

Kei Baritugo

Director of Global Marketing Communications, MAIEI


Conversations about AI often focus on dollars and cents, moving from one set of billion-dollar investments to the next. Rarely do they unpack how these investments affect people: their communities, livelihoods, and ways of being. The question shouldn’t be “where next for the AI revolution?” but rather: how does the introduction of AI into voice-over acting affect the daily work of these professionals? How are university students finding the transition into sharing the classroom with large language models (LLMs)? Why is AI inspiring medical trade union bodies to take back control and oversight of its implications for healthcare?

The Montreal AI Ethics Institute’s State of AI Ethics Report (SAIER) Volume 7, AI at the Crossroads: A Practitioner’s Guide to Community-Centered Solutions, provides insights into those questions, covering the application of AI across a variety of sectors, including entertainment, the military, governance, and more.

The report highlights community-led solutions to the widening gap between those building AI systems and those impacted by them. In doing so, the SAIER examines this structural divide across 17 chapters containing 48 essays about real-world case studies and practitioner perspectives from 58 international contributors.  

While the SAIER does not represent all voices across all sectors impacted by AI, it serves as a backdrop against which to analyze evolving developments across the AI landscape. Drawing on experiences from Africa, Europe, Asia, and North America, the report offers a rich, grounded account of the distinct problems facing different parts of the world.

Mapping Impacts 

One such problem is the environmental impact of AI. Priscila Chaves Martínez explains that practitioners inherit the whole of an AI system’s supply chain: extraction, fabrication, operation, and disposal. This impact should be measured to inform compute budgets and be communicated transparently.  

Regarding AI surveillance in public spaces, Roxana Akhmetova of the University of Oxford argues that the sources of the data on which algorithms are trained should be examined to reveal which human biases are being encoded into the automation.

Within the education sector, Tamas Makany and Ivy Seow of Singapore Management University write that professors must be careful their students are not outsourcing their critical thinking to AI, and that they can counter this by building classroom cultures that encourage curiosity and allow for mistakes as part of the learning process.

On the ground in Canada 

Among the plethora of viewpoints within the report, Canadian voices feature prominently.  

On the topic of labour, Ryan Burns (University of Washington Bothell) and Eliot Tretter (University of Calgary) analyze AI’s impact on petroleum extraction in Alberta (Canada’s most prominent oil and gas producer), emphasizing the need for a just transition towards non-carbon fuel alternatives that minimizes the impact on workers.


Given the rise of deepfakes and AI actors, representatives from the Alliance of Canadian Cinema, Television and Radio Artists (ACTRA) share their recently agreed-upon AI protections revolving around consent, compensation, and control. Amanda Silvera, a Canadian voice-over actor, reinforces this approach through her protective framework that licenses and monitors AI’s use of someone’s likeness or voice. 

An Indigenous perspective 

It is crucial for the SAIER to include and acknowledge Indigenous perspectives, which MAIEI holds as essential to productive conversations on the future of AI and, specifically, AI governance.

Denise Williams, former CEO of the First Nations Technology Council, outlines how to apply seven-generation thinking to AI governance, honouring the generations that have come before and those yet to come when tackling AI-related issues. Williams suggests that, at any stage of the AI lifecycle, our actions should be guided by three questions: How will this system affect our languages, lands, and grandchildren? What stories will it amplify, and which might it erase or change? What will future generations inherit from our design choices today?

Without paying adequate attention to Williams’ and other Indigenous viewpoints, Canada risks homogenizing its AI space and losing sight of perspectives central to the nation’s identity.

Looking ahead 

The SAIER presents a snapshot of the complex AI ethics landscape, enabling diverse AI stakeholders to ground discussions and decisions in actionable insights.  

As Renjie Butalid, our Director and Co-founder, states in his opening foreword: “AI isn’t just about machines, models, or infrastructure. It’s about people.” To keep humans at the core of AI research, we must continuously ask ourselves what role we want humanity to play in shaping a future increasingly mediated by algorithms. Our response is the SAIER Volume 7, a community-led attempt to keep the question of humanity’s role in developing AI alive.


Subscribe to our newsletter, the AI Ethics Brief, to help support our mission to democratize AI ethics literacy.
