Artificial intelligence (AI) is already transforming our lives. Businesses are keen to adopt the technology to develop new products and services and to create efficiencies. Governments are looking to AI to solve some of the most pressing problems facing our communities. While AI brings important benefits, it is equally important to consider and respond to its risks, including those to privacy, equality, and the environment.
Given how quickly digital technologies, and AI in particular, are developing, some groups have recognized that an appropriate ethical framework was being left behind. A responsible approach to AI is essential to minimize negative impacts and to ensure the technology is used to improve individual and collective well-being.
“We need to always ask questions about the use of AI,” says Catherine Régis, a law professor at the Université de Montréal. “Is AI appropriate for a particular issue? Are there risks associated with its use? How can we reduce such risks? And who will benefit from this technology?”
A guide for responsible AI
Given that Canada is a global leader in AI research and talent, it isn’t surprising that Canadian institutions are leading the conversation on the development of responsible AI. Five years ago, the Université de Montréal and the Québec Research Funds launched a co-construction process, which included reflection and consultation with over 500 citizens, academic experts, entrepreneurs, and professionals. The result was the Montreal Declaration for a Responsible Development of Artificial Intelligence.
Speaking about the Montreal Declaration, Régis notes that an ethical framework was needed to guide the responsible development and deployment of AI. It provides a compass for our choices regarding AI, and for whether or how we should use it in different contexts. “This framework is important, and it must evolve with binding norms, such as legislation, that define clear expectations and potential sanctions for the AI community when those norms are not respected,” she says.
Accelerating ethical AI awareness
The Declaration is not merely a statement; it is a guide for any person or organization that wishes to participate in the responsible development of artificial intelligence. Several research centres, including IVADO, Mila, Algora Lab, and the International Observatory on the Societal Impacts of AI and Digital Technology, are already building on the Montreal Declaration in their projects. Education and training programs are also available to prepare the next generation of AI experts.
More than 2,200 individuals and 200 organizations have signed the Declaration. Recent consultations with signatories show that organizations often embrace the Declaration’s principles to create their own tools, sometimes adapted to their operational needs and organizational constraints. While some organizations have the resources to do this themselves, there is also strong demand for implementation support, including further development of practical tools that bring the Montreal Declaration’s principles to life in different contexts.
“The Declaration was not designed as a turnkey tool to solve all AI-related problems,” says Régis. “And while there’s a perceived tension between ethics and competitiveness for industry, the Declaration formulates general principles that can accompany an organization’s thinking, rather than constrain it with rigid requirements.”
According to Régis, the Declaration’s most notable impact is the awareness it raises of AI’s ethical issues. “This is a necessary first step to reduce potential negative impacts of AI and to think about solutions to develop and deploy it in a responsible way,” she says.