
The Canadian Push for Responsible Artificial Intelligence


Artificial Intelligence (AI) is omnipresent in our world today. Our lives are made easier by AI tools, from highly visible assistants such as smart homes and self-driving cars to the more discreet help of recommendation algorithms, facial recognition, and live-stream caption translation. Moreover, our jobs are changing as old business practices are revolutionized by intelligent automation and AI-powered customer insights, and new business models arise thanks to Industry 4.0 innovations. From health care to retail to politics to entertainment, our daily interactions with modern society are helped or, in some cases, made possible by AI tools.

A lot of attention is given to advances in algorithms. Landmark public successes of AI, such as AlphaGo’s victory against professional Go player Lee Sedol, are widely reported in the popular press. In research and development circles, it is rather the increasing availability of pre-trained deep networks, especially for image processing and language understanding, that is generating excitement. And while these are all noteworthy achievements, the focus on algorithms tends to overshadow the second pillar of good AI systems: data. An AI system is only as good as the data it is trained on, and the same algorithm can surpass every benchmark or be worthless depending on what data it learns from. The pre-trained deep networks made available online are exciting not only for their network architectures, which anyone can replicate, but for the quality of their training on datasets of unprecedented size. AlphaGo, while algorithmically clever, could only win against Sedol after training on tens of millions of games, far more than Sedol played in his lifetime.

Good data makes for good AI, but bad data does not simply make for bad AI: it can make for harmful and socially dangerous AI. Poorly trained AI technologies can have unintended consequences that promote discrimination, reinforce inequalities, infringe upon human rights, disrupt democratic processes, and intensify the unfair treatment of marginalized and minority groups [1]. To understand this, one must realize that data fundamentally stems from humans: it is generated by humans, collected by humans, and labelled by humans. All humans have personal biases, preferences, and prejudices, and these often end up accidentally embedded in datasets. In turn, AI systems trained on biased datasets learn the underlying prejudices as their ground truth and strive to replicate them faithfully.
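To make the mechanism concrete, here is a minimal, hypothetical sketch using entirely synthetic data (it is not drawn from any of the systems discussed below): a simple classifier is trained on past hiring decisions that favoured one group regardless of skill, and it dutifully reproduces that bias when scoring two equally skilled candidates.

```python
# Illustrative sketch: a model trained on biased decisions learns the bias.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)   # 0 = historically favoured group, 1 = other
skill = rng.normal(size=n)           # true competence, identical across groups

# Historical labels: hiring favoured group 0 independently of actual skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates who differ only in group membership:
candidates = np.array([[0, 1.0], [1, 1.0]])
print(model.predict_proba(candidates)[:, 1])
# The favoured-group candidate scores far higher: the model has absorbed the
# historical prejudice and now treats it as ground truth.
```

Nothing in the code mentions discrimination, yet the group attribute carries predictive weight simply because the historical labels did.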

Examples of these failures of AI abound today. Amazon’s employee recruitment AI was trained using the CVs of current employees; in the male-dominated company, the system learned to devalue women applicants regardless of competence [2]. Apple’s credit card program faced the same issue: after training on a gender-biased credit history dataset, it learned to offer men credit limits ten to twenty times higher than those offered to women with exactly the same financial assets [3]. The Netflix movie recommendation algorithm trained itself on real viewers’ watching habits, including racialized users’ preference for inclusive shows that give fair representation to their race; it then learned to misrepresent white-dominated movies to these users by generating misleading preview posters showcasing token minority actors [4]. Likewise, crime prediction AI meant to optimize the use of police resources learned, from police officers’ history of racially selective law enforcement, to concentrate resources on heavily policing minority communities and neighbourhoods [5]. The predominance of white men in image datasets has made person detection and facial recognition AI very good at recognizing such users but very bad at recognizing women and racialized people [6], with consequences ranging from people of colour being disproportionately misidentified as wanted criminals [7] to being more likely to be hit by self-driving cars [8].

The Canadian AI community has adopted a unique strategy in responding to these problems: mobilizing towards the goal of responsible AI development through a transdisciplinary approach. Computer scientists alone cannot identify, understand, and solve all the issues that stem from the misuse of AI; this challenge requires input and expertise from every area impacted by AI, from health care to finance to the social sciences. Rising to the challenge, researchers and practitioners across these fields are coming together to develop new solutions for algorithmic fairness, ethics, transparency, and accountability. Together, they wrote the Montreal Declaration for Responsible AI [9] and created the International Observatory on the Societal Impacts of AI [10], both with the mission of encouraging researchers worldwide to commit to ethically responsible AI projects. They are also joining forces in new research and training programs created to promote ethical AI design and accountable AI applications, such as the NSERC-funded program on the Responsible Development of AI [11]. Through these cohesive, transdisciplinary initiatives, the Canadian scientific community is making headway on this complex technological and social issue.

Members of the public also play a critical role in this transdisciplinary effort toward responsible AI. They are the ultimate users of AI systems, the ones who see and experience first-hand the negative impacts of bad AI systems and who bring these impacts to the forefront [12, 13]. Only through such first-hand accounts can researchers, developers, lawmakers, and the public fully understand the sheer impact these systems can have. These reports not only help identify failures of existing AI products that need to be corrected but also serve as cautionary tales, reminding us all how critical it is to design this technology in an inclusive and fair manner.

This summer, for the second year in a row, the Canadian AI Conference [14] will host a dedicated Responsible AI track in its program. In addition, the Canadian AI Association (CAIAC) [15], which organizes the conference, is partnering with the Canadian Institute for Advanced Research (CIFAR) [16] to reach out to tech-minded students in underrepresented minority communities, such as First Nations, and to facilitate their active participation at the conference and their engagement with the broader Canadian AI community. These initiatives are laying the groundwork for the future of AI in Canada: a future that will be more inclusive, fairer, and more responsible.


References

[1] https://www.unesco.org/en/artificial-intelligence/recommendation-ethics 

[2] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G 

[3] https://www.nytimes.com/2019/11/10/business/Apple-credit-card-investigation.html 

[4] https://www.nytimes.com/2018/10/23/arts/television/netflix-race-targeting-personalization.html 

[5] https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ 

[6] https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/ 

[7] https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28 

[8] https://www.businessinsider.com/self-driving-cars-worse-at-detecting-dark-skin-study-says-2019-3 

[9] https://recherche.umontreal.ca/english/strategic-initiatives/montreal-declaration-for-a-responsible-ai/ 

[10] https://observatoire-ia.ulaval.ca/en/ 

[11] http://responsible-ai.ca/ 

[12] https://www.bbc.com/news/technology-33347866 

[13] https://www.youtube.com/watch?v=t4DT3tQqgRM&ab_channel=wzamen01 

[14] https://www.caiac.ca/en/conferences/canadianai-2023/home 

[15] https://www.caiac.ca/ 

[16] https://cifar.ca/ 
