An inclusion and equity lens is needed so that technology doesn’t amplify societal biases and further marginalize minoritized folks. In the new age of artificial intelligence and the future of work, women risk being left behind. The 2020 World Economic Forum report found that women hold only 26 per cent of data and AI positions. Diverse and inclusive teams mitigate this risk, and we can begin by increasing the number of data-literate women and by upskilling and supporting women currently in data professions.
At Toronto Womxn in Data Science, we aim to empower a million women to become data literate, increase the recruitment and retention of women in data professions, and influence inclusive innovation. In 2018, we hosted the first Women in Data Science Conference in Canada, and since then, we’ve built a community of over 3,600 women. Our programs include a podcast, resource hub, events and a job board to help further our mission. We have a national presence and are building steam in the United States. Our 6th Annual Conference & Awards will be on April 26 and 27, 2023.
A quick search for AI bias turns up headline after headline about the shortcomings of products and services powered by AI. Various industries are represented, and diverse communities are consistently impacted: there are stories about data privacy, housing, health, justice, social welfare, recruitment, and more. These headlines spotlight the gaps in the products and services powered by AI.
Do you remember Maslow’s hierarchy of needs? Most commonly depicted as a pyramid, it shows human motivations from the most basic needs to the highest aspirations. From bottom to top, the sections are physiological needs, safety needs, love and belonging, esteem, and, at the peak, self-actualization.
Most of the headlines you’ll find when searching for AI bias fall within the fundamental physiological and safety needs: being denied a mortgage, being filtered out of a recruitment process, or being misdiagnosed. When tech threatens these essentials, it can derail the journey to self-actualization and leave minoritized communities behind.
Why the status quo can no longer hold
AI relies on historical data which, if left unchecked, further amplifies the biases that exist in society. The risk is not only that datasets are biased but also that some data is never collected: some social issues that impact minoritized groups never receive the attention and resources needed to produce creative solutions. Other minoritized groups, meanwhile, experience excessive surveillance. We can shift the landscape and build more inclusive tech that causes less harm by diversifying the AI community and making inclusion principles part of the culture.
It’s not enough to put existing AI practitioners through diversity and inclusion training, partly because they are subject to the privilege hazard. Those who suffer from the privilege hazard are poorly equipped to recognize oppression in the world: no matter how proximate they become to minoritized groups, their lack of lived experience limits their ability to foresee and prevent harm. Diversifying the AI field isn’t a step we can skip, and we should start with women.