
Canada’s AI strategy risks further propagating anti-Black racism

In September 2025, Minister of Artificial Intelligence (AI) and Digital Innovation Evan Solomon created the federal AI Strategy Task Force to provide recommendations on the role of AI in Canadian economic and social life. The Task Force conducted an extensive consultation with over 11,300 industry leaders, academic researchers, and civil society stakeholders to inform the government’s AI strategy, with particular emphasis on ethical research, transparent regulation, sovereign infrastructure, AI literacy, and security safeguards.

Yet its composition and policy vision contain a critical failure: By excluding meaningful Black representation and refusing to directly confront how AI systems reproduce anti-Black racism, the Task Force has condoned and enabled racial harm across the very infrastructures AI is being built to govern.

On paper, the Task Force presented itself as an assembly of expert opinion and guidance: a time-limited advisory body convened to generate ‘actionable’ recommendations on Canada’s AI development, governance, and use. Beneath this consultative framing, however, lies a structural absence of racial equity. Sixty Black Canadian scholars have publicly criticized their underrepresentation on the Task Force. No part of the strategy is dedicated to equity in AI, and when equity does appear, it typically refers to equity of access rather than to whether AI tools themselves function equitably. In an open letter to Minister Solomon, over 40 groups and more than 100 individuals expressed concern that the AI strategy could automate anti-Black racism into decision-making tools used by the government, public sector, and private industry alike. By downplaying regulatory safeguards, the strategy prioritizes commercialization and global competitiveness, reflecting a preference for economic advancement over harm prevention.

AI systems already produce racial disparities in policing, immigration, facial recognition, hiring, loan rates, and health care allocation. These outcomes reflect the absence of marginalized voices in the design of these systems and the strategies that govern them. Workforce exclusion intensifies the problem: Black workers remain overrepresented in the sectors most vulnerable to automation while underrepresented in the industries designing these systems, further widening racial wealth and labour gaps.

AI’s capacity to reinforce systemic discrimination is a product of its design; bound by the data it is trained on, AI replicates the discriminatory nature of its inputs and is unable to self-correct. An MIT study on facial recognition found near-perfect accuracy for light-skinned men but error rates exceeding 34 per cent for dark-skinned women, reflecting the lack of diversity and representation within the training datasets for such software.

Studies on large language models reveal similar dynamics: Prompts such as “Black people are ___” generate disproportionately negative traits and associations. Though overtly racist outputs have declined through corporate filtering, covert bias persists, with software assigning lower-paying jobs, harsher criminal outcomes, and deficit-based characterizations to Black individuals. Without representative development teams, transparent datasets, and continuous auditing, AI systems risk formalizing anti-Black racism within the infrastructures governing social and institutional life.

Generative AI also has significant environmental implications. Data centres consume immense amounts of energy, extract water for cooling, and rely on mined minerals, practices that degrade ecosystems and produce major carbon emissions. As these facilities proliferate, their environmental burdens are unevenly distributed. Environmental racism scholarship has long documented how polluting infrastructure is disproportionately placed in marginalized communities.

This pattern is visible on a global scale, from contaminated water crises in predominantly Black municipalities to the concentration of industrial and digital infrastructure in racialized neighbourhoods. In Africville, a historic Black community in Halifax, residents were denied sewage and water services while landfills, slaughterhouses, and infectious disease facilities were built nearby, posing severe health risks to community members. As AI is increasingly integrated into urban planning and infrastructure modelling, such systems risk reproducing these same spatial inequalities, recommending the placement of high-emission facilities in the very communities that already bear disproportionate environmental risk.

AI bias extends into education as well. Automated admissions, grading systems, and classroom tools are often deployed without critical oversight. Yet universities remain fundamentally underprepared. At McGill, AI governance is still framed primarily in terms of academic integrity rather than structural equity. While existing AI policies have acknowledged bias, they lack tangible enforcement mechanisms, shifting responsibility to individual students and instructors.

Canada’s AI strategy cannot be equitable without Black representation embedded at every level of design, regulation, and deployment. As AI infrastructure expands, Canada must now determine whether technological advancement will mitigate historical injustice or continue mechanizing it. 
