Science & Technology

Navigating the government’s guide to employing generative AI in the public sector

Nov. 30 marked the one-year anniversary of OpenAI’s release of ChatGPT. In a remarkably short period, this generative AI (GAI) tool has brought tremendous changes to everyday life. Between massive layoffs in administrative professions and widespread controversies, such as the debate over AI in classrooms, it seems crucial to survey the implications of such a tool.

In September 2023, the Treasury Board of Canada Secretariat (TBS) published a Guide on the Use of Generative AI, offering an overview of GAI alongside challenges and concerns for responsible use and policy considerations. Researchers from McGill and the University of Toronto were invited to submit feedback on the guide. They discuss their comments in a recent paper that situates GAI in its sociopolitical context while analyzing the guide’s strengths and weaknesses.

Despite assessing the guide as “fit for purpose,” the paper identifies three key pathways to strengthen it: direction on drafting new federal legislation for more comprehensive and enforceable rules, ethical sourcing practices, and environmental impact mitigation.

The researchers emphasize the need for a more enforceable policy framework, one that offers greater accountability and builds public trust in GAI in the public sector. Regulating foundation models is particularly crucial because many GAI applications are built on top of them. Pre-existing biases in these models are almost impossible to detect, and the data harvested for their training spans both public and private sources, raising concerns about copyright violations.

Moreover, GAI requires extensive data input. Data workers must clean, annotate, and prepare raw, often harmful material, removing biases and graphic content to produce ‘quality’ training data. Worker protection, a pillar of the proposed ethical sourcing practices, is frequently compromised. For instance, Meta, OpenAI, TikTok, and other Big Tech companies ignored Kenyan labour laws for their data workers in Kenya. Even though these companies are valorized for kick-starting the GAI revolution, they are infamous for hiding the extent of the harm; severe mental distress and even suicides among content moderation workers are well-documented.

“As long as the status-quo business model remains profit maximizing and shareholder-driven, the exploitation will continue,” Ana Brandusescu, a PhD candidate, and Renee Sieber, an associate professor, both researchers in McGill’s Department of Geography, wrote in an email to The Tribune.

According to Brandusescu and Sieber, extending Canadian labour laws to international data workers could help curb such exploitative practices.

“We have to go beyond soft law such as guidelines and standards to more concrete and consequential measures that are respected and enforced,” Brandusescu and Sieber wrote. “[Possible measures could be] prohibiting federal agencies from using certain products and services, remove offending companies (e.g., those who violate human rights laws) from preferred vendors’ lists, and to name and shame companies who do not comply with rules and regulations.”

The paper commends the guide for its attention to GAI’s environmental impact, but suggests a more comprehensive examination of the harms incurred at each phase of GAI’s software and hardware lifecycle. Despite the immense greenhouse gas emissions and water consumption of AI data centres, AI’s environmental footprint often goes unacknowledged.

A more thorough guide may also encourage companies to adopt feasible solutions, such as energy-efficient architectures or carbon-awareness tools. Deep Green, a UK-based company that pairs data centres with swimming pools so that the heat the servers generate warms the water, offers a creative alternative to letting that energy go to waste.

GAI is driving radical changes across numerous areas of social life while posing substantive harms. Addressing it requires tangible enforcement of regulatory policies; naive, hypothetical discussion alone will not propel much change.

“Is it worth it for one of us to have AI-enabled convenience if it means that three people will lose their jobs, [intellectual property (IP)] will be stolen, and workers in low- and middle-income countries will be exploited?” Brandusescu and Sieber wrote.

Now is the time to ponder what benefits GAI brings, to ask whether those benefits are actually being distributed economically and socially, and to pay consistent attention to relevant policies so that suggestions like those above can come into effect.
