Commentary, Opinion

Albania’s new AI minister is begging for failure

Earlier this month, Albania’s prime minister Edi Rama presented a novel push in technology: an AI minister named Diella, dressed in traditional Albanian clothing. Diella’s responsibilities include combating corruption, awarding public tenders for infrastructure projects, and guiding users through Albania’s government websites to ensure easy access to legal documents. 

The introduction of AI into government structures is not unique to Albania; the Quebec Superior Court recently introduced a pilot project empowering judges to use artificial intelligence for research and writing assistance while judging cases. The Court has created 10 distinct AI assistants, each with its own specialty in Quebec law. 

As AI grows in prevalence as a tool of institutional modernization, McGill students and faculty members, too, are adopting the technology to accomplish academic tasks. McGill has already implemented AI chatbots from Microsoft’s Copilot system to aid professors in communicating with students and other faculty. Yet with instructors given relative autonomy in their use of AI, professors must use the generative tool ethically, maintain academic integrity and transparency, and verify the accuracy of the AI-informed content presented in their classrooms. 

While it is tempting to hastily and indiscriminately harness the power of AI for the sake of efficiency and perceived progress, McGill—like Quebec’s courts and Albania’s parliament—must guard against the dangers that institutional reliance on AI technology poses: principally, its struggles with accuracy and the thorny questions of accountability that arise when these bots malfunction. Reliance on AI in complex institutional settings risks introducing new complications while leaving systemic issues unaddressed. 

In the case of Albania, Diella represents a shallow attempt to solve the Albanian corruption crisis. Prime Minister Rama first introduced Diella at a Socialist Party conference on Sept. 11, insisting that the use of AI in the awarding of tenders would help remove bias. According to Transparency International, Albania faces high levels of corruption, scoring 41/100 on the Corruption Perceptions Index, a problem Rama is attempting to tackle in line with the nation’s goal of joining the European Union by 2030. Using AI to achieve high-profile goals may seem to signal Albania’s dedication to being at the forefront of technological progress in the Balkans, but it instead testifies to the nation’s reckless disregard for the risks of hyperreliance on AI technology.

Applications of AI in other settings reveal the shortcomings of relying on machine learning in high-profile contexts. By July 2025, studies showed that up to 40 per cent of companies had rolled back AI initiatives, voicing concerns over reliability and serious bugs. Google’s AI model Gemini was shown to display emotional breakdowns and refuse to finish prompts, with developers admitting they could neither explain nor understand the model’s behaviour. If Diella were to suffer similar breakdowns, the ramifications could be serious, such as halting infrastructure development in the country or terminating existing contracts and deals.

More worryingly, a recent report found that AI models were even willing to disobey commands and blackmail prompters when the bots were threatened with shutdown or suspected they would be turned off. If Diella were to exhibit similar behaviours, the consequences could be far greater than those seen in commercial chatbots, as Diella has extensive access to government databases and the authority to award contracts. 

AI also suffers from unintelligibility in its reasoning, a phenomenon known as the black box effect. This makes generative models’ actions and decisions almost impossible for users to understand, stirring concerns about controlling AI behaviour. Diella’s motives could be hidden from Albania’s government; Quebec judges could offer verdicts with nonsensical rationale; McGill communications could be impaired and rendered incomprehensible. 

The push for AI integration is too aggressive, and much of its implementation is seemingly more symbolic than practical. As institutions race to appear more technologically progressive, they must resist the temptation to adopt AI merely for prestige or convenience. To be a leader in innovation is not to place blind faith in algorithmic solutions.
