My friend AI told me
Navigating the blurred lines between digital companionship and corporate efficiency
The end of 2022 was marked by the public release and rapid democratization of OpenAI’s ChatGPT, one of the first generative artificial intelligence (AI) tools to become widely accessible to the general public. Since then, AI’s presence has been rapidly increasing in our daily lives and integrating itself into many of the tools we use, such as social media platforms or search engines. This gradual shift has changed the way we work, learn, and go about our personal lives, driven in part by the constant promotion and integration of AI across digital platforms and everyday technologies.
This technology has prompted growing questions surrounding its use. As students and professionals are increasingly urged to embrace AI for the sake of survival in a competitive landscape, the technology is often framed as an inevitable partner. However, we must remain cautious: By prioritizing algorithmic convenience over genuine human effort, we risk trading our critical thinking for a “frictionless” efficiency that hides a deeper cost to our creative and intellectual autonomy. AI remains a relatively recent technology and cannot always be considered a fully reliable tool.
This is one of the main ideas defended by Adam Dubé, associate professor of Learning Sciences and Director of the Technology, Learning, & Cognition (TLC) Lab, in an interview with The Tribune. His work focuses on educational technology and cognitive development.
Dubé’s research on home voice assistants highlights what he calls the “theory of artificial minds.” The concept is inspired by the “theory of mind,” which describes the human ability to understand the thoughts, beliefs, desires, and emotions of others. Dubé’s studies show a clear cognitive evolution in children: Younger children—those around four years old—often attribute human intentions to these devices, while older children—around eight years old—learn to see them as programmed machines. This shift from perceiving AI as something that is almost alive to understanding it as a technical tool is an important cognitive step. Yet many users, including adults, remain stuck in that earlier phase of emotional trust, treating AI responses as if they came from an intentional and reliable source.
This emotional trust leads to a deeper pedagogical risk: The shift from using AI as a support to using it as a substitute. Dubé explains that the danger lies in prioritizing the final product over the cognitive effort required to create it.
“Generative AI enables students to produce more polished writing, but the tool is doing the writing for them. They are submitting better assignments, but they aren't necessarily learning how to develop better ideas. So students have better assignments […] but [they] are not learning how to make better assignments.”
This shift suggests the emergence of what could be described as “performance dependence,” a phenomenon where the final output becomes more important than the human process used to achieve it. In this state, results and productivity are prioritized over actual mastery of the subject, and as this dependency grows, the focus moves from the human process of learning to the machine's ability to perform.
This pressure is already manifesting in the workplace. In an Instagram survey conducted by The Tribune, one respondent echoed this sentiment.
“AI is deeply evil and harmful to our world and brains, but my boss mandates we use it.”
This highlights a growing tension: While AI is a powerful tool for efficiency, it may come at the cost of the intellectual autonomy and critical thinking that humans are supposed to maintain.
Dubé points to what he describes as a fundamental mismatch in how people approach this technology.
“Most students don’t use AI to help structure their thinking, [rather] they use AI to do the thinking for them,” he explains. “These systems are designed to provide answers that satisfy the user as quickly as possible so they keep using them. They are not designed for learning, they are commercial products.”
While using AI may feel innocent for mundane tasks like making a grocery list, it becomes dangerous when users turn to these tools for guidance on complex personal matters.
“These systems are designed to provide answers and please the user. This can become problematic when users seek advice on personal matters, such as relationships or mental health, because even when the system lacks expertise in these areas, it will still produce a response,” Dubé added.
Thus, even if the system provides an answer, is it the right one? The concern here goes beyond mere factual accuracy. It touches on whether a non-human entity should be weighing in on experiences that are fundamentally human. Grief, heartbreak, and moral dilemmas are not data points to be calculated; they are lived realities that require empathy, not just an algorithm. By seeking advice from a machine, we opt for an interaction where we aren't challenged by the difficulty of handling another person’s opinion, trading the soul of human connection for a script that imitates empathy without ever having felt a single emotion it describes. And because the responses appear supportive, users often interpret them as meaningful wisdom, even though they come from a product designed to satisfy them as quickly as possible.
This concern echoes the perspective of Renée Sieber, associate professor in the Department of Geography and one of the Top 100 Brilliant Women in AI Ethics for 2025. In an interview with The Tribune, Sieber began by clarifying a distinction often lost in current debates: While “AI” has become synonymous with generative AI, algorithmic systems have actually operated in the background for years—notably in Canada’s visa pre-screening processes.
By moving from invisible background code to human-sounding “trustworthy assistants,” these systems adopt a veneer of authority that makes them appear more reliable than they truly are. This shift mirrors our existing digital habits. Over the past two decades, digital platforms have transformed how we interact, moving from face-to-face discussions to messages and posts. This illustrates the concept of “alone together,” introduced by researcher Sherry Turkle, describing a world where we are constantly connected, yet experience growing social distance.
While a real friend provides the “friction” of disagreement or judgment, the machine listens without conditions. We are now moving from messaging friends to asking a machine to refine our lives, scripting our most intimate human duties before they even happen: ‘GPT, write me the script for my breakup,’ ‘GPT, write me the script for my interview,’ ‘GPT, what should I answer to this message?’ By scripting these situations in advance, we avoid the anxiety of a raw reaction, but we also alienate ourselves from the actual experience. It is this dependence on constant and unconditional support, something humans are not built to provide, that ultimately isolates us.
This desire to remove personal friction carries over into the professional world. Sieber is blunt about the harsh reality facing the modern workforce.
“AI is used to increase productivity. The hard truth is that it often means firing people and smoothing the rest,” Sieber said. “Productivity ends up meaning using AI instead of hiring people, when efficiency could sometimes mean hiring someone.”
In this framework, productivity becomes synonymous with eliminating the “human-in-the-loop.” The risk is that by treating human judgment as inefficiency, organizations may sacrifice the nuance and creativity that machines cannot reproduce.
This obsession with mechanical speed is clearly reflected in the results of The Tribune’s poll of 83 respondents, which show that response speed is the primary driver of AI adoption. While 49 per cent of respondents report using AI at least a few times a week, it is the demand for immediate results that stands out: Among AI users, 58 per cent cite “immediate response” as the main reason they prefer AI over a human.
These numbers reveal a profound shift in how we handle personal effort. Respondents often use AI as a proxy to avoid human interactions or the time required for self-improvement.
One wrote: “I'm not gonna ask my mother to rewrite my essay in a more polished way.”
By using AI as a shortcut for tasks that used to require human feedback or personal labour, we are gradually outsourcing our own development. While 99 per cent of respondents claim they could not have a relationship with AI, nearly nine per cent admitted they would feel more comfortable sharing a personal problem with an AI than with a real person. This suggests that for some, the lack of judgment and the instant availability of the machine already outweigh the value of human connection; they choose the ‘simpler’ path, even when it means sacrificing the depth that only the effort of human interaction and disagreement can provide.
Beyond the workplace, AI has become a high-stakes geopolitical matter, reflecting a nation’s digital sovereignty. This isn’t just about borders—it’s about controlling data, infrastructure, and algorithms. In this global race, Canada holds a prestigious position, thanks in large part to pioneers like Yoshua Bengio and Geoffrey Hinton. Bengio, founder of the Quebec Artificial Intelligence Institute (MILA) in Montreal, is a leading advocate for ethical AI, while Hinton, Professor Emeritus at the University of Toronto, has warned of the existential risks posed by the technology he helped create. Both received the Association for Computing Machinery A.M. Turing Award in 2018 for their breakthroughs in deep learning.
MILA has turned Montreal into a global deep learning hub, attracting billions in investment. Yet, as Sieber cautions: “Without control over our own infrastructure and data centers, our digital sovereignty is sacrificed […] we risk becoming mere 'digital tenants' of private foreign entities.”
In practice, renting hardware and storage from global tech giants means Canada produces the “brains” of AI without owning the “body.” This structural dependence highlights a tension: AI is presented as a tool for us to use, but we often don’t fully understand how it works or who ultimately benefits. This underscores that innovation alone isn’t enough—control and accountability must follow if AI is to serve the public good.
To understand the aggressive promotion of these systems, we must also look at what fuels them. Sieber is skeptical of the grand narratives surrounding Artificial Intelligence.
“I don’t believe in ‘super intelligence,’” she said. “Our data is the gold mine.”
In other words, the real value of these systems may not lie in their intelligence, but in the vast quantity of information they collect from users. Every prompt, correction, or interaction becomes part of a continuous feedback loop that helps improve the technology. What appears to be a simple tool for convenience also functions as a massive data-gathering infrastructure.
If AI systems depend so heavily on user input to improve, who ultimately benefits from that collective labour? While individuals gain speed and convenience, the long-term value of these interactions largely accrues to the companies developing the technology.
The boundary between machine and companionship is being systematically blurred through a campaign of aggressive promotion. Advertisements appearing in the McGill metro station—proclaiming that “the future is cyber-friends”—are not merely slogans; they are strategic attempts to manufacture a new vision of normalcy. This promotion packages algorithmic interaction as friendship to mask a colder reality: We are being conditioned to outsource our relationships, our creativity, and our critical thinking to corporate products we do not fully understand.
This manufactured normal suggests that the total integration of AI is both inevitable and benign. However, as Sieber points out, this narrative is designed to keep us from questioning the speed of the shift. Behind the ‘magic’ of the interface lies an opaque infrastructure that values speed over substance. It prioritizes “30-day sprints,” like the government’s lightning-fast 2023 consultations, sacrificing democratic engagement and human rights on the altar of corporate efficiency.
AI can imitate the sound of a friend and the structure of an argument, but it cannot assume the human responsibility of deciding what kind of world we actually want to build. In our rush, we must avoid a repeat of the 2010s “tablet craze,” where technology was adopted simply because it was new. The power to choose humanity over mere efficiency still belongs to us—provided we don't prompt it away.