Science & Technology

Where is artificial intelligence headed?

In Stanley Kubrick’s classic 1968 film 2001: A Space Odyssey, the protagonist, astronaut Dr. David Bowman, has a standoff with his spacecraft’s artificial intelligence (AI) system, HAL. After Bowman and a fellow astronaut discuss plans to deactivate HAL, Bowman attempts to re-enter the spacecraft following an external rescue mission. HAL, however, refuses to let him back inside, calmly explaining that doing so would jeopardize the mission, since the astronauts intend to shut the system down.

This act of defiance by a computer makes it only fitting that a screening of this particular movie would follow a presentation on AI given by Dr. Jeremy Cooperstock, an associate professor in the Department of Electrical and Computer Engineering. Cooperstock’s presentation, titled “Is humanity smart enough for AI?”, took place last Friday at the Redpath Museum as part of the “Freaky Friday” lecture series. He addressed contemporary concerns about the nature of AI, concerns that, according to him, are not exactly irrational.

“The bottom line, I think, is that we need to be afraid,” Cooperstock said. “We need to consider the consequences [of AI].”

Cooperstock quoted Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), to underscore his point: “By far, the greatest danger of AI is that people conclude too early that they understand it.”

Indeed, one would think that those in charge of designing and developing the technology in question would have to, at the very least, understand how it operates. Incredibly, this is not always the case. Cooperstock recalled a story told by Dr. Geoffrey Hinton, a professor in the Department of Computer Science at the University of Toronto, about the U.S. Army, which had hired a group of researchers to develop a detection system capable of identifying camouflaged enemy tanks.

The researchers took hundreds of photographs of camouflaged tanks hidden in the trees at the forest line, as well as photographs of the forest line in its natural state. This data was used to train a ‘neural network’ for the recognition system. However, the researchers had failed to account for the weather on the days the photographs were taken: The pictures of the camouflaged tanks were shot on sunny days, whereas the pictures of the empty forest line were taken on overcast days. This seemingly insignificant variation meant the system ended up a better gauge of the weather than of the presence of camouflaged enemy tanks.
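To make that failure mode concrete, here is a minimal, hypothetical sketch, not taken from the lecture, in which a simple classifier stands in for the neural network. The synthetic “photos,” the brightness values, and the pixel positions are all illustrative assumptions; the point is only that when a nuisance feature (overall brightness, i.e. the weather) is perfectly correlated with the labels, the model learns the nuisance feature and collapses as soon as that correlation breaks.

```python
# Hypothetical illustration of the "tank detector" failure: training labels are
# confounded with overall image brightness (the weather), so the model learns
# the weather rather than the tanks. All numbers and shapes here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, tank, sunny):
    """Simulate n tiny 8x8 'photos' as flat 64-pixel vectors.

    tank  -> adds a small localized bright patch (the intended signal)
    sunny -> raises overall brightness (the confound)
    """
    base = 0.6 if sunny else 0.3                  # global brightness from weather
    imgs = rng.normal(base, 0.05, size=(n, 64))
    if tank:
        imgs[:, 27:29] += 0.15                    # faint "tank" pattern on two pixels
    return imgs

# Confounded training set: every tank photo is sunny, every empty photo is overcast.
X_train = np.vstack([make_images(200, tank=True,  sunny=True),
                     make_images(200, tank=False, sunny=False)])
y_train = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Deployment-like test set: the weather no longer tracks the label.
X_test = np.vstack([make_images(200, tank=True,  sunny=False),
                    make_images(200, tank=False, sunny=True)])
y_test = np.array([1] * 200 + [0] * 200)

print("train accuracy:", clf.score(X_train, y_train))  # near 1.0
print("test accuracy :", clf.score(X_test, y_test))    # near 0.0: it learned the weather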

Though this error proved to be relatively benign, other, more disastrous cases of AI gone wrong have been documented. In 1985, China Airlines Flight 006 nearly crashed because the autopilot system blocked the pilots from regaining control of the aircraft until well after the plane had gone into a steep dive.

Worse still, claimed Cooperstock, is the idea of automated military weapons, which have been ‘perfected’ to the point of not requiring physical human operation. In the past, computer hackers have been able to bring down military drones because their software is accessible, easy to replicate, and subject to no human checkpoint.

Ultimately, Cooperstock noted that society’s greatest defence against what some might jokingly refer to as the robot uprising is spreading the word about the drawbacks of AI. 

“We need to ask very careful questions and have this discussion as a society about the potential implications of these tools that we’re building,” Cooperstock explained. “Most importantly, […] we have to be cautious and aware of the potential for things running amok.”
