Science & Technology

Hi, Robot: How smart are our gadgets?


Over the last 50 years, gadgets have become faster, smaller, cheaper, and more accessible, making them indispensable in day-to-day life. As technology’s role grows, so does the demand for more intelligent design: shopping sites now predict customers’ preferences, cell phones can pay for coffee automatically, and even thermostats can learn when their owners will want the house a few degrees cooler.

But how does technology ‘learn’? And how intelligent are the technologies we have? 

Intelligence is difficult enough to define in humans, let alone machines, but mathematician Alan Turing proposed a working definition of artificial intelligence (AI) that still holds up today: AI, he said, is the ability of a computer to trick a human into thinking that it is another human.

“[Turing was] trying to avoid all of these philosophical questions about ‘What does it mean to be self-aware?’ and ‘What does it mean to be creative?’” said Jonathan Tremblay, a PhD candidate at McGill whose research explores AI in computer games. “Instead, AI is about building something that makes you believe it’s intelligent.”

When asked to define AI, professor Gregory Dudek, director of the McGill School of Computer Science, gave a similar answer.

“What is AI? It’s hard to say, but I think of it as the replication of skills that humans have […] in machines,” Dudek said. “[AI research is] trying to replicate our ability to be creative, to solve problems, to think about things, to innovate; and so to fully define AI, we have to define intelligence. These are really slippery concepts, but they’re related to problem solving, adaptation, novelty, and creativity.”

A field of computer science called machine learning focuses on the adaptation aspect of artificial intelligence. 

“You want to figure out how an artificial agent can learn from interacting with its environment, a little bit like how animals learn by interacting with their environment,” said professor Doina Precup, a computer scientist at McGill’s Reasoning and Learning Lab. “The idea is that if you want an animal to do a certain thing, you give it positive rewards if it does it correctly and negative rewards if it doesn’t. We do very similar things with computer programs.”
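This trial-and-error approach is known as reinforcement learning. As a rough illustration, and not code from Precup’s lab, the short Python sketch below has an agent try two actions, collect positive or negative rewards, and gradually favour whichever action pays off more often:

```python
import random

# Toy "reward-driven learning" sketch: a two-action bandit problem.
# The agent keeps a running estimate of how rewarding each action is
# and mostly picks the action it currently believes to be best.

REWARD_PROB = {0: 0.3, 1: 0.7}   # hypothetical environment: action 1 pays off more often
values = {0: 0.0, 1: 0.0}        # the agent's estimate of each action's value
epsilon, step_size = 0.1, 0.1    # exploration rate and learning rate

for _ in range(2000):
    # Occasionally explore at random; otherwise exploit the current best guess.
    if random.random() < epsilon:
        action = random.choice([0, 1])
    else:
        action = max(values, key=values.get)

    # The environment hands back a positive or negative reward.
    reward = 1.0 if random.random() < REWARD_PROB[action] else -1.0

    # Nudge the estimate for that action toward the observed reward.
    values[action] += step_size * (reward - values[action])

print(values)  # the estimate for action 1 ends up clearly higher
```

After a couple of thousand trials the program reliably prefers the better-rewarded action, which is all “learning” means here: adjusting behaviour in response to rewards.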

It’s easy to start sliding down the slippery semantic slope of anthropomorphizing when talking about machine learning, but AI research is far from building sentient robots. While great strides have been made in machine learning, most machine learning algorithms are limited to specific tasks. A program that learns to play chess, for example, won’t be able to transfer that knowledge to checkers. DeepMind, a company now owned by Google, built a program that mastered a number of classic video games, but the program couldn’t apply what it learned from one game to another.

Recognizing abstract concepts comes naturally to humans, but computers have a much harder time with it, which makes designing programs that can apply what they know from one problem to another a difficult task. 

This gap in reasoning has major implications for the roles that machines can fill. The real world, after all, is full of abstract concepts and general problems. A device’s ability to operate in the real world is also dependent on its ability to interpret instructions from people, which influences how well it can be integrated into everyday life.

The difficulty in producing a machine that can perform a broad range of tasks means that the world is populated by many different devices, each performing one task and using AI principles to “learn” how to interact with people in that specific way.

These applications have immense potential to improve people’s quality of life, as is already evident in the field of medical diagnostics. Computers can analyze huge amounts of data, which allows them to examine the results of diagnostic tests such as MRIs or CAT scans for signs of disease.

Precup’s research explores methods of incorporating AI into medical sensors and imaging systems.

“A lot of the stuff I work on is at the interface with recording devices,” Precup explained. “So for example, you have a patient that’s hooked up to measurements of respiratory frequency and cardiac signals. Then you may want to look at that data and have a learning algorithm that predicts whether the patient will get in trouble or not so that an alert can be put out to the doctor. I’m also interested in medical imaging, so looking at images of brain volumes in patients who have multiple sclerosis. They use artificial intelligence and machine learning in order to pinpoint the areas of the brain where the problems are to measure how bad the problem is.”
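To make the idea concrete, here is a deliberately simplified sketch of the kind of alert Precup describes. It is not her lab’s pipeline: the vital signs are simulated and the model is an off-the-shelf logistic regression, but it shows how a program trained on past patients can flag new readings that look risky.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch only: learn to flag patients "in trouble" from two
# hypothetical features (respiratory rate, heart rate), then raise an
# alert when the predicted risk crosses a threshold.

rng = np.random.default_rng(0)

# Simulated training data: one row of [respiratory_rate, heart_rate] per
# patient, with label 1 meaning the patient later deteriorated.
stable = rng.normal([16, 75], [2, 8], size=(200, 2))
at_risk = rng.normal([28, 110], [4, 12], size=(200, 2))
X = np.vstack([stable, at_risk])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X, y)

# New readings streaming in from the bedside monitors.
new_patients = np.array([[15.0, 72.0], [30.0, 118.0]])
risk = model.predict_proba(new_patients)[:, 1]

for vitals, p in zip(new_patients, risk):
    status = "ALERT" if p > 0.8 else "ok"
    print(f"{status}: vitals {vitals} -> predicted risk {p:.2f}")
```

A real system would use far richer signals and careful validation, but the structure is the same: learn from labelled past cases, then score new ones as they arrive.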

Despite this progress, the application of AI to health care is by no means intended to replace doctors any time soon. 

“[These] programs complement the work that doctors are doing,” Precup said.

A more visible domain where AI has been applied, and for many students a more familiar one, is the world of video games. Although people usually think of computers as taking on an adversarial role in games, Tremblay’s research explores how AI can enrich players’ experiences within a game. Essentially, he is trying to design companion characters in video games that act as if real people were controlling them.

“What you’re trying to achieve is this autonomous AI that is playing with the player, and [the player] believes that they’re interacting with another human,” Tremblay said. “So this becomes a harder domain of trying to understand where things are, and what [the character] should be doing, and what the player wants to do.”

Even devices not traditionally considered to be smart are being affected by developments in machine learning. Google’s self-driving cars would have been unthinkable 20 years ago. Thermostats, watches, and smoke detectors can now connect to the internet, creating an “internet of things” that enables communication between devices—just like the web enables communication between people. 

“The internet of things is all about having things that adapt,” Dudek said. “Having a thermostat that is on the internet, but doesn’t learn, doesn’t adapt, is kind of pointless.”
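What “a thermostat that learns” might mean in practice can be sketched in a few lines. The example below is purely illustrative, not any particular product’s algorithm: it keeps a running estimate of the temperature the occupant chooses at each hour of the day and slowly drifts toward it.

```python
from collections import defaultdict

# Illustrative "adaptive thermostat": it remembers a running average of the
# temperature the occupant manually sets at each hour of the day and uses
# that average as the default setpoint for that hour.

class LearningThermostat:
    def __init__(self, default=20.0, step=0.2):
        self.preferences = defaultdict(lambda: default)  # hour -> learned setpoint (Celsius)
        self.step = step  # how quickly new adjustments override old habits

    def record_adjustment(self, hour, chosen_temp):
        """The occupant changed the temperature; nudge the learned setpoint toward it."""
        current = self.preferences[hour]
        self.preferences[hour] = current + self.step * (chosen_temp - current)

    def setpoint(self, hour):
        """Temperature to target when nobody touches the dial."""
        return self.preferences[hour]

thermostat = LearningThermostat()
for _ in range(30):                                           # a month of evenings
    thermostat.record_adjustment(hour=22, chosen_temp=18.0)   # the owner likes it cooler at night
print(round(thermostat.setpoint(22), 1))  # has drifted to about 18.0
print(round(thermostat.setpoint(14), 1))  # untouched hours keep the 20.0 default
```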

“Full artificial intelligence” is still a long way off, but questions of trust and adoption already affect how AI is incorporated into our lives in the short term.

“Humans’ willingness to trust automated systems and to use them and act with them is perhaps the linchpin that is the most important determinant in the next 10 years of how much robotics we see in the world,” Dudek said.

Even with challenges of engineering and public acceptance, the trend toward increasingly connected and adaptable technology doesn’t appear to be slowing down any time soon.

“Predicting the future is canonically hard, but I think it’s fair to say that I can imagine a world where most of the things are smart to some extent,” Dudek said. “And so all of a sudden the whole world will become responsive to what we want and how we want to act. Now, how does that play out as a society? That I can’t say, but I think it will be a very exciting time.”
