Monday, November 4, 2024

The Many Meanings of Artificial Intelligence

Math and Science News from Quanta Magazine
Each week Quanta Magazine explains one of the most important ideas driving modern research. This week, computer science staff writer Ben Brubaker explores what modern AI systems have in common, and how they differ.


The Many Meanings of Artificial Intelligence

By BEN BRUBAKER

What is artificial intelligence? Depending on whom you ask, the term might refer to sci-fi entities, existing cutting-edge technology or even software we use every day. The anthropologist Richard McElreath summed up this confusion succinctly with a table containing possible examples of AI ranging from killer robots to dating app algorithms and beyond.

The roots of this ambiguity stretch back nearly 70 years. The computer scientist John McCarthy coined the term "artificial intelligence" in 1955 in a proposal for a workshop on "making a machine behave in ways that would be called intelligent if a human were so behaving." This definition is inherently subjective — it puts AI in the eye of the beholder. But the name stuck. Over time, it came to refer not just to McCarthy's dream of machines that behave like humans, but to all research in service of that goal.

That research effort has led to many breakthroughs that we've covered in Quanta: the algorithm that mastered the complex board game Go, large language models such as ChatGPT, and the system that recently earned its creators a Nobel Prize for revolutionizing the study of protein folding, among others. Whatever you think of the term artificial intelligence, these advances are at least consistent with the spirit of McCarthy's original definition. 

Some techniques invented by AI researchers have been repurposed for more mundane applications, like spam email filters and movie recommendations. More controversial applications include algorithms that can supposedly predict health care outcomes and criminal recidivism. Should all of this really count as AI? 


A Closer Look

To make sense of this, let's start with what these examples have in common: They're all based on an approach called machine learning. The developers of machine learning algorithms don't start with fixed rules for how those algorithms should behave. Instead, they begin by specifying a goal and a method that an algorithm can use to learn from data. Then they supply the algorithm with a large set of "training" data and let it adjust its internal mechanisms to improve its performance. John Pavlus walked readers through this process in a recent Quanta explainer. 
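
That recipe is simple enough to fit in a few lines of code. Here's a minimal sketch in Python (all numbers invented for illustration): the goal is to minimize prediction error, the learning method is gradient descent, and the algorithm's entire internal mechanism is a single adjustable weight.

# Training data: inputs paired with the outputs we want predicted.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

# The model's internal mechanism: one adjustable weight.
weight = 0.0
learning_rate = 0.01

for step in range(1000):
    for x, target in data:
        error = weight * x - target          # how far off is the prediction?
        weight -= learning_rate * error * x  # nudge the weight to do better

print(f"learned weight: {weight:.2f}")  # lands near 2, the pattern in the data

Nobody wrote a rule saying "multiply by 2"; the program arrived at it by repeatedly adjusting itself against the training data.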

Machine learning comes in different varieties. Large language models and other state-of-the-art systems are built around mathematical structures called artificial neural networks, which are loosely inspired by the human brain. But other applications of machine learning use simpler techniques. That's often because fancier methods don't improve their performance: Machine learning algorithms are only as good as their training data, and that data might not capture all the aspects of the problem that a system is designed to solve. In a 2023 Q&A with reporter Sheon Han, the computer scientist Arvind Narayanan made a compelling case against using AI as a blanket term. It can mislead people into thinking that rapid progress in neural network technology will help with other applications where inherently messy data is the real limitation. 
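
A toy example makes Narayanan's point concrete. In the sketch below (again with invented numbers), a simple model is trained on data covering only a narrow slice of a problem. Its predictions look fine on that slice and fail badly beyond it, and swapping in a fancier learning method wouldn't help, because the data never captured the full problem.

import numpy as np

true_pattern = lambda x: x ** 2        # the world's actual behavior
x_train = np.linspace(0.0, 1.0, 20)    # training data covers a narrow slice
y_train = true_pattern(x_train)

# Fit a straight line -- a perfectly reasonable model for this slice.
slope, intercept = np.polyfit(x_train, y_train, 1)

for x in [0.5, 5.0, 10.0]:
    print(f"x = {x}: predicted {slope * x + intercept:.1f}, actual {true_pattern(x):.1f}")
# Near the training range the fit looks fine; far outside it, the
# predictions are wildly off. The limitation is the data, not the method.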

The neural network–based systems responsible for many recent breakthroughs also differ from each other in important ways. Generating text and predicting the behavior of proteins are very different skills that require different data sets and training methods. But when it comes to the details of neural network design, researchers have increasingly converged on similar techniques across applications. In 2022, Stephen Ornes wrote about how computer vision researchers adopted a type of neural network called a transformer that was originally designed for machine translation. And in March, I wrote about how researchers are using the tools of theoretical computer science to study the fundamental limitations of transformers, and how those limitations might affect the behavior of large language models.
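
For a sense of what that shared machinery is, here's a stripped-down sketch of self-attention, the transformer's central operation, written with numpy. Real transformers add learned projections, multiple attention heads and many stacked layers; this bare version just shows the core idea, which applies equally to words in a sentence or patches of an image: each element is re-represented as a similarity-weighted blend of all the others.

import numpy as np

def self_attention(X):
    # X holds one vector per sequence element (a word, an image patch...).
    scores = X @ X.T / np.sqrt(X.shape[1])         # pairwise similarity
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax along each row
    return weights @ X                             # blend every element into each

sequence = np.random.randn(4, 8)       # 4 elements, 8 dimensions apiece
print(self_attention(sequence).shape)  # (4, 8): same shape, new representations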

It's hard to have productive conversations about AI when nobody can agree on what the term even means. Being clearer with language won't resolve the many thorny questions about how AI systems work or their impact on the wider world. But it's a good place to start.

AROUND THE WEB

In a 2022 Substack post, Narayanan and his colleague Sayash Kapoor described four examples of striking failures by the predictive AI algorithms often used by governments, hospitals and other organizations. It's a sobering reminder that misuse of simple algorithmic tools can have serious consequences.

In 1950, the pioneering computer scientist Alan Turing imagined a hypothetical contest where a judge would try to distinguish between humans and future AI systems through conversation. In an essay recently published in Science magazine, the computer scientist Melanie Mitchell reflected on the limitations of the Turing test as a measure of intelligence.

Researchers and writers have come up with many metaphors to make sense of the behavior of large language models, comparing them to everything from parrots to genies to monsters dreamed up by the science fiction writer H.P. Lovecraft. In a 2023 blog post, the computer scientist Boaz Barak explained why he doesn't like any of them. (Barak is a member of Quanta's advisory board.)

Simons Foundation

160 5th Avenue, 7th Floor
New York, NY 10010

Copyright © 2024 Quanta Magazine, an editorially independent division of Simons Foundation
