Monday, May 20, 2024

Why Computer Scientists Study Hard Problems

Math and Science News from Quanta Magazine
Each week Quanta Magazine brings you an update on one of the most important ideas driving modern research. This week, computer science staff writer Ben Brubaker explores how and why scientists study the nature of computational difficulty.

 

Why Computer Scientists Study Hard Problems

By BEN BRUBAKER

Why are some problems harder than others? It sounds like the sort of question that might occur to a philosophy major stumped by the second half of a math test, but it's actually one of the most profound questions in computer science, powering a field called computational complexity theory. Over the past 50 years, complexity theory research has yielded insights into the fundamental nature of computation, as well as practical applications in cryptography, parallel computing and many other fields.

Complexity theorists study problems that can in principle be solved using step-by-step procedures, or algorithms. These are called computational problems, but they don't necessarily have anything to do with computers. Alphabetizing your bookshelf, for instance, is a computational problem, and there are many algorithms you can use to solve it. Here's one: Put all the books on the shelf in a random order, then sweep them off and try again until you happen to get it exactly right. It doesn't take a doctorate in computer science to know that practically any other strategy will get the job done faster.
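To make the contrast concrete, here is a minimal Python sketch of both strategies — the "sweep and retry" approach and an ordinary sort. The function names are our own illustration, not code from any particular library:

    import random

    def shuffle_until_sorted(books):
        # The "sweep everything off and try again" strategy: keep shuffling
        # at random until the shelf happens to land in alphabetical order.
        books = list(books)
        while books != sorted(books):
            random.shuffle(books)
        return books

    def sort_directly(books):
        # Practically any other strategy is faster; here, Python's built-in
        # sort, which needs only about n*log(n) comparisons for n books.
        return sorted(books)

    shelf = ["Gödel, Escher, Bach", "A Brief History of Time", "The Selfish Gene"]
    print(sort_directly(shelf))  # fast for any shelf size
    # shuffle_until_sorted(shelf) works for 3 books, but on average it needs
    # about n! shuffles, so even 20 books would take longer than a lifetime.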

When complexity theorists talk about the difference between easy and hard problems, they're really drawing a distinction between fast and slow algorithms. Like alphabetizing a shelf, most computational problems can be solved using many different algorithms. If at least one of those algorithms is fast, researchers call the problem easy. If nobody has discovered any fast algorithms for solving a problem, that suggests the problem is hard.
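"Fast" and "slow" here refer to how the number of steps grows as the input gets bigger. A back-of-the-envelope comparison (our illustration, using the bookshelf example above):

    import math

    # Rough step counts for a shelf of n books under the two strategies,
    # purely to show how quickly the growth rates diverge.
    for n in (10, 20, 30):
        fast = n * math.log2(n)       # e.g. a good sorting algorithm
        slow = math.factorial(n)      # expected shuffles for the random strategy
        print(f"n={n}: ~{fast:.0f} steps vs ~{slow:.1e} expected shuffles")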

Many important computational problems fall into this latter category. For decades, researchers have sought fast algorithms for such problems and have come up empty-handed, but they haven't been able to categorically rule out the possibility of undiscovered fast algorithms. So are these problems really hard or not? 

That's the essence of the P versus NP problem, a fascinating question cursed with an exceptionally dull name. It's really a question about two broad classes of computational problems: "P" refers to the class of all easy problems, while "NP" is the class of problems that may or may not be easy to solve, but do have fast algorithms for checking whether a candidate solution is correct. The million-dollar question is whether these two classes are actually equivalent — that is, whether there's an easy way to solve every problem whose solution is easily checkable.
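A standard example makes the asymmetry vivid. In the subset-sum problem, you're given a list of numbers and asked whether some of them add up to a target. Checking a proposed answer is quick; the obvious way to find one is to try every subset. Here is an illustrative Python sketch (ours, not from the article):

    from itertools import combinations

    def verify(numbers, target, candidate):
        # Checking a proposed solution is fast: confirm each chosen number
        # really comes from the list and that the total hits the target.
        remaining = list(numbers)
        for x in candidate:
            if x not in remaining:
                return False
            remaining.remove(x)
        return sum(candidate) == target

    def search(numbers, target):
        # Finding a solution the obvious way means trying every subset,
        # and the number of subsets doubles with each extra number.
        for size in range(len(numbers) + 1):
            for subset in combinations(numbers, size):
                if sum(subset) == target:
                    return list(subset)
        return None

    nums = [3, 34, 4, 12, 5, 2]
    print(search(nums, 9))          # [4, 5] -- exponential in the worst case
    print(verify(nums, 9, [4, 5]))  # True, checked almost instantly

Subset sum sits in NP: a correct answer is easy to check, but nobody has found a fast way to do the search step, and nobody has ruled one out. P versus NP asks whether that gap is real.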

Put this way, the question sounds simple. But in fact, the landscape of computational hardness is far richer than complexity theorists imagined when they first posed the P versus NP problem.
 

What's New and Noteworthy

Complexity theorists have now struggled with the P versus NP question for half a century, and the answer remains elusive — it seems that this problem about the hardness of computational problems is itself hard to solve. 

That self-referential character hasn't escaped researchers. Last summer I took a deep dive into the mind-bending world of meta-complexity: the study of the hardness of computational problems that are themselves about the hardness of computational problems. "If you're confused about it, don't worry — I'm also confused," the complexity theorist Igor Oliveira reassured me during an interview. In recent years, researchers have discovered that seemingly arcane meta-complexity problems are intimately tied to the question of whether any kind of encryption can be truly secure.

There's more to complexity theory than the distinction between easy and hard problems — researchers also explore many different flavors of computational hardness. Sometimes, that means studying subtler differences in difficulty between NP problems. For a handful of especially frustrating problems, researchers haven't discovered any algorithms better than simply checking every possible solution. Last month, I wrote about how two teams of researchers discovered a slightly faster algorithm for a problem long thought to fall into this category. The new algorithm works by adapting techniques developed to attack encryption protocols — a nice illustration that ideas can flow in both directions between complexity theory and cryptography.
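The article doesn't spell out the new algorithm, but a classic earlier example of the same cross-pollination is the "meet-in-the-middle" idea, which shows up both in attacks on block ciphers and in faster-than-brute-force algorithms for subset sum: split the list in half, enumerate each half's subset sums, and look for two halves that combine to the target, for roughly 2^(n/2) steps instead of 2^n. A rough Python sketch of that older trick, for illustration only — it is not the result described above:

    from bisect import bisect_left
    from itertools import combinations

    def subset_sum_meet_in_the_middle(numbers, target):
        # Split the list in half, list every subset sum of each half, then
        # look for a left sum and a right sum that add up to the target.
        half = len(numbers) // 2
        left, right = numbers[:half], numbers[half:]

        def all_subset_sums(items):
            return [sum(c) for size in range(len(items) + 1)
                           for c in combinations(items, size)]

        right_sums = sorted(all_subset_sums(right))
        for s in all_subset_sums(left):
            needed = target - s
            i = bisect_left(right_sums, needed)
            if i < len(right_sums) and right_sums[i] == needed:
                return True
        return False

    print(subset_sum_meet_in_the_middle([3, 34, 4, 12, 5, 2], 9))  # True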

Other researchers explore problems that are much, much harder than those in NP — almost to the point of absurdity. In these cases, there's no obvious way to check whether a proposed solution is correct, or even to enumerate all the possibilities. Recently, researchers proved that an innocent-sounding problem is actually among the hardest (except for ones that are literally unsolvable). Such an outlandish problem might seem far removed from any practical applications, but it shows up in the study of concurrent computing, in which programs divide tasks into many small parts and work on them simultaneously.

These are just a few of the recent results in complexity theory — researchers in the field have also explored the uses of randomness, the nature of mathematical proof and more. "P versus NP is just like our cornerstone somewhere in the middle of a temple," the mathematician and complexity theorist Alexander Razborov told me last year. "The temple is huge, and it produces lots of cool stuff."

AROUND THE WEB
MIT Technology Review published a lively account of the history of P versus NP on the occasion of the problem's 50th anniversary in 2021.
Communications of the Association for Computing Machinery published an article by the complexity theorist Lance Fortnow exploring the strategies that computer scientists use to attack hard problems in practice.
The computer scientist Scott Aaronson wrote a fascinating paper about how complexity theory can inform the study of long-standing questions in philosophy and other branches of science. [Editor's note: Aaronson is a member of Quanta Magazine's advisory board.]

Copyright © 2024 Quanta Magazine, an editorially independent division of Simons Foundation
