Monday, September 9, 2024

Math and Science News from Quanta Magazine
Each week Quanta Magazine explains one of the most important ideas driving modern research. This week, computer science staff writer Ben Brubaker explains how the study of limited resources strengthens computer science.

How Space and Time Shape Algorithms

By BEN BRUBAKER

Do you ever strategize about ways to speed up your commute, repack a suitcase to free up some room, or tweak a recipe to get food on the table faster? That's you thinking like a computer scientist. Physicists and philosophers may be content to ponder the nature of space and time, but theoretical computer scientists treat space and time as scarce resources, and try to cook up ways to make the most of them. 

Constraints on space and time are central to how researchers think about algorithms, which are just step-by-step procedures for solving particular problems. The formal study of algorithms goes back to the pioneering work of Alan Turing and other mathematicians in the 1930s. Back then, they were content to prove that algorithms for solving specific problems existed; they weren't necessarily thinking about how easy those procedures would be to implement, in part because general-purpose computers didn't yet exist. 

Decades passed before computer scientists learned to take time seriously. In the 1960s, they began to realize that the speed of algorithms says something about the difficulty of the problems they solve. The distinction between easy problems (which have fast algorithms) and hard problems (which don't) is now at the heart of theoretical computer science, as I discussed in the May 20 Fundamentals newsletter. Space constraints can be just as important. Of course, algorithms don't occupy space in the real world, but they always need to store data, and that information takes up space in a computer's memory. It is, like time, a finite resource.

Algorithm designers often find themselves in situations in which they'd like to minimize runtime and memory usage but can't do both at once. Such "space-time trade-offs" also arise in everyday life. Suppose you've grown frustrated with the time you spend searching through your dresser in the morning for the shirt you know is in there somewhere. If you replace the dresser with a set of clearly labeled boxes, put one article of clothing in each box, and organize those boxes judiciously, you'll find what you're looking for almost instantly — but leave yourself very little space to sleep. At the opposite extreme, you could toss the boxes and store all your clothes in a compact pile in the corner. They'll take up very little space, but it'll take you forever to rifle through them every morning. On second thought, maybe the dresser "algorithm" is a good compromise.
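
If you prefer code to closets, here's a minimal Python sketch of the same trade-off (my own toy illustration, nothing from the research discussed below). A plain list is the compact pile: it uses little memory but must be searched item by item. A hash set is the wall of labeled boxes: it spends extra memory on internal structure so lookups are nearly instant.

    import time

    clothes = [f"shirt_{i}" for i in range(100_000)]

    pile = list(clothes)   # compact pile: low memory, slow to search
    boxes = set(clothes)   # labeled boxes: more memory, fast to search

    target = "shirt_99999"

    start = time.perf_counter()
    found = target in pile    # linear scan: checks items one by one
    print("pile search: ", time.perf_counter() - start)

    start = time.perf_counter()
    found = target in boxes   # hash lookup: jumps straight to the box
    print("boxes search:", time.perf_counter() - start)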

What's New and Noteworthy
Computer scientists are hardly authorities on clothing storage (at least most of them aren't). But they've been grappling with questions about data storage for over half a century. In 1953, an IBM researcher invented a data storage method, called a hash table, that achieved an attractive balance between storage space and the speed of data retrieval. Last year, researchers finally proved that a specific hash table design yields the best possible space-time trade-off. Steve Nadis reported on the breakthrough for Quanta.
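
To get a feel for how hash tables pull this off, here's a bare-bones Python version built with a textbook technique called separate chaining (the generic idea, not the specific design whose optimality was proved). A hash function assigns each key to one of a handful of buckets, and a lookup only searches that single bucket.

    class HashTable:
        def __init__(self, num_buckets=8):
            self.buckets = [[] for _ in range(num_buckets)]

        def _bucket(self, key):
            # The built-in hash() maps any key to a bucket index.
            return self.buckets[hash(key) % len(self.buckets)]

        def put(self, key, value):
            bucket = self._bucket(key)
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)  # overwrite an existing key
                    return
            bucket.append((key, value))

        def get(self, key):
            for k, v in self._bucket(key):
                if k == key:
                    return v
            raise KeyError(key)

    contacts = HashTable()
    contacts.put("Mario", "555-0101")
    print(contacts.get("Mario"))  # fast: only one bucket is searched

The space-time trade-off lives in the num_buckets parameter: more buckets mean more memory, but shorter chains and faster lookups.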

Runtime and memory usage are important considerations for all algorithms, but questions about space and time can be even more literal. Many of the most famous problems that computer scientists study are themselves about spatial relationships: A classic example is the "nearest neighbor" problem, in which the goal is to find the closest point to a given point in a data set. There are many different ways to measure distance, and algorithms designed to solve one version of the problem don't always work for others. In 2018, Kevin Hartnett reported on a new general-purpose algorithm for the nearest-neighbor problem that works for any distance measure.
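
Here's what the brute-force baseline looks like in Python (the naive approach, not the general-purpose algorithm from that story): it accepts any distance function you hand it, but it has to check every point. Note how changing the distance measure changes the answer.

    import math

    def nearest_neighbor(points, query, distance):
        # Check every point: simple and fully general, but slow (O(n)).
        return min(points, key=lambda p: distance(p, query))

    def euclidean(p, q):
        return math.dist(p, q)  # straight-line distance

    def manhattan(p, q):
        return sum(abs(a - b) for a, b in zip(p, q))  # grid distance

    points = [(2, 2), (0, 3), (5, 5)]
    print(nearest_neighbor(points, (0, 0), euclidean))  # (2, 2)
    print(nearest_neighbor(points, (0, 0), manhattan))  # (0, 3)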

Computer scientists also study problems in which the passage of time plays a critical role. Algorithms that need to adjust on the fly, called online algorithms, have an intrinsic disadvantage compared to ones that have all relevant information in advance. Fire departments, for instance, need to make decisions about which truck to dispatch each time they get a call, without knowing where the next call will come from. Researchers have long sought to quantify the cost of not being able to predict the future in such problems. Thirty years ago, some thought they had the answer for a particular version of the problem, but they weren't able to prove it. Last year, Madison Goldberg reported on a new proof that this 30-year-old conjecture is false: There are scenarios where online algorithms fare even worse than computer scientists suspected.
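
Here's a toy Python sketch of that online setup (my own illustration of the problem's flavor, not the algorithms analyzed in the proof). A greedy rule sends the nearest truck to each call as it arrives, and the total distance traveled measures how well it did. Greedy is the obvious strategy, but it's known to be far from optimal in general, which is exactly why quantifying the cost of online decisions is hard.

    import math

    def dispatch(trucks, calls):
        total = 0.0
        for call in calls:  # calls arrive one at a time
            # Greedy choice: send whichever truck is currently closest.
            nearest = min(range(len(trucks)),
                          key=lambda i: math.dist(trucks[i], call))
            total += math.dist(trucks[nearest], call)
            trucks[nearest] = call  # that truck now waits at the fire
        return total

    trucks = [(0.0, 0.0), (10.0, 0.0)]
    print(dispatch(trucks, [(2.0, 0.0), (3.0, 0.0), (9.0, 0.0)]))  # 4.0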

In computer science — as in real life — space and time often seem to be in short supply. Working with limited resources helps researchers design better algorithms and understand the nature of computation itself.

AROUND THE WEB

Space-time trade-offs aren't the only features of algorithm design that also arise in daily life. The book Algorithms to Live By, by Brian Christian and Tom Griffiths, is a lively and wide-ranging tour of how computer science can shed light on real-world decisions far less contrived than my dresser problem. An excerpt is available online. 

Harvard University's introductory computer science course CS50 offers a nice explanation of hash tables using the example of a contact list full of Nintendo characters. The lectures are free to watch on YouTube.

The computer scientist Scott Aaronson wrote an intriguing paper, and an accompanying blog post, exploring how the landscape of computational difficulty would change in a world where time travel is possible. (Aaronson is a member of Quanta's advisory board.)


Copyright © 2024 Quanta Magazine, an editorially independent division of Simons Foundation
