Monday, August 5, 2024


Math and Science News from Quanta Magazine
Each week Quanta Magazine explains one of the most important ideas driving modern research. This week, computer science staff writer Ben Brubaker describes how a bit of redundancy can help protect a message from errors.


The Math That Keeps Messages Error-Free

By BEN BRUBAKER

Long-distance communication is rife with pitfalls. That's the basic lesson of the children's game of telephone, in which a message passed from kid to kid changes slightly on each leg of its journey and ends up radically transformed. It's also a challenge for real phone calls: The digital signals that encode our words must compete with all sorts of noise en route to their destination. But actual telephone conversations rarely devolve into games of telephone. Why is that?
 
The key to this puzzle is a mathematical technique called error correction, which was first proposed in the 1940s by the pioneering computer scientist Claude Shannon. Shannon's insight was simple: To guarantee that a message is faithfully transmitted, start by rewriting it in a form that makes subsequent corruption easy to spot and reverse.
 
Researchers have studied many error-correction methods, called error-correcting codes. To understand the basic idea, imagine that the message you want to send is a sequence of bits, or 0s and 1s. As this message passes through a noisy environment, a few of those bits might flip from 0 to 1 or vice versa. The simplest defense against these errors, called the repetition code, replaces every 0 in the original message with a 000 and every 1 with a 111. The recipient of a message encoded this way can spot errors by scanning for sections without three identical bits in a row. If errors are rare enough, then a sequence like 100 is more likely to have started as a 000 than as a 111, and the recipient can probably recover the original message by taking the majority vote within each group of three bits.
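
To make that concrete, here is a minimal Python sketch of the repetition code. The function names and the sample message are our own illustrative choices.

```python
# A toy implementation of the 3-fold repetition code described above.
def encode(bits):
    """Replace every 0 with 000 and every 1 with 111."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Recover each bit by majority vote within each group of three."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)            # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] = 1                       # noise turns the 000 block into 010
print(decode(sent) == message)    # True: the majority vote undoes the flip
```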
 
The repetition code is simple, but not very practical. One problem is that it's very inefficient: Encoding a message makes it three times longer. A second problem is that it quickly becomes unreliable as errors grow more common. The only way to make the code more tolerant of errors is to increase the number of times you repeat each bit, but that makes it even less efficient.
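
A quick calculation makes the trade-off explicit. If each transmitted bit flips independently with probability p, an n-fold repetition code loses a bit whenever a majority of its n copies flip, so reliability improves only as n grows and the encoded message balloons to n times its original length. A sketch, using parameters of our own choosing:

```python
from math import comb

def failure_probability(p, n):
    """Chance that a majority of n independent copies of a bit flip,
    defeating the majority vote (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (3, 5, 9):
    print(n, failure_probability(0.1, n))
# n=3: ~0.028, n=5: ~0.0086, n=9: ~0.0009 -- each gain in
# reliability is paid for with a longer, less efficient encoding.
```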
 
Fortunately, computer scientists have discovered codes that fare much better. Among the most famous ones are the Reed-Solomon codes, which are based on the properties of mathematical curves called polynomials. Variants of these codes are used everywhere from CDs to deep-space communication. Jordana Cepelewicz described how they work in a 2022 explainer for Quanta.
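
The core trick is easy to state: treat the k message symbols as the coefficients of a polynomial, transmit the polynomial's values at n > k points, and lean on the fact that a degree-(k − 1) polynomial is pinned down by any k of its values. Below is a bare-bones Python sketch over a small prime field; it recovers from erasures (symbols that go missing) rather than full-blown errors, and the prime, the parameters and the function names are our own illustrative choices, not any production implementation.

```python
P = 257  # all arithmetic is modulo this small prime

def rs_encode(msg, n):
    """Evaluate the message polynomial msg[0] + msg[1]*x + ... at x = 1..n."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(1, n + 1)]

def rs_recover(points):
    """Lagrange-interpolate any k surviving (x, y) pairs back to the message."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1                    # the polynomial that is 1 at xi...
        for j, (xj, _) in enumerate(points):     # ...and 0 at every other xj
            if j == i:
                continue
            new = [0] * (len(basis) + 1)
            for d, b in enumerate(basis):        # multiply basis by (x - xj)
                new[d] = (new[d] - xj * b) % P
                new[d + 1] = (new[d + 1] + b) % P
            basis = new
            denom = denom * (xi - xj) % P
        inv = pow(denom, P - 2, P)               # modular inverse via Fermat
        for d, b in enumerate(basis):
            coeffs[d] = (coeffs[d] + yi * b * inv) % P
    return coeffs

msg = [42, 7, 19]                                # k = 3 message symbols
codeword = rs_encode(msg, 6)                     # n = 6 transmitted values
survivors = [(x, codeword[x - 1]) for x in (2, 4, 5)]  # any 3 suffice
print(rs_recover(survivors) == msg)              # True
```

Since any k of the n values determine the polynomial, up to n − k lost symbols do no harm; coping with corrupted rather than missing values takes a cleverer decoder, which is where the real Reed-Solomon machinery comes in.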
 
Nearly 80 years after Shannon's proposal, error-correcting codes are ubiquitous in digital communication systems. Meanwhile, computer scientists continue to study the theory underlying more exotic codes.
 

What's New and Noteworthy
Researchers have long been especially interested in codes with a property called local testability. In a locally testable code, you can detect an error in any bit by checking the coded message in just a handful of places, no matter how the errors are distributed throughout the message. For decades, computer scientists believed that local testability would always come at the cost of efficiency. Then, in 2021, five researchers finally figured out how to build an ideal locally testable code. Mordechai Rorvig covered that breakthrough and a mathematically equivalent result by two researchers studying error correction in quantum computers.
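
The classic small example of local testability is the Hadamard code, which is wildly inefficient (a k-bit message becomes a 2^k-bit codeword) but can be spot-checked with just three queries per trial. The sketch below is our own illustration of that three-query test, not the construction from the 2021 breakthrough; the message length and trial count are arbitrary choices.

```python
import random

K = 4  # message length in bits; the codeword has 2**K entries

def hadamard_encode(m):
    """Entry x of the codeword is the inner product <m, x> over GF(2),
    i.e. the parity of the bitwise AND of m and x."""
    return [bin(m & x).count("1") % 2 for x in range(2 ** K)]

def looks_valid(word, trials=200):
    """Each trial reads just three positions: every true codeword
    satisfies word[x] ^ word[y] == word[x ^ y] for all x and y."""
    for _ in range(trials):
        x = random.randrange(len(word))
        y = random.randrange(len(word))
        if word[x] ^ word[y] != word[x ^ y]:
            return False
    return True

code = hadamard_encode(0b1011)
print(looks_valid(code))           # True: intact codewords always pass
damaged = code.copy()
for i in random.sample(range(len(damaged)), 2):
    damaged[i] ^= 1                # flip a couple of entries
print(looks_valid(damaged))        # almost surely False
```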
 
Locally testable codes make it easy to spot errors, but not necessarily to fix them. Researchers have also studied locally correctable codes, in which it's possible to fix an error in any bit using just a few other bits. In a January article, I compared this process to recovering any page torn out of a book by just glancing at a few others. That may sound too good to be true, and unfortunately it is: My article was about two researchers who proved that any code with the most extreme version of local correctability must be very inefficient.
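
The toy Hadamard code from the previous sketch happens to be locally correctable too, and the trick fits in a few lines: to read any one bit, query two random positions whose indices XOR to that bit's index, combine them, and take a majority over a handful of repetitions. Again, this is our own illustration, with arbitrary parameters:

```python
import random

K = 4  # the same toy Hadamard code as in the previous sketch

def hadamard_encode(m):
    return [bin(m & x).count("1") % 2 for x in range(2 ** K)]

def local_correct(word, x, trials=25):
    """Estimate the bit at position x from a few two-query reads:
    word[y] ^ word[x ^ y] equals the true bit whenever both queries
    land on clean entries, so a majority over random y usually wins."""
    votes = sum(word[y] ^ word[x ^ y]
                for y in (random.randrange(len(word))
                          for _ in range(trials)))
    return int(votes > trials // 2)

code = hadamard_encode(0b1011)
damaged = code.copy()
for i in random.sample(range(len(damaged)), 2):
    damaged[i] ^= 1                         # lose two entries to noise
print(all(local_correct(damaged, x) == code[x]
          for x in range(len(code))))       # almost always True
```

The Hadamard code's exponential blowup is no accident: for two-query codes like this one, that steep price in efficiency is known to be unavoidable, in the spirit of the result described above.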
 
Error-correcting codes also crop up in areas of mathematics that seem far removed from computer science. Consider the problem of how to cram tennis balls into a box to minimize the amount of wasted space. Higher-dimensional versions of this "sphere-packing" problem are actually intimately tied to error correction. Natalie Wolchover unpacked that connection in a 2013 article. And in 2017, Kevin Hartnett described how error correction relates to another question in high-dimensional geometry.
 
Such connections to other branches of math and computer science appear often in the study of error correction. "Over time, we have realized that codes have a life beyond communication," the theoretical computer scientist Madhu Sudan told me. "We're starting to understand what this playground looks like."

AROUND THE WEB

Scientific American published a blog post by Evelyn Lamb exploring how ideas from error correction manifest in everyday verbal communication.

The 3Blue1Brown channel on YouTube has a great explainer about the first practical error-correcting codes, invented by the computer scientist Richard Hamming in 1950.

The Simons Institute for the Theory of Computing hosted a fun and engaging public lecture about error correction by the theoretical computer scientist Mary Wootters.

The Simons Institute for the Theory of Computing was established with a grant from the Simons Foundation, which also funds this editorially independent publication. Simons Foundation funding decisions have no influence on Quanta's coverage.


Copyright © 2024 Quanta Magazine, an editorially independent division of Simons Foundation
