Thirty years ago, researchers discovered that harnessing the laws of quantum physics could enable a new kind of computer. Ordinary classical computers, like the device you're using right now, represent information using discrete bits, each of which can be in one of two possible states at any given time. Their quantum counterparts instead work with quantum bits, or qubits, which can also occupy peculiar combinations of two distinct states. This extra flexibility isn't always helpful, but for certain important problems, like factoring large numbers, it can yield a dramatic speedup over the best classical algorithms.

To date, however, computer scientists have had only limited success in realizing the potential of quantum computing. Part of the problem is that the desired computational speedups are hard to achieve in practice: they rely on delicate quantum effects that are easily disrupted by noise leaking in from a quantum computer's environment. Researchers are working to implement elaborate error-correction procedures that will eliminate this noise, but making those procedures work will require far larger machines than the ones built so far. An unexpected problem, though, is that for many potential applications, classical computers have proved tough to beat. A high-profile result from last summer offers a case in point.

What's New and Noteworthy

In June 2023, the quantum computing team at IBM used a prototype quantum computer to solve a tricky physics problem about the behavior of magnetic materials. It was an impressive technical achievement, enabled by new techniques for mitigating errors. The IBM team also compared their quantum calculation to a classical simulation running on a supercomputer, which took much longer to reach an answer and became less accurate as the problem grew progressively more complex. Classical computers, they claimed, wouldn't be able to keep up.

That claim didn't survive long.
Two weeks later, an independent team demonstrated a fast classical simulation that ran on a laptop, and six other classical simulations using different techniques soon followed.

If last summer's IBM saga sounds familiar, it's probably because a similar sequence of events unfolded in 2019, when Google announced that it had achieved "quantum supremacy," a symbolic milestone marking the first time a quantum computer accomplished a task beyond the reach of any classical computer. Like IBM's more recent work, Google's result was an experimental demonstration showcasing the power of present-day quantum computers, which are too error-prone for most practical applications. In 2019, though, it was IBM arguing that results like Google's could also be achieved with classical simulations.

Most researchers expect that experimental evidence for a quantum advantage will become far less ambiguous as quantum computing technology improves. But even on a theoretical level, not all proposed quantum speedups have held up to scrutiny.

In 2018, an 18-year-old named Ewin Tang developed an ultra-fast classical algorithm for a version of the problem that streaming services must solve when recommending movies to their users. Tang's result grew out of her attempts to prove that no classical algorithm could match the speed of a recently discovered quantum algorithm for the problem. Instead, by adapting some aspects of the quantum algorithm to classical computation, she proved just the opposite.

The recommendation problem that Tang tackled was one of many machine learning problems for which researchers have sought a quantum speed boost, and most of those searches have come up short. Recently, though, researchers have approached the intersection of machine learning and quantum computing from the opposite direction.
Several papers have shown that classical machine learning algorithms can be surprisingly good at inferring many properties of quantum systems when paired with the right kind of quantum data. It's not yet clear how much quantum complexity this hybrid quantum-classical approach can capture, but early results have been very promising.

The mixed results from these various efforts don't mean that quantum computing researchers should despair. There's still strong theoretical evidence for a quantum advantage in tasks like factoring, and researchers continue to discover new tasks at which quantum computers excel (though these are often abstract problems with no practical application). Quanta covered one prime example in 2022. But it's good to remember that even as researchers plumb the mysteries of quantum computation, plain old classical computers are still full of surprises.