Take, for example, Albert Einstein, who passed away in 1955, 60 years before his equations’ most stunning consequence was confirmed: Space-time has periodic ripples — gravitational waves — that can carry energy across billions of light-years.

Since that September 2015 black hole collision, the Laser Interferometer Gravitational-Wave Observatory (LIGO) team has reported five more events (a sixth fell just short of the standard of significance). But the LIGO data is still virgin territory. It is an entirely new way of decoding the universe, and physicists must develop methods of data analysis along with the measurements.

It’s not a simple task. Measuring gravitational waves is not the kind of discovery you make by accident. But now that they have the data, physicists have been able to extract insights about the astrophysics of black holes and neutron stars, including their location, composition and masses. They’ve measured the expansion of the universe and made new precision tests of Einstein’s general theory of relativity. The theory has passed all tests — so far.

But the same measurements that have so spectacularly confirmed Einstein’s theory could also, perhaps, reveal where it goes wrong.

Physicists know that general relativity breaks down close to a black hole’s center. Yet the center of a black hole is, famously, a place where we can never look. It’s protected by the black hole’s horizon — the surface surrounding the black hole from which light can never escape. In general relativity, the black hole horizon has no substance; it poses no obstacle. The black hole simply swallows whatever dares to pass the horizon.

Most physicists believe that general relativity correctly describes the horizons of black holes. Yet some have argued that contradictions between general relativity and quantum theory mean that something else could be going on. In particular, the claim that black holes are surrounded by a “firewall,” though controversial, has spurred work on alternative descriptions of the horizon.

If the horizon of a black hole is obstructed by something like a firewall, then the horizon could potentially reflect gravitational waves. If that were so, LIGO should see evidence of these modifications. In particular, a collision between two black holes should produce an echo.

That’s the basic idea put forward by two independent groups of physicists, one led by Vítor Cardoso and the other by Niayesh Afshordi. Using simple models for a horizon with substance, the researchers showed that some of the gravitational waves emitted by a black hole collision should reflect back toward the black hole’s center. The waves would then reflect outwards again, where some would then once again be reflected at the horizon. The black hole would act like a resonant cavity with a semitransparent mirror at one end. It would emit periodic signals with decreasing amplitude. It would echo.

So much for the theory. What about the data? In late 2016, Afshordi and collaborators applied a custom-designed analysis to the publicly available LIGO data and looked for evidence of echoes. Amazingly enough, they found echoes just where they sought them. The signal was not highly significant — the researchers estimated a 1 in 100 chance that the signal was just noise — but a signal nevertheless.

Theirs was a big claim, a daring claim. If correct, it would be evidence for the failure of Einstein’s theory.

A few weeks after Afshordi’s group published their paper, members of the LIGO collaboration put out a reply, raising concerns about Afshordi’s analysis. The LIGO team then set out to do their own study. But large collaborations work slowly, and so it took more than a year until they were able to finish the analysis and get the collaboration’s approval to publish the work.

The LIGO analysis is now available. The researchers found the echo, but at a lower statistical significance than before. They concluded that there’s a 1 in 50 chance that the echo is merely noise. Moreover, the study found the strongest evidence for an echo in one particular event — the event that itself has the lowest significance. When they remove this event from the sample, the probability that the echo isn’t real goes up to almost 20 percent.

For this data analysis to be doable at all, the physicists must make assumptions about the signal that they search for. Inspired by Afshordi’s model, researchers assumed that the echoes should come at regular intervals, that they decay exponentially, and that they remain unmodified (aside from the decrease of amplitude). In many regards, they’re searching for the simplest possible echo.
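The structure of such a search template is easy to sketch. The toy code below builds the simplest echo the text describes: damped, delayed copies of an initial burst, repeated at a fixed interval and otherwise unmodified. The function name and every parameter value are invented for illustration; this is a sketch of the template's shape, not the collaboration's actual analysis code.

```python
# A minimal sketch of the simplest echo template: copies of a seed
# waveform repeated at a fixed interval, each damped by a constant
# factor, and otherwise unmodified. All parameters are illustrative.

import math

def echo_template(seed, n_echoes, gap, damping):
    """Append n_echoes delayed, damped copies of `seed`.

    seed    -- list of samples for the initial burst
    gap     -- number of silent samples between echoes
    damping -- amplitude factor applied at each reflection (< 1)
    """
    signal = list(seed)
    amplitude = 1.0
    for _ in range(n_echoes):
        amplitude *= damping
        signal.extend([0.0] * gap)
        signal.extend(amplitude * x for x in seed)
    return signal

# A toy "ringdown" burst: a damped sine.
burst = [math.exp(-0.1 * t) * math.sin(t) for t in range(30)]
template = echo_template(burst, n_echoes=3, gap=20, damping=0.5)

# Each successive echo peaks at half the previous amplitude.
peak = max(abs(x) for x in burst)
assert abs(max(abs(x) for x in template[50:100]) - 0.5 * peak) < 1e-9
```

Searching then amounts to fitting a handful of such parameters (interval, damping, start time) against the residual data after the main merger signal.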

The theorists could now review the data analysis and develop hypotheses that fit the data better. But reanalyzing the same data over and over again carries a big risk: Instead of developing a better theory, they could merely find a way to better amplify noise.

The more types of echoes they look for, the more likely they are to find something. But these repeated attempts will render measures of statistical significance unreliable.

The only way to overcome this impasse is fresh data. It will take many more iterations of this exchange between theory and experiment before the case can be settled.

So far, both the experimentalists and the theorists have found the exchange fruitful. “We are going ahead full force with modeling the echoes theoretically, and finding better ways to search for them,” said Afshordi, the author of the original study. He also pointed out that another group has found evidence for echoes in the LIGO data, claiming a less than 1 percent risk of a false positive.

Meanwhile Ofek Birnholtz, a researcher with the LIGO collaboration, said that “there have certainly been tensions,” but the idea that black holes echo is “without a doubt worth looking further into.” A search for black hole echoes has become one of the official goals of the LIGO Scientific Collaboration.

We all dream the same dream, here in theoretical physics.

The most popular narrative puts oxygen front and center. The geological record shows a clear link, albeit an often subtle and complicated one, between rises in oxygen levels and early animal evolution. As *Quanta* reported earlier this month, many researchers argue that this suggests low oxygen availability had been holding greater complexity at bay — that greater amounts of oxygen were needed for energy-demanding processes like movement, predation and the development of novel body plans with intricate morphologies.

“It’s a very attractive, intuitive explanation,” said Nicholas Butterfield, a paleobiologist at the University of Cambridge. “And it’s wrong.”

Butterfield — “a lone voice in the wilderness,” he calls himself — has what many others might consider an unusual take on the oxygen story. He’s essentially turned the idea on its head. According to his theory, changes in environmental conditions weren’t the cause, but rather the consequence, of animals migrating and perturbing their surroundings. “We have to appreciate that animals have a powerful, powerful impact on the carbon cycle and on how everything goes around,” he said.

In a paper published in the January issue of *Geobiology*, Butterfield braided fluid dynamics and ecology to present his case for animals driving oxygenation instead of the other way around. First, he argued, if there was enough oxygen to power unicellular eukaryotes 1.6 billion years ago — which was indeed the case — then there would have been enough to run a whole assortment of animals. He believes early multicellular organisms would have consisted of flagellated cells moving in unison, collectively whipping their appendages to create currents that would have made it easier for them to obtain dissolved oxygen. “I make the case that if there’s enough oxygen to run a single-celled eukaryote, there’s enough oxygen to run a fish,” Butterfield said. “So oxygen limitation cannot be invoked to explain the billion-year delay in the evolution of animals.”

Instead, his hypothesis focuses on diurnal vertical migration, a daily process during which sundry organisms, ranging in size and complexity from zooplankton and sponges to fish and squids, migrate between shallow and deeper waters to find food and avoid predators. By feeding up above and metabolizing down below, the animals scrub and help ventilate the ocean, raising oxygen concentrations at the surface while driving anoxic regions to greater depths. This redistribution of oxygen would also have improved the transparency of the water column, allowing light to penetrate farther down and escalating predators’ reliance on vision at deeper and deeper levels when hunting. The subsequent evolution of larger, deeper-diving visual predators would then have pushed the “oxygen minimum zones” to even lower depths, creating a feedback loop.

Eventually, this cascading interplay between animals’ inadvertent re-engineering of ocean structure and their adaptive responses to those changes reached a tipping point. “The system went critical,” in Butterfield’s words, resulting in the sudden eruption of animal diversity and complexity during the Cambrian.

The delayed appearance of animals in the ocean was therefore not caused by a lack of oxygen, according to Butterfield, but rather because blind Darwinian evolution needed time to arrive at that tipping point. “The gene regulatory network to build an animal is the most complex algorithm that evolution has ever produced,” he said. “And it’s only ever happened once, it’s only ever happened once in land plants,” which he points out are the only other lineage of organisms to have derived differentiated tissues, organs and organ systems. “And that took even longer. It followed the evolution of animals by another 100 million years.”

Not everyone is convinced. Timothy Lyons, a geologist at the University of California, Riverside, thinks that multiple independent lines of evidence point to oxygen in the environment as the trigger for the evolutionary cascade Butterfield describes. For example, most major extinction events were tied to low oxygen, he said, and oxygen levels fluctuated throughout the time leading up to the Cambrian (as well as in later eras). Those periods of lower oxygen “planted the seeds” for innovations that allowed certain organisms to take advantage of oxygen more efficiently. When oxygen later rebounded, natural selection would have favored those adaptations and enabled animals with them to blossom and diversify.

Moreover, Lyons and Charles Diamond, a graduate student in Lyons’ lab, find key pieces of evidence to counter Butterfield’s story. They have identified other conditions, not attributable to animals, that would have caused increases in oxygen at exactly the same time as the rapid animal diversification events Butterfield cited. An enormous variety of large fish emerged later, for example, during the Devonian Period (the “Age of Fishes” that started about 419 million years ago), when trees and other vascular plants arose on the continents. Those land plants by themselves greatly increased the amount of oxygen in both the atmosphere and the ocean, Lyons and Diamond said. The timing of the two events casts doubt on Butterfield’s claim that it was newly evolved fish causing the rise in oxygen and not vice versa, Diamond added. “Otherwise, it would have been too much of a coincidence.”

Butterfield disagrees. “Yes, the rise of vascular land plants may have impacted oxygen availability. That’s the standard textbook view,” he wrote in an email. “But it’s based on a bunch of unestablished assumptions” — such as that atmospheric oxygen was previously too low (he thinks the proxy-based measurements of atmospheric oxygen are intrinsically flawed, and that oxygen could have been much higher than estimated) and that it would have been the limiting factor in how large fish evolved. “I am arguing that none of these hold water.”

Notwithstanding those disagreements, Lyons and Diamond do find Butterfield’s ideas — that the evolution of such great complexity was the result of intrinsic biological development, what Butterfield has called an “evolutionary random walk” — to be much more feasible in the case of land plants. Their emergence “couldn’t have been a response to oxygen or carbon dioxide,” Lyons said. But he and Diamond don’t think that explanation can be fairly applied to animals.

For now, Butterfield wants to obtain further support for his theory by looking at how modern extinctions affect vertical migration. As the largest, deepest-diving predators are wiped out, the oxygen minimum zones should rise and precipitate further extinctions, he said. That’s something he can explore in modern oceans, as global climate change wreaks havoc on marine ecosystems.

As for what may have happened millions of years ago during the Cambrian — “Well, at this point the relationship between oxygen and animals is clear,” Lyons said, “but it goes back to the classic chicken-or-the-egg argument.”

Langlands, 81, an emeritus professor at the Institute for Advanced Study in Princeton, New Jersey, is the progenitor of the “Langlands program,” which explores a deep connection between two pillars of modern mathematics: number theory, which studies arithmetic relationships between numbers, and analysis, which is an advanced form of calculus. The link has far-reaching consequences that mathematicians have used to answer centuries-old questions about the properties of prime numbers.

Langlands first articulated his vision for the program in 1967 — when he was 30 — in a letter to the famed mathematician André Weil. He opened the 17-page missive with a now-legendary stroke of modesty: “If you are willing to read it as pure speculation, I would appreciate that,” he wrote. “If not — I am sure you have a waste basket handy.”

Since then, generations of mathematicians have taken up and expanded upon his vision. The Langlands program now ranges over so many different fields that it is often referred to as the search for a “grand unified theory” of mathematics.

“It’s revolutionary, I think, as far as the history of mathematics is concerned,” said James Arthur, a mathematician at the University of Toronto and former student of Langlands’.

Mathematicians have always been interested in finding patterns in prime numbers — those numbers that are divisible only by one and themselves. Primes are like the atomic elements of number theory, the fundamental pieces from which the study of arithmetic is built. There are an infinite number of them, and they appear to be scattered randomly among all numbers. To find patterns in primes — like how frequently they occur (which is the subject of the famous Riemann hypothesis) — it’s necessary to relate them to something else. Seen correctly, the primes act like a cipher, which turns into a beautiful message when read through the right key.

“They look like random accidents, but especially through the Langlands program, it’s turning out they have an extremely complex structure that relates them to all sorts of other things,” said Arthur.

One question about the structure of primes is which primes can be expressed as a sum of two squares. The first few examples include:

5, a prime number that equals 2² + 1²,

13, which equals 3² + 2², and

29, which equals 5² + 2².

In the 17th century, number theorists discovered that all primes that can be expressed as a sum of two squares share another property: They leave a remainder of 1 when divided by 4. The work began to reveal a hidden structure to the primes. Then in the late 18th century, Carl Friedrich Gauss generalized this surprising link, formulating a “reciprocity” law that linked certain primes (those that are a sum of two squares) to an identifying characteristic (when divided by 4, they leave a remainder of 1).
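This pattern is easy to verify computationally. The short sketch below (the helper names are my own) checks every odd prime under 200: those leaving remainder 1 when divided by 4 always split into two squares, and those leaving remainder 3 never do.

```python
# Check the two-square pattern: every odd prime p with p % 4 == 1
# can be written as a^2 + b^2, and primes with p % 4 == 3 never can.
# (2 = 1^2 + 1^2 is the lone even prime.)

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def two_square_decomposition(p):
    """Return (a, b) with a^2 + b^2 == p, or None if no such pair exists."""
    a = 0
    while a * a <= p:
        b = int((p - a * a) ** 0.5)
        if a * a + b * b == p:
            return (a, b)
        a += 1
    return None

for p in (n for n in range(3, 200) if is_prime(n)):
    decomposition = two_square_decomposition(p)
    if p % 4 == 1:
        assert decomposition is not None  # remainder 1: sum of two squares
    else:
        assert decomposition is None      # remainder 3: never a sum of two squares

print(two_square_decomposition(29))  # prints (2, 5), since 29 = 2² + 5²
```

The brute-force search suffices here; proving that the pattern holds for all primes is the substance of Fermat's theorem on sums of two squares.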

In his letter, Langlands proposed a vast extension of the kind of reciprocity law Gauss had discovered. Gauss’s work applied to quadratic equations — those with exponents no higher than the number 2. Langlands suggested that the prime numbers encoded in higher-degree equations (like cubic and quartic equations) should be in a reciprocity relationship with the far-off mathematical land of harmonic analysis, which grows out of calculus and is frequently used to solve problems in physics.

For example, scientists in the 19th century were surprised to discover that when they looked at starlight through a prism, they didn’t find a continuous spectrum of colors. Instead, the spectrum was interrupted here and there by black lines, now called absorption spectra, where the light was missing. Eventually the scientists realized that the missing light had been absorbed by elements in the stars. This discovery provided solid evidence that the stars and our planet are made from the same material.

At the same time, the spectral lines became objects of mathematical interest. The missing wavelengths gave a sequence of numbers — the frequencies of the absent light. Mathematicians could study those numbers through analysis. Or they could work on wholly new kinds of equations — inspired, perhaps, by questions in physics, but arising purely from analysis and geometry. Based on those new equations, they could study a parallel notion of absorption spectra.

The Langlands program relates prime number values of polynomial equations to spectra from the differential equations studied in analysis and geometry. It says that there should be a reciprocity relationship between the two. As a result, you should be able to characterize which prime numbers appear in specific settings by understanding which numbers appear in the corresponding spectra.

The two sets of numbers can’t be compared directly, though. They each have to be translated through different kinds of mathematical objects. In particular, Galois representations, which are based on primes, should pair with objects called automorphic forms, which contain the relevant spectra.

Today mathematicians working in the Langlands program are trying to prove that relationship and many other related conjectures. At the same time, they’re using Langlands-type connections to solve problems that would otherwise seem out of reach. The most celebrated result in this regard is Andrew Wiles’s proof in 1995 of Fermat’s Last Theorem. Wiles’s proof depended in part on exactly the type of relationship between number theory and analysis that Langlands had predicted decades earlier.

The Langlands program has expanded considerably over the years. Yet when you push aside all the complex machinery that’s been created to realize Langlands’ vision, you see that the whole massive enterprise remains motivated by some of the most basic of mathematical concerns.

“Understanding the properties of which primes occur in an equation basically amounts to a fundamental classification of the arithmetic world,” said Arthur.

Never mind that Erdős doubted God’s very existence. “You don’t have to believe in God, but you should believe in The Book,” Erdős explained to other mathematicians.

In 1994, during conversations with Erdős at the Oberwolfach Research Institute for Mathematics in Germany, the mathematician Martin Aigner came up with an idea: Why not actually try to make God’s Book — or at least an earthly shadow of it? Aigner enlisted fellow mathematician Günter Ziegler, and the two started collecting examples of exceptionally beautiful proofs, with enthusiastic contributions from Erdős himself. The resulting volume, *Proofs from THE BOOK*, was published in 1998, sadly too late for Erdős to see it — he had died about two years after the project commenced, at age 83.

“Many of the proofs trace directly back to him, or were initiated by his supreme insight in asking the right question or in making the right conjecture,” Aigner and Ziegler, who are now both professors at the Free University of Berlin, write in the preface.

The book, which has been called “a glimpse of mathematical heaven,” presents proofs of dozens of theorems from number theory, geometry, analysis, combinatorics and graph theory. Over the two decades since it first appeared, it has gone through five editions, each with new proofs added, and has been translated into 13 languages.

In January, Ziegler traveled to San Diego for the Joint Mathematics Meetings, where he received (on his and Aigner’s behalf) the 2018 Steele Prize for Mathematical Exposition. “The density of elegant ideas per page is extraordinarily high,” the prize citation reads.

*Quanta Magazine* sat down with Ziegler at the meeting to discuss beautiful (and ugly) mathematics. The interview has been edited and condensed for clarity.

We’ve always shied away from trying to define what is a perfect proof. And I think that’s not only shyness, but actually, there is no definition and no uniform criterion. Of course, there are all these components of a beautiful proof. It can’t be too long; it has to be clear; there has to be a special idea; it might connect things that usually one wouldn’t think of as having any connection.

For some theorems, there are different perfect proofs for different types of readers. I mean, what is a proof? A proof, in the end, is something that convinces the reader of things being true. And whether the proof is understandable and beautiful depends not only on the proof but also on the reader: What do you know? What do you like? What do you find obvious?

These are things that are central in mathematics, so it’s important to understand them from many different angles. There are theorems that have several genuinely different proofs, and each proof tells you something different about the theorem and the structures. So, it’s really valuable to explore these proofs to understand how you can go beyond the original statement of the theorem.

An example comes to mind — which is not in our book but is very fundamental — Steinitz’s theorem for polyhedra. This says that if you have a planar graph (a network of vertices and edges in the plane) that stays connected if you remove one or two vertices, then there is a convex polyhedron that has exactly the same connectivity pattern. This is a theorem that has three entirely different types of proof — the “Steinitz-type” proof, the “rubber band” proof and the “circle packing” proof. And each of these three has variations.
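The connectivity hypothesis of Steinitz's theorem can be checked by brute force for small graphs. Here is a minimal sketch (the representation and helper names are my own, and planarity is assumed rather than tested) confirming that the cube's graph survives every one- and two-vertex deletion, as it must, since it is the connectivity pattern of an actual polyhedron.

```python
# Check Steinitz's connectivity condition on the graph of a cube:
# it stays connected after deleting any one or two vertices.
# (Planarity, the theorem's other hypothesis, is not tested here.)

from itertools import combinations

# Vertices of the cube as 3-bit strings; edges join strings
# differing in exactly one bit.
vertices = [format(i, "03b") for i in range(8)]
edges = {
    (u, v)
    for u in vertices
    for v in vertices
    if sum(a != b for a, b in zip(u, v)) == 1
}

def is_connected(nodes, edges):
    """Graph search restricted to the surviving vertices."""
    nodes = set(nodes)
    if not nodes:
        return True
    seen = {next(iter(nodes))}
    frontier = list(seen)
    while frontier:
        u = frontier.pop()
        for v in nodes:
            if v not in seen and ((u, v) in edges or (v, u) in edges):
                seen.add(v)
                frontier.append(v)
    return seen == nodes

# Remove every possible set of one or two vertices and re-check.
for k in (1, 2):
    for removed in combinations(vertices, k):
        survivors = [v for v in vertices if v not in removed]
        assert is_connected(survivors, edges)

print("cube graph survives all 1- and 2-vertex deletions")
```

Deleting three vertices can disconnect it — removing all three neighbors of a corner isolates that corner — so the cube graph is 3-connected but no more.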

Any of the Steinitz-type proofs will tell you not only that there is a polyhedron but also that there’s a polyhedron with integers for the coordinates of the vertices. And the circle packing proof tells you that there’s a polyhedron that has all its edges tangent to a sphere. You don’t get that from the Steinitz-type proof, or the other way around — the circle packing proof will not prove that you can do it with integer coordinates. So, having several proofs leads you to several ways to understand the situation beyond the original basic theorem.

I think it always depends on what you know and where you come from.

An example is László Lovász’s proof for the Kneser conjecture, which I think we put in the fourth edition. The Kneser conjecture was about a certain type of graph you can construct from the *k*-element subsets of an *n*-element set — you construct this graph where the *k*-element subsets are the vertices, and two *k*-element sets are connected by an edge if they don’t have any elements in common. And Kneser had asked, in 1955 or ’56, how many colors are required to color all the vertices if vertices that are connected must be different colors.

It’s rather easy to show that you can color this graph with *n* – 2*k* + 2 colors, but the problem was to show that fewer colors won’t do it. And so, it’s a graph coloring problem, but Lovász, in 1978, gave a proof that was a technical tour de force, that used a topological theorem, the Borsuk-Ulam theorem. And it was an amazing surprise — why should this topological tool prove a graph theoretic thing?
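The easy direction can be checked directly on the smallest interesting case, K(5, 2), which is the Petersen graph. The sketch below (the construction and names are illustrative) verifies the classic coloring: color a subset by its smallest element, lumping everything contained in the top few elements into one final color.

```python
# The Kneser graph K(5, 2): vertices are the 2-element subsets of
# {1, ..., 5}, with edges between disjoint subsets. This is the
# Petersen graph; the coloring below uses n - 2k + 2 = 3 colors.

from itertools import combinations

n, k = 5, 2
subsets = [frozenset(c) for c in combinations(range(1, n + 1), k)]
edges = [
    (s, t)
    for s, t in combinations(subsets, 2)
    if not (s & t)  # disjoint subsets are adjacent
]

def color(s):
    """Color a subset by its smallest element, with every subset of
    the top 2k - 1 elements lumped into one last color."""
    m = min(s)
    return m if m <= n - 2 * k + 1 else n - 2 * k + 2

# Adjacent vertices (disjoint subsets) always get different colors:
# two subsets sharing the same small minimum element intersect, and
# any two 2-subsets of the 3-set {3, 4, 5} intersect as well.
assert all(color(s) != color(t) for s, t in edges)
assert len({color(s) for s in subsets}) == n - 2 * k + 2

print("proper coloring of K(5, 2) with", n - 2 * k + 2, "colors")
```

Showing that two colors can never suffice here is already nontrivial, and for general n and k the lower bound is exactly what required Lovász's topological argument.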

This turned into a whole industry of using topological tools to prove discrete mathematics theorems. And now it seems inevitable that you use these, and very natural and straightforward. It’s become routine, in a certain sense. But then, I think, it’s still valuable not to forget the original surprise.

I think there could be, but no human will ever find it.

We have these results from logic that say that there are theorems that are true and that have a proof, but they don’t have a short proof. It’s a logic statement. And so, why shouldn’t there be a proof in God’s Book that goes over a hundred pages, and on each of these hundred pages, makes a brilliant new observation — and so, in that sense, it’s really a proof from The Book?

On the other hand, we are always happy if we manage to prove something with one surprising idea, and proofs with two surprising ideas are even more magical but still harder to find. So a proof that is a hundred pages long and has a hundred surprising ideas — how should a human ever find it?

But I don’t know how the experts judge Andrew Wiles’ proof of Fermat’s Last Theorem. This is a hundred pages, or many hundred pages, depending on how much number theory you assume when you start. And my understanding is that there are lots of beautiful observations and ideas in there. Perhaps Wiles’ proof, with a few simplifications, is God’s proof for Fermat’s Last Theorem.

But it’s not a proof for the readers of our book, because it’s just beyond the scope, both in technical difficulty and layers of theory. By definition, a proof that eats more than 10 pages cannot be a proof for our book. God — if he exists — has more patience.

Paul Erdős referred to his own lectures as “preaching.” But he was an atheist. He called God the “Supreme Fascist.” I think it was more important to him to be funny and to tell stories — he didn’t preach anything religious. So, this story of God and his book was part of his storytelling routine.

It’s a powerful feeling. I remember these moments of beauty and excitement. And there’s a very powerful type of happiness that comes from it.

If I were a religious person, I would thank God for all this inspiration that I’m blessed to experience. As I’m not religious, for me, this God’s Book thing is a powerful story.

You know, the first step is to establish the theorem, so that you can say, “I worked hard. I got the proof. It’s 20 pages. It’s ugly. It’s lots of calculations, but it’s correct and it’s complete and I’m proud of it.”

If the result is interesting, then come the people who simplify it and put in extra ideas and make it more and more elegant and beautiful. And in the end you have, in some sense, the Book proof.

If you look at Lovász’s proof for the Kneser conjecture, people don’t read his paper anymore. It’s rather ugly, because Lovász didn’t know the topological tools at the time, so he had to reinvent a lot of things and put them together. And immediately after that, Imre Bárány had a second proof, which also used the Borsuk-Ulam theorem, and that was, I think, more elegant and more straightforward.

To do these short and surprising proofs, you need a lot of confidence. And one way to get the confidence is if you know the thing is true. If you know that something is true because so-and-so proved it, then you might also dare to say, “What would be the really nice and short and elegant way to establish this?” So, I think, in that sense, the ugly proofs have their role.

The third edition was perhaps the first time that we claimed that that’s it, that’s the final one. And, of course, we also claimed this in the preface of the fifth edition, but we’re currently working hard to finish the sixth edition.

When Martin Aigner talked to me about this plan to do the book, the idea was that this might be a nice project, and we’d get done with it, and that’s it. And with, I don’t know how you translate it into English, *jugendlicher Leichtsinn* — that’s sort of the foolery of someone being young — you think you can just do this book and then it’s done.

But it’s kept us busy from 1994 until now, with new editions and translations. Now Martin has retired, and I’ve just applied to be university president, and I think there will not be time and energy and opportunity to do more. The sixth edition will be the final one.

And bet they did. In 1991, Hawking and Kip Thorne bet Preskill that information that falls into a black hole gets destroyed and can never be retrieved. Called the black hole information paradox, this prospect follows from Hawking’s landmark 1974 discovery about black holes — regions of inescapable gravity, where space-time curves steeply toward a central point known as the singularity. Hawking had shown that black holes are not truly black. Quantum uncertainty causes them to radiate a small amount of heat, dubbed “Hawking radiation.” They lose mass in the process and ultimately evaporate away. This evaporation leads to a paradox: Anything that falls into a black hole will seemingly be lost forever, violating “unitarity” — a central principle of quantum mechanics that says the present always preserves information about the past.

Hawking and Thorne argued that the radiation emitted by a black hole would be too hopelessly scrambled to retrieve any useful information about what fell into it, even in principle. Preskill bet that information somehow escapes black holes, even though physicists would presumably need a complete theory of quantum gravity to understand the mechanism behind how this could happen.

Physicists thought they had resolved the paradox with the notion of black hole complementarity, proposed in the 1990s. According to this proposal, information that crosses the event horizon of a black hole both reflects back out and passes inside, never to escape. Because no single observer can ever be both inside and outside the black hole’s horizon, no one can witness both situations simultaneously, and no contradiction arises. The argument was sufficient to convince Hawking to concede the bet. During a July 2004 talk in Dublin, Ireland, he presented Preskill with the eighth edition of *Total Baseball: The Ultimate Baseball Encyclopedia*, “from which information can be retrieved at will.”

Thorne, however, refused to concede, and it seems he was right to do so. In 2012, a new twist on the paradox emerged. Nobody had explained precisely how information would get out of a black hole, and that lack of a specific mechanism inspired Joseph Polchinski and three colleagues to revisit the problem. Conventional wisdom had long held that once someone passed the event horizon, they would slowly be pulled apart by the extreme gravity as they fell toward the singularity. Polchinski and his co-authors argued that instead, in-falling observers would encounter a literal wall of fire at the event horizon, burning up before ever getting near the singularity.

At the heart of the firewall puzzle lies a conflict between three fundamental postulates. The first is the equivalence principle of Albert Einstein’s general theory of relativity: Because there’s no difference between acceleration due to gravity and the acceleration of a rocket, an astronaut named Alice shouldn’t feel anything amiss as she crosses a black hole horizon. The second is unitarity, which implies that information cannot be destroyed. Lastly, there’s locality, which holds that events happening at a particular point in space can only influence nearby points. This means that the laws of physics should work as expected far away from a black hole, even if they break down at some point within the black hole — either at the singularity or at the event horizon.

To resolve the paradox, one of the three postulates must be sacrificed, and nobody can agree on which one should get the axe. The simplest solution is to have the equivalence principle break down at the event horizon, thereby giving rise to a firewall. But several other possible solutions have been proposed in the ensuing years.

For instance, a few years before the firewalls paper, Samir Mathur, a string theorist at Ohio State University, raised similar issues with his notion of black hole fuzzballs. Fuzzballs aren’t empty pits, like traditional black holes. They are packed full of strings (the kind from string theory) and have a surface like a star or planet. They also emit heat in the form of radiation. The spectrum of that radiation, Mathur found, exactly matches the prediction for Hawking radiation. His “fuzzball conjecture” resolves the paradox by declaring it to be an illusion. How can information be lost beyond the event horizon if there is no event horizon?

Hawking himself weighed in on the firewall debate along similar lines by way of a two-page, equation-free paper posted to the scientific preprint site arxiv.org in late January 2014 — a summation of informal remarks he’d made via Skype for a small conference the previous spring. He proposed a rethinking of the event horizon. Instead of a definite line in the sky from which nothing could escape, he suggested there could be an “apparent horizon.” Information is only temporarily confined behind that horizon. The information eventually escapes, but in such a scrambled form that it can never be interpreted. He likened the task to weather forecasting: “One can’t predict the weather more than a few days in advance.”

In 2013, Leonard Susskind and Juan Maldacena, theoretical physicists at Stanford University and the Institute for Advanced Study, respectively, made a radical attempt to preserve locality that they dubbed “ER = EPR.” According to this idea, maybe what we think are faraway points in space-time aren’t that far away after all. Perhaps entanglement creates invisible microscopic wormholes connecting seemingly distant points. Shaped a bit like an octopus, such a wormhole would link the interior of the black hole directly to the Hawking radiation, so the particles still inside the hole would be directly connected to particles that escaped long ago, avoiding the need for information to pass through the event horizon.

Physicists have yet to reach a consensus on any one of these proposed solutions. It’s a tribute to Hawking’s unique genius that they continue to argue about the black hole information paradox so many decades after his work first suggested it.
